Planet MySQL

Node.js & MySQL CRUD — Database Connection & SQL Queries

MySQL is one of the most popular databases in the world, and Node.js is a complete platform for building server-side applications. Node.js is commonly used with NoSQL databases, but you can also use it with SQL-based databases like MySQL and Oracle. Check out Node.js Oracle CRUD Example: Database Connection & SQL Queries. In this tutorial, we'll see how you can use MySQL in Node.js by creating a connection and executing SQL queries to perform CRUD operations. We'll be looking at the node-mysql2 module for connecting to a MySQL server from your Node.js applications.

Using MySQL in Node.js

You can use MySQL in Node.js through various modules such as node-mysql or node-mysql2. Let's see how we can use node-mysql2 (a fast, node-mysql-compatible MySQL driver for Node.js) to connect to a database and perform CRUD operations against it. In a nutshell, these are the required steps to use MySQL in Node:

  1. Create a folder for your project and navigate inside it: mkdir node-mysql-example && cd node-mysql-example
  2. Add a package.json file using the npm init -y command
  3. Install the node-mysql2 module from npm using the npm install mysql2 --save command
  4. Create a server.js file and add the code below
  5. Run the application using node server.js

Creating a Node.js Project

Let's start by creating our Node.js project. First, create a folder for your project using the following command:

$ mkdir node-mysql-demo

Next, navigate inside your project's folder and create a package.json file:

$ cd node-mysql-demo
$ npm init -y

This will create a package.json with default values.

Installing the MySQL Driver for Node.js

After creating a project, you can install the node-mysql2 module using npm:

$ npm install mysql2 --save

This will add node-mysql2 to the node_modules folder of your project (which will be created if it doesn't exist) and add it to the dependencies array of the package.json file.

Connecting to your MySQL Database

In your project's folder, create a server.js file. Open it and add the following code to import mysql2 and use it to create a connection to your MySQL server:

const mysql = require('mysql2');

const connection = mysql.createConnection({
  host: 'localhost',
  user: 'root',
  password: 'password',
  database: 'test'
});

We are connecting to the localhost server using the root user, a password and a test database. Change the values accordingly for your system. For this to work you need to have MySQL installed on your system. If you don't, refer to the installation page on the official website.

Before doing anything else, we need to create a database and a table. You can use the mysql command in your terminal to create a database. First run the following command in your terminal:

$ mysql

Next, run this SQL instruction:

create database test;

Next, you need to create a table by adding the following code:

connection.query(`CREATE TABLE IF NOT EXISTS contacts(
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(50),
  email VARCHAR(100)
)`, function(err) {
  if (err) {
    console.log("Error!");
  }
});

Now that we have a database and a contacts table, let's see how to perform CRUD operations using SQL queries.

Performing CRUD Operations

CRUD stands for create, read, update and delete, and it refers to common operations that are used in most data-driven applications. You create data in the database tables using the INSERT statement. You read data from the database tables using the SELECT statement. You update data in the database tables using the UPDATE statement. You delete data from the database tables using the DELETE statement.
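The sections below use the callback API. As a side note, mysql2 also ships a promise wrapper, so the same operations can be written with async/await. Here is a minimal sketch, assuming the same connection settings and the contacts table created above:

// Minimal sketch using the mysql2 promise API (same database and
// contacts table as above; adjust the credentials for your system).
const mysql = require('mysql2/promise');

async function main() {
  const connection = await mysql.createConnection({
    host: 'localhost',
    user: 'root',
    password: 'password',
    database: 'test'
  });

  // Parameterized INSERT: the driver escapes the placeholder values.
  const [result] = await connection.execute(
    'INSERT INTO contacts (name, email) VALUES (?, ?)',
    ['name 001', 'name001@email.com']
  );
  console.log('Inserted id:', result.insertId);

  // Read back the rows.
  const [rows] = await connection.query('SELECT * FROM contacts');
  console.log(rows);

  await connection.end();
}

main().catch(console.error);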
Creating/Inserting Data

With node-mysql2, the SET ? placeholder expects an object whose keys are the column names:

connection.query('INSERT INTO contacts SET ?', { name: 'name 001', email: 'name001@email.com' }, (err, res) => {
  if (err) throw err;
});

Reading/Selecting Data

connection.query('SELECT * FROM contacts', (err, rows) => {
  if (err) throw err;
  console.log(rows);
});

The rows variable contains the returned rows from the database table.

Updating Data

connection.query(
  'UPDATE contacts SET email = ? WHERE id = ?',
  ['updated@email.com', 1],
  (err, result) => {
    if (err) throw err;
  }
);

Deleting Data

connection.query(
  'DELETE FROM contacts WHERE id = ?',
  [1],
  (err, result) => {
    if (err) throw err;
  }
);

Conclusion

In this tutorial, you have seen how you can use the node-mysql2 driver to open connections to MySQL databases in your Node.js applications, and you created a simple CRUD example that demonstrates how to perform basic create, read, update and delete operations via SQL SELECT, INSERT, UPDATE and DELETE statements.

Shinguz: FromDual Performance Monitor for MariaDB and MySQL 1.0.2 has been released

FromDual has the pleasure to announce the release of the new version 1.0.2 of its popular Database Performance Monitor for MariaDB, MySQL, Galera Cluster and Percona Server, fpmmm.

The new FromDual Performance Monitor for MariaDB and MySQL (fpmmm) can be downloaded from here. How to install and use fpmmm is documented in the fpmmm Installation Guide.

In the inconceivable case that you find a bug in the FromDual Performance Monitor for MariaDB and MySQL, please report it to the FromDual Bugtracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Monitoring as a Service (MaaS)

You do not want to set up your database monitoring yourself? No problem: choose our MariaDB and MySQL Monitoring as a Service (MaaS) program to save costs!

Upgrade from 1.0.x to 1.0.2

shell> cd /opt
shell> tar xf /download/fpmmm-1.0.2.tar.gz
shell> rm -f fpmmm
shell> ln -s fpmmm-1.0.2 fpmmm
Changes in FromDual Performance Monitor for MariaDB and MySQL 1.0.2

This release contains various bug fixes.

You can verify your current FromDual Performance Monitor for MariaDB and MySQL version with the following command:

shell> fpmmm --version
fpmmm agent
  • Server entropy probe added.
  • Processlist empty state is covered.
  • Processlist statements made more robust.
  • Error caught properly after query.
  • Branch for Ubuntu is different, fixed.
  • PHP Variable variables_order is included into program.
  • Fixed the documentation URL in file INSTALL.
  • Connection was not set to utf8. This is fixed now.
  • fprint error fixed.
  • Library myEnv.inc updated from MyEnv project.

fpmmm Templates
  • Backup template added.
  • SQL thread and IO thread errors made more verbose, and "running again" triggers implemented. Typo in slave template fixed.
  • Forks graph fixed, y axis starts from 0.

fpmmm agent installer
  • Error messages made more flexible.

For subscriptions of commercial use of fpmmm please get in contact with us.


MySQL InnoDB Cluster – howto install it from scratch

MySQL InnoDB Cluster is evolving very nicely. I realized that the MySQL Shell also improved a lot and that it has never been so easy to set up a cluster on 3 new nodes.

This is a video of the updated procedure on how to install MySQL InnoDB Cluster on rpm-based GNU/Linux distributions (Oracle Linux, RedHat, CentOS, Fedora, …).
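For a rough idea of what the procedure in the video boils down to, here is a minimal sketch of the MySQL Shell (JavaScript mode) calls typically involved. The admin account, host names and cluster name are illustrative placeholders; the video remains the full walkthrough:

// MySQL Shell (mysqlsh), JavaScript mode. The account, hosts and
// cluster name below are placeholders, not values from the video.
dba.configureInstance('clusteradmin@node1:3306'); // repeat for node2 and node3

shell.connect('clusteradmin@node1:3306');
var cluster = dba.createCluster('myCluster');

cluster.addInstance('clusteradmin@node2:3306');
cluster.addInstance('clusteradmin@node3:3306');

cluster.status(); // verify that all three nodes report ONLINE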

Super Saver Discount Ends 17 March for Percona Live 2019

Tutorials and initial sessions are set for the Percona Live Open Source Database Conference 2019, to be held May 28-30 at the Hyatt Regency in Austin, Texas! Percona Live 2019 is the premier open source database conference event for users of MySQL®, MariaDB®, MongoDB®, and PostgreSQL. It will feature 13 tracks presented over two days, plus a day of hands-on tutorials. Register now to enjoy our best Super Saver Registration rates which end March 17, 2019 at 11:30 p.m. PST.

Sample Sessions

Here is one item from each of our 13 tracks, samples from our full conference schedule.  Note too that many more great talks will be announced soon!

  1. MySQL®: The MySQL Query Optimizer Explained Through Optimizer Trace by Øystein Grøvlen of Alibaba Cloud.
  2. MariaDB®:  MariaDB Security Features and Best Practices by Robert Bindar of MariaDB Foundation.
  3. MongoDB®: MongoDB: Where Are We Going From Here? presented by David Murphy, Huawei
  4. PostgreSQL: A Deep Dive into PostgreSQL Indexing by Ibrar Ahmed, Percona
  5. Other Open Source Databases: ClickHouse Data Warehouse 101: The First Billion Rows by Alexander Zaitsev and Robert Hodges, Altinity
  6. Observability & Monitoring: Automated Database Monitoring at Uber with M3 and Prometheus by Rob Skillington and Richard Artoul, Uber
  7. Kubernetes: Complex Stateful Applications Made Easier with Kubernetes by Patrick Galbraith of Oracle MySQL
  8. Automation & AI: Databases at Scale, at Square by Emily Slocombe, Square
  9. Java Development for Open Source Databases: Introducing Java Profiling via Flame Graphs by Agustín Gallego, Percona
  10. Migration to Open Source Databases: Migrating between Proprietary and Open Source Database Technologies – Calculating your ROI by John Schultz, The Pythian Group
  11. Polyglot Persistence: A Tale of 8T (Transportable Tablespaces Vs Mysqldump) by Kristofer Grahn, Verisure AB
  12. Database Security & Compliance: MySQL Security and Standardization at PayPal by Stacy Yuan and Yashada Jadhav, Paypal Holdings Inc
  13. Business and Enterprise: MailChimp – Scale A MySQL Perspective by John Scott, MailChimp
Venue

Percona Live 2019 will be held at the downtown Hyatt Regency Austin Texas.  Located on the shores of Lady Bird Lake, it’s near water sports like kayaking, canoeing, stand-up paddling, and rowing. There are many food and historical sites nearby, such as the Texas Capitol, the LBJ Library, and Barton Springs Pool.  Book here for Percona’s conference room rate.

Sponsorships

Sponsors of Percona Live 2019 can interact with DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors, and entrepreneurs who typically attend Percona Live. Download our prospectus for more information.

Optimize Slow Queries with RDS Performance Insights & EverSQL

In today’s tech companies, engineers are expected to know and care more about the database and the performance of their application. To effectively face these challenging requirements, DBA, engineering and DevOps teams need the right tools to monitor, diagnose and resolve performance issues.

In this post, I’ll demonstrate how one can track down slow SQL queries and optimize them automatically, by utilizing the recent integration of EverSQL Query Optimizer with Amazon RDS Performance Insights, using EverSQL’s Chrome extension.

Monitoring and Optimizing Slow SQL Queries

As an AWS user, you probably already have Performance Insights enabled (as it's the default configuration). When navigating to the Performance Insights dashboard, the first thing you'll notice is the database load chart. Right below that, you'll see a list of SQL queries sorted by the amount of load each query is generating on your server. The load each query generates is measured and visualized using the Average Active Sessions (AAS) metric.


In this post, we’ll use the StackOverflow database to demonstrate the optimization process. The SQL query at the top of the list seems to be very I/O intensive (as indicated by the light blue color). The query is scanning StackOverflow’s comments table, looking to fetch the ones written at the first day of 2019, or the ones with very high scores (>500). The table contains roughly 6.5 million records.

SELECT * FROM comments c WHERE DATE(c.CreationDate) = '2019-01-01' OR c.score > 500 ORDER BY c.id DESC LIMIT 1000;

The execution duration of the original query is 38 seconds.

If you have EverSQL’s Chrome extension installed, you’ll get an additional detailed report within the same page on RDS Performance Insights, explaining how each of the SQL queries can be optimized. EverSQL will automatically present the optimized query and suggest the optimal indexes you should create, to improve the query’s performance.


After applying the recommendations provided by EverSQL, the execution duration decreased to roughly 20ms.

This is a short glance at how the integration looks and how the recommendations appear on Performance Insights:

Optimization Internals

To better understand the internals of this optimization process, let’s start from the beginning, by looking at the comments table’s structure and the query’s execution plan:

CREATE TABLE `comments` (
  `Id` int(11) NOT NULL,
  `CreationDate` datetime NOT NULL,
  `PostId` int(11) NOT NULL,
  `Score` int(11) DEFAULT NULL,
  `Text` text CHARACTER SET utf8 NOT NULL,
  `UserId` int(11) DEFAULT NULL,
  PRIMARY KEY (`Id`),
  KEY `comments_idx_score` (`Score`),
  KEY `comments_idx_creationdate` (`CreationDate`),
  KEY `comments_idx_creationdate_score` (`CreationDate`,`Score`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;


As you can see, MySQL didn’t choose to use any of the indexes containing the Score or CreationDate columns. This is why:

  1. The column CreationDate is hidden inside the DATE function, which prevents MySQL from using an index on that column. To overcome this, we can transform the original condition into a date range check (see the BETWEEN clause in the query below).
  2. There are two conditions in the WHERE clause, with an OR operator between them. In many similar cases, we saw that MySQL can't / won't use composite indexes effectively, nor use index_merge. Therefore, splitting the query into two parts combined with a UNION will allow MySQL to use the relevant indexes.

So after applying the query transformations:

SELECT * FROM (
  ( SELECT * FROM comments c
    WHERE c.score > 500
    ORDER BY c.score DESC
    LIMIT 1000 )
  UNION DISTINCT
  ( SELECT * FROM comments c
    WHERE c.CreationDate BETWEEN '2019-01-01 00:00:00' AND '2019-01-01 23:59:59'
    ORDER BY c.score DESC
    LIMIT 1000 )
) AS union1
ORDER BY union1.score DESC
LIMIT 1000;

As you can see from the new execution plan, MySQL now uses the indexes for both columns.

As mentioned above, the optimized query’s execution duration is roughly 20ms, which is significantly faster than the original one.

Wrapping up

If you’re using AWS RDS and would like to optimize SQL queries directly from Performance Insights, all you have to do is install EverSQL’s Chrome extension, to integrate EverSQL with RDS Performance Insights.
EverSQL currently supports MySQL and MariaDB, though support for more database types is coming up soon. Feel free to send us your feedback at hello@eversql.com.

Live MySQL Slave Rebuild with Percona Toolkit

Recently, we had an edge case where a MySQL slave went out-of-sync but it couldn't be rebuilt from scratch. The slave was acting as a master server to some applications, and data was being written to it. It was a design error, and this is not recommended, but it happened. So how do you synchronize the data in this circumstance? This blog post describes the steps taken to recover from this situation. The tools used to recover the slave were pt-slave-restart, pt-table-checksum, pt-table-sync and mysqldiff.

Scenario

To illustrate this situation, we built a master/slave configuration with sysbench running on the master server to simulate a general application workload. The environment was set up with Percona Server 5.7.24-26 and sysbench 1.0.16.

Below are the sysbench commands to prepare and simulate the workload:

# Create Data
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox \
 --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 \
 --table_size=5000 --tables=100 --threads=1 --events=0 --time=60 \
 --rand-type=uniform /usr/share/sysbench/oltp_read_only.lua prepare

# Simulate Workload
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox \
 --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 \
 --table_size=5000 --tables=100 --threads=10 --events=0 --time=6000 \
 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua --report-interval=1 run

With the environment set, the slave server was stopped, and some operations to desynchronize the slave were performed to reproduce the problem.

Fixing the issue

With the slave desynchronized, replication was restarted. Immediately, the error below appeared:

Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

To recover the slave from this error, we had to point the slave to an existing binary log with a valid binary log position. To get a valid binary log position, the command shown below had to be executed on the master:

master [localhost] {msandbox} ((none)) > show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000007
         Position: 218443612
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.01 sec)

Then, a CHANGE MASTER command was run on the slave:

slave1 [localhost] {msandbox} (test) > change master to master_log_file='mysql-bin.000007', MASTER_LOG_POS=218443612;
Query OK, 0 rows affected (0.00 sec)

slave1 [localhost] {msandbox} (test) > start slave;
Query OK, 0 rows affected (0.00 sec)

Now the slave had a valid binary log file to read, but since it was inconsistent, it hit another error:

Last_SQL_Errno: 1032
Last_SQL_Error: Could not execute Delete_rows event on table test.sbtest8; Can't find record in 'sbtest8', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000005, end_log_pos 326822861

Working past the errors

Before fixing the inconsistencies, it was necessary to keep replication running and to skip the errors. For this, the pt-slave-restart tool was used. The tool needs to be run on the slave server:

pt-slave-restart --user root --socket=/tmp/mysql_sandbox45008.sock --ask-pass

The tool skips errors and starts the replication threads. Below is an example of the output of the pt-slave-restart:

$ pt-slave-restart --user root --socket=/tmp/mysql_sandbox45009.sock --ask-pass
Enter password:
2019-02-22T14:18:01 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007 1996 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007 8698 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007 38861 1146

Finding the inconsistencies

With the tool running in one terminal, the phase to check the inconsistencies began. First things first, an object definition check was performed using the mysqldiff utility. The mysqldiff tool is part of MySQL Utilities. To execute the tool:

$ mysqldiff --server1=root:msandbox@localhost:48008 --server2=root:msandbox@localhost:48009 test:test --difftype=sql --changes-for=server2

And below are the differences found between the master and the slave:

1-) A table that doesn’t exist

# WARNING: Objects in server1.test but not in server2.test:
#        TABLE: joinit

2-) A wrong table structure

# Comparing `test`.`sbtest98` to `test`.`sbtest98`   [FAIL]
# Transformation for --changes-for=server2:
#   ALTER TABLE `test`.`sbtest98` DROP INDEX k_98, DROP COLUMN x, ADD INDEX k_98 (k);

After performing the recommendations on the slave (creating the missing table and applying the table modification), the object definitions were equal. The next step was to check data consistency. For this, pt-table-checksum was used to identify which tables are out-of-sync. This command was run on the master server:

$ pt-table-checksum -uroot -pmsandbox --socket=/tmp/mysql_sandbox48008.sock \
 --replicate=percona.checksums --create-replicate-table --empty-replicate-table \
 --no-check-binlog-format --recursion-method=hosts

And an output example:

01 master]$ pt-table-checksum --recursion-method dsn=D=percona,t=dsns --no-check-binlog-format --nocheck-replication-filter --host 127.0.0.1 --user root --port 48008 --password=msandbox
Checking if all tables can be checksummed ...
Starting checksum ... at /usr/bin/pt-table-checksum line 332.
Replica lag is 66 seconds on bm-support01.bm.int.percona.com. Waiting.
Replica lag is 46 seconds on bm-support01.bm.int.percona.com. Waiting.
Replica lag is 33 seconds on bm-support01.bm.int.percona.com. Waiting.
            TS ERRORS DIFFS  ROWS DIFF_ROWS CHUNKS SKIPPED  TIME TABLE
02-26T16:27:59      0     0  5000         0      1       0 0.037 test.sbtest1
02-26T16:27:59      0     0  5000         0      1       0 0.039 test.sbtest10
02-26T16:27:59      0     1  5000         0      1       0 0.033 test.sbtest100
02-26T16:27:59      0     1  5000         0      1       0 0.034 test.sbtest11
02-26T16:27:59      0     1  5000         0      1       0 0.040 test.sbtest12
02-26T16:27:59      0     1  5000         0      1       0 0.034 test.sbtest13

Fixing the data inconsistencies

Analyzing the DIFFS column, it is possible to identify which tables were compromised. With this information, the pt-table-sync tool was used to fix these inconsistencies. The tool synchronizes MySQL table data efficiently, performing changes on the master that are no-ops there but fix the slave once replicated. The tool needs to be run on the slave server. Below is an example of the tool running:

$ pt-table-sync --execute --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock

It is possible to perform a dry-run of the tool before executing the changes to check what changes the tool will apply:

$ pt-table-sync --print --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock

REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('1', '1654', '97484653464-60074971666-42998564849-40530823048-27591234964-93988623123-02188386693-94155746040-59705759910-14095637891', '15000678573-85832916990-95201670192-53956490549-57402857633') /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;
REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('2', '3007', '31679133794-00154186785-50053859647-19493043469-34585653717-64321870163-33743380797-12939513287-31354198555-82828841987', '30122503210-11153873086-87146161761-60299188705-59630949292') /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;

After executing the pt-table-sync, we recommend that you run the pt-table-checksum again and check if the DIFFS column shows the value of 0.

Conclusion

This blog post was intended to cover all possible issues that could happen on a slave when it goes out-of-sync, such as DDL operations, binary log purges and DML operations. This process involves many steps, and it could take a long time to finish, especially in large databases. Note that this process might take longer than the backup/restore process. However, in situations like the one mentioned above, it might be the only solution to recover a slave.


Image based on Photo by Randy Fath on Unsplash

 

MariaDB OpenWorks was an ocean of Galera Cluster activity

MariaDB OpenWorks happened February 25-27 2019 in New York City, and it was filled with a lot of activity.

The keynote by Michael Howard is an interesting watch, one with many quoted articles, especially around "strip-mining open source technologies and companies," and "abusing the license and privilege, and not giving back to the community." Many articles were written about this topic, and it is clear that open source has arrived, considering all the success around the IBM acquisition of Red Hat, the Elastic IPO, the Microsoft acquisition of GitHub, the Salesforce acquisition of MuleSoft and the MongoDB IPO.

Seppo Jaakola talked about What's new in Galera 4. Expect a lot more blogs about this, and don't forget to read Galera Cluster 4 available for use in the latest MariaDB 10.4.3 Release Candidate!

From a Galera Cluster standpoint, it would be recommended not just to watch the video: Auto Europe’s ongoing journey with MariaDB and open source but to also read an excellent article by Jon Reed on Diginomica: OpenWorks 2019 – why Auto Europe moved their CMS database to open source, and what they’ve learned.

Auto Europe has a three node Galera Cluster running their MariaDB installation. One of those nodes is in a separate data center from the other two, so even if an entire data center goes down, Auto Europe is still up.

It only took two days of consulting to get them up to speed!

Nokia Networks' Rick Lane is also a happy user of Galera Cluster, as you can see in the video Deploying MariaDB databases with containers at Nokia Networks. At about 14 minutes in, you will note that their MariaDB container supports deployment of all configurations including Galera Cluster. At 35:40, he goes into Galera Cluster healing and the challenges they faced getting this going. There is also Galera Cluster scaling shortly thereafter (scale out, and "scaling in", i.e. shrinking the cluster size, but no fewer than 3 nodes).

CCV is a Dutch payment processor and loyalty processor, and their video CCV: migrating our payment processing system to MariaDB is a great watch too as they migrated from SQL Server 2008, have 2 data centres, use 5 Galera Cluster nodes per data centre, and the best quote: “MariaDB together with Galera [Cluster] is a really good fit for disaster recovery and near zero downtime. You can roll a data change right through the Galera Cluster starting on the first node and to the fifth node, and that fits us well.”

There were many people coming to the Galera Cluster booth at the Expo Hall, and overall, we are looking forward to MariaDB OpenWorks 2020!

Add Animations to React Apps with React-Lottie

So your designer just came up with an amazing new animation on Adobe After Effects for your web application. Fantastic! Just one problem: how do you convert this to a format usable within your web app? No worries, the awesome folks at Airbnb Design have your back. Introducing Lottie, a fantastic new technology that allows you to use these animations in real time, in a lightweight and flexible format, just as easily as static images.

How does this work you ask? Well, here's the explanation from Airbnb Design:

Lottie uses animation data exported as JSON files from an open-source After Effects extension called Bodymovin. The extension is bundled with a JavaScript player that can render the animations on the web. Lottie is amazing for 3 main reasons:

  • It makes use of Adobe After Effects' flexible features to come up with high quality animations
  • It allows you to manipulate your animation in whatever way you would like
  • The animations have a small file size, as they are in vector format

In this article, we'll be looking at how we can leverage the power of Lottie to add more life to our React applications. To do this, we'll be using an npm package called react-lottie to generate viewable animations from JSON files exported by Bodymovin. Since we'll be focusing on the app side of things, we won't look into how these animations are created in After Effects, but rather use animations created and open-sourced by designers on LottieFiles. If you have animations in After Effects that you would like to use, you can export them to JSON using the Bodymovin plugin for After Effects.

What we'll be building

For this tutorial, we'll build this application which consists of two types of Lotties, one regular one and another with controlled animations depending on certain values in state.

https://codesandbox.io/embed/github/austinroy/lottie-demo/tree/master/

Let's get started.

Creating our application

We'll be using create-react-app to create our app. If you don't have it installed yet run the following command to do so:

npm install -g create-react-app

Now let's create our application:

create-react-app lottie-demo

This will create some boilerplate code for our app and configure our React development environment. Open up the lottie-demo directory and let's get coding. Yarn is create-react-app's default package manager and it's what we'll use to handle our dependencies (or rather, dependency, since there's only one for this project), so it's worth looking into as well.

Now let's install our one and only dependency, react-lottie. To do this, run this command in the terminal:

yarn add react-lottie

Great, we are all set to go, now let's add our animations.

Getting our Sample Lotties

We'll be getting our sample animations from LottieFiles, so head over there and create a free account.

LottieFiles gives you access to a curated library of awesome animations from designers all over the globe, it also provides a platform to test, upload and share your own animations and is a great resource in general.

Simply browse through the animations, tweak the settings if necessary and, when you are happy with what you have, click Download JSON to get the animation. Now, inside the src directory of our application, create two more directories, components and lotties. These will hold our React components and Lottie animation data respectively. Place the downloaded JSON files inside the lotties directory.

Awesome, now we are ready to create components that display these animations.

Uncontrolled Lotties

Animations can be allowed to run freely or be manipulated by data in state. First, let's look at the first case and create an animation that simply imports and renders a lottie.

Create a file called UncontrolledLottie.jsx inside the components directory and place the following code inside it.

// UncontrolledLottie.jsx
import React, { Component } from 'react'
import Lottie from 'react-lottie'
import animationData from '../lotties/4203-take-a-selfie.json'

class UncontrolledLottie extends Component {
  render(){
    const defaultOptions = {
      loop: true,
      autoplay: true,
      animationData: animationData,
      rendererSettings: {
        preserveAspectRatio: 'xMidYMid slice'
      }
    };

    return(
      <div>
        <h1>Lottie</h1>
        <p>Base animation free from external manipulation</p>
        <Lottie options={defaultOptions}
                height={400}
                width={400}
        />
      </div>
    )
  }
}

export default UncontrolledLottie

In this case, 4203-take-a-selfie.json is the JSON file of the lottie I had downloaded. Feel free to replace that with whichever one you downloaded. Now let's go through the information provided in the configuration. You will notice we pass an options prop to the Lottie component; this contains the configuration data for the animation to be rendered. This consists of:

  • animationData - an Object with the exported animation data, in our case, the json file
  • autoplay - a boolean determining if it will start playing as soon as it is ready
  • loop - a boolean or number, this determines if the animation will repeat or how many times it should repeat
  • rendererSettings - configuration data for the renderer

These are just some of the options you can provide.
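For instance, react-lottie also accepts an eventListeners prop for reacting to animation events. Here is a small sketch, assuming a non-looping variant of the same defaultOptions; the 'complete' event and the logging handler are just illustrative:

// Sketch: log a message when a non-looping animation finishes playing.
<Lottie options={{ ...defaultOptions, loop: false }}
        height={400}
        width={400}
        eventListeners={[
          {
            eventName: 'complete', // Lottie event to listen for
            callback: () => console.log('animation finished'), // illustrative handler
          },
        ]}
/>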

We also provide the dimensions (height and width) of our animation as props to Lottie.

Great, this animation is now ready for use by importing it into our App.js. But first, let's add our controlled Lottie.

Controlled Lotties

Lotties can be manipulated in React to change some of their properties using data in state. In our case, we'll look at how we can play, stop and pause the animations in our lottie.

Let's create a file in components and name it ControlledLottie.jsx. Place the following code in that file:

// ControlledLottie.jsx
import React, { Component } from 'react'
import Lottie from 'react-lottie'
import animationData from '../lotties/77-im-thirsty.json'

class ControlledLottie extends Component {
  state = {isStopped: false, isPaused: false}

  render(){
    const buttonStyle = {
      display: 'inline-block',
      margin: '10px auto',
      marginRight: '10px',
      border: 'none',
      color: 'white',
      backgroundColor: '#647DFF',
      borderRadius: '2px',
      fontSize: '15px'
    };

    const defaultOptions = {
      loop: true,
      autoplay: true,
      animationData: animationData,
      rendererSettings: {
        preserveAspectRatio: 'xMidYMid slice'
      }
    };

    return(
      <div className="controlled">
        <h1>Controlled Lottie</h1>
        <p>Uses state manipulation to start, stop and pause animations</p>
        <Lottie options={defaultOptions}
                height={400}
                width={400}
                isStopped={this.state.isStopped}
                isPaused={this.state.isPaused}
        />
        <button style={buttonStyle} onClick={() => this.setState({isStopped: true})}>Stop</button>
        <button style={buttonStyle} onClick={() => this.setState({isStopped: false, isPaused: false})}>Play</button>
        <button style={buttonStyle} onClick={() => this.setState({isPaused: !this.state.isPaused})}>Pause</button>
      </div>
    )
  }
}

export default ControlledLottie

Let's analyse this code. At first glance it is pretty much identical to the code in UncontrolledLottie.jsx, but there are a few key differences. We've added 3 buttons at the bottom, along with their styling. These buttons are used to toggle the values of the data in state.

The Lottie component also has two more props:

  • isStopped - a boolean indicating whether the animation is active or not
  • isPaused - a boolean that indicates if the animation is paused or not

Here's what our controlled lottie component will look like:

Both our animations are ready to use now, so let's import them into App.js and display them in our app. Edit the code in App.js, importing our components and adding them inside the render function.

// App.js
import React, { Component } from 'react';
import './App.css';
import UncontrolledLottie from './components/UncontrolledLottie';
import ControlledLottie from './components/ControlledLottie';

class App extends Component {
  render() {
    return (
      <div className="App">
        <h1>REACT LOTTIES</h1>
        <div className="lotties">
          <UncontrolledLottie />
          <ControlledLottie />
        </div>
      </div>
    );
  }
}

export default App;

Let's style our app to make it mobile responsive. We can do this using CSS grid. Add the following code to your App.css file.

.lotties {
  display: grid;
  grid-template-columns: auto auto;
}

@media only screen and (min-device-width: 320px) and (max-device-width: 480px) {
  .lotties {
    display: grid;
    grid-template-columns: auto;
  }
}

This places our lotties in two columns that will be reduced to a single column on devices with a smaller width.

Now fire up the application and watch the magic happen.

yarn start

Your browser will open up and you should be able to see the two animations active. Clicking Pause will have the controlled animation freeze in its current frame; clicking it again should resume it, as will clicking Play. Clicking Stop returns the animation to its default position and holds it there. Pretty nice, right?

https://codesandbox.io/embed/github/austinroy/lottie-demo/tree/master/

Conclusion

Lottie can be used to really add life to an application, be it on web or mobile. Lotties can be used to make applications more interactive and provide visually appealing feedback, for example animations indicating the state of certain processes. Lotties are also amazingly easy to use and extremely lightweight, so they shouldn't have a massive impact on the performance of your application.

Angular 7|6 with PHP: Consuming a RESTful CRUD API with HttpClient and Forms

In the previous tutorial, you implemented the PHP and MySQL backend that exposes a REST API for working with an insurance database. Let's now create the Angular 7 frontend. In this tutorial, you'll learn how to use HttpClient to make HTTP calls to a REST API and use template-based forms to submit data. Now that you've created the RESTful API with a PHP script, you can proceed to create your Angular 7 project.

Installing Angular CLI 7

The recommended way of creating Angular projects is through Angular CLI, the official tool created by the Angular team. The latest and best version yet is Angular CLI 7, so head back to another terminal window and run the following command to install the CLI:

$ npm install -g @angular/cli

Note: This will install Angular CLI globally, so make sure you have configured npm to install packages globally without adding sudo on Debian systems and macOS, or use an administrator command prompt on Windows. You can also just fix your npm permissions if you get any issues.

Generating a New Project

That's it! You can now use the CLI to create an Angular 7 project using the following command:

$ cd angular-php-app
$ ng new frontend

The CLI will ask you Would you like to add Angular routing? Type y, because we'll need routing set up in our application. And Which stylesheet format would you like to use? Select CSS. Wait for the CLI to generate and install the required dependencies, and then you can start your development server using:

$ cd frontend
$ ng serve

You can access the frontend application by pointing your browser to the http://localhost:4200 address.

Setting up HttpClient & Forms

Angular provides developers with a powerful HTTP client for sending HTTP requests to servers. It's based on the XMLHttpRequest interface supported on most browsers and has a plethora of features, like the use of RxJS Observables instead of callbacks or promises, typed requests and responses, and interceptors. You can set up HttpClient in your project by simply importing the HttpClientModule in your main application module. We'll also be using a template-based form in our application, so we need to import FormsModule. Open the src/app/app.module.ts file, import HttpClientModule, then add it to the imports array of @NgModule:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http';
import { FormsModule } from '@angular/forms';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    HttpClientModule,
    FormsModule,
    AppRoutingModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

That's it! You can now use HttpClient in your components or services via dependency injection.

Creating an Angular Service

Let's now create an Angular service that will encapsulate all the code needed for interfacing with the RESTful PHP backend. Open a new terminal window, navigate to your frontend project and run the following command:

$ ng generate service api

This will create the src/app/api.service.spec.ts and src/app/api.service.ts files with the minimal required code.
Open the src/app/api.service.ts file and start by importing and injecting HttpClient:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class ApiService {

  constructor(private httpClient: HttpClient) {}
}

We inject HttpClient as a private httpClient instance via the service constructor. This is called dependency injection. If you are not familiar with this pattern, here is the definition from Wikipedia:

In software engineering, dependency injection is a technique whereby one object (or static method) supplies the dependencies of another object. A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it. The service is made part of the client's state. Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.

Also, this is what the Angular docs say about dependency injection in Angular:

Dependency injection (DI), is an important application design pattern. Angular has its own DI framework, which is typically used in the design of Angular applications to increase their efficiency and modularity. Dependencies are services or objects that a class needs to perform its function. DI is a coding pattern in which a class asks for dependencies from external sources rather than creating them itself.

You can now use the injected httpClient instance to send HTTP requests to your PHP REST API.

Creating the Policy Model

Create a policy.ts file in the src/app folder of your project and add the following TypeScript class:

export class Policy {
  id: number;
  number: number;
  amount: number;
}

Defining the CRUD Methods

Next, open the src/app/api.service.ts file and import the Policy model and the RxJS Observable interface:

import { Policy } from './policy';
import { Observable } from 'rxjs';

Next, define the PHP_API_SERVER variable in the service:

export class ApiService {
  PHP_API_SERVER = "http://127.0.0.1:8080";

The PHP_API_SERVER variable holds the address of the PHP server. Next, add the readPolicies() method that will be used to retrieve the insurance policies from the REST API endpoint via a GET request:

readPolicies(): Observable<Policy[]>{
  return this.httpClient.get<Policy[]>(`${this.PHP_API_SERVER}/api/read.php`);
}

Next, add the createPolicy() method that will be used to create a policy in the database:

createPolicy(policy: Policy): Observable<Policy>{
  return this.httpClient.post<Policy>(`${this.PHP_API_SERVER}/api/create.php`, policy);
}

Next, add the updatePolicy() method to update policies:

updatePolicy(policy: Policy){
  return this.httpClient.put<Policy>(`${this.PHP_API_SERVER}/api/update.php`, policy);
}

Finally, add the deletePolicy() method to delete policies from the SQL database:

deletePolicy(id: number){
  return this.httpClient.delete<Policy>(`${this.PHP_API_SERVER}/api/delete.php/?id=${id}`);
}

That's all for the service.

Creating the Angular Component

After creating the service that contains the CRUD operations, let's now create an Angular component that will call the service methods and will contain the table to display policies and a form to submit a policy to the PHP backend. Head back to your terminal and run the following command:

$ ng generate component dashboard

Let's add this component to the Router.
Open the src/app/app-routing.module.ts file and add a /dashboard route:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { DashboardComponent } from './dashboard/dashboard.component';

const routes: Routes = [
  { path: 'dashboard', component: DashboardComponent }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

You can now access your dashboard component from the 127.0.0.1:4200/dashboard URL. Let's remove the boilerplate content added by Angular CLI. Open the src/app/app.component.html file and update it accordingly:

<router-outlet></router-outlet>

We only leave the router outlet, where the Angular router will insert the matched component(s). Next, open the src/app/dashboard/dashboard.component.ts file, import ApiService and inject it via the component constructor:

import { Component, OnInit } from '@angular/core';
import { ApiService } from '../api.service';

@Component({
  selector: 'app-dashboard',
  templateUrl: './dashboard.component.html',
  styleUrls: ['./dashboard.component.css']
})
export class DashboardComponent implements OnInit {

  constructor(private apiService: ApiService) { }

  ngOnInit() {
  }
}

We inject ApiService as a private apiService instance. Next, let's define the policies array that will hold the insurance policies once we get them from the server after we send a GET request, and also the selectedPolicy variable that will hold the selected policy from the table:

export class DashboardComponent implements OnInit {
  policies: Policy[];
  selectedPolicy: Policy = { id: null, number: null, amount: null };

In the ngOnInit() method of the component, let's call the readPolicies() method of ApiService to get the policies:

ngOnInit() {
  this.apiService.readPolicies().subscribe((policies: Policy[]) => {
    this.policies = policies;
    console.log(this.policies);
  })
}

We call readPolicies(), which returns an Observable<Policy[]> object, and we subscribe to the RxJS Observable. We then assign the returned policies to the policies array of our component, and we also log the result in the console.

Note: The actual HTTP request is only sent to the server when you subscribe to the returned Observable.

You should see the returned policies in your browser console. We'll see a bit later how to display these policies in a table in the component template. Let's add the other methods to create, update and delete policies in our component:

createOrUpdatePolicy(form){
  if(this.selectedPolicy && this.selectedPolicy.id){
    form.value.id = this.selectedPolicy.id;
    this.apiService.updatePolicy(form.value).subscribe((policy: Policy) => {
      console.log("Policy updated", policy);
    });
  }
  else{
    this.apiService.createPolicy(form.value).subscribe((policy: Policy) => {
      console.log("Policy created, ", policy);
    });
  }
}

selectPolicy(policy: Policy){
  this.selectedPolicy = policy;
}

deletePolicy(id){
  this.apiService.deletePolicy(id).subscribe((policy: Policy) => {
    console.log("Policy deleted, ", policy);
  });
}

Adding the Table and Form

Let's now add a table and form to display and create the policies in our dashboard component.
Open the src/app/dashboard/dashboard.component.html file and add the following HTML code:

<h1>Insurance Policy Management</h1>
<div>
  <table border='1' width='100%' style='border-collapse: collapse;'>
    <tr>
      <th>ID</th>
      <th>Policy Number</th>
      <th>Policy Amount</th>
      <th>Operations</th>
    </tr>
    <tr *ngFor="let policy of policies">
      <td>{{ policy.id }}</td>
      <td>{{ policy.number }}</td>
      <td>{{ policy.amount }}</td>
      <td>
        <button (click)="deletePolicy(policy.id)">Delete</button>
        <button (click)="selectPolicy(policy)">Update</button>
      </td>
    </tr>
  </table>
</div>

Next, below the table, let's add a form that will be used to create and update a policy:

<br>
<form #f = "ngForm">
  <label>Number</label>
  <input type="text" name="number" [(ngModel)] = "selectedPolicy.number">
  <br>
  <label>Amount</label>
  <input type="text" name="amount" [(ngModel)] = "selectedPolicy.amount">
  <br>
  <input type="button" (click)="createOrUpdatePolicy(f)" value="Create or Update Policy">
</form>

Next, open the src/app/dashboard/dashboard.component.css file and add the following CSS styles:

input {
  width: 100%;
  padding: 2px 5px;
  margin: 2px 0;
  border: 1px solid red;
  border-radius: 4px;
  box-sizing: border-box;
}

button, input[type=button] {
  background-color: #4CAF50;
  border: none;
  color: white;
  padding: 4px 7px;
  text-decoration: none;
  margin: 2px 1px;
  cursor: pointer;
}

th, td {
  padding: 1px;
  text-align: left;
  border-bottom: 1px solid #ddd;
}

tr:hover {background-color: #f5f5f5;}

Conclusion

In this tutorial, we learned how to create a RESTful CRUD application with PHP, MySQL and Angular 7.

Angular 7|6 with PHP and MySQL RESTful CRUD Example & Tutorial

In this tutorial, you'll create an example REST API CRUD Angular 7 application with a PHP and MySQL back-end. You will be creating a simple RESTful API that supports GET, POST, PUT and DELETE requests and allows you to perform CRUD operations against a MySQL database to create, read, update and delete records from a database.

For the application design, it's a simple interface for working with vehicle insurance policies. For the sake of simplicity, you are only going to add the following attributes to the policies database table: number, which stores the insurance policy number, and amount, which stores the insurance amount. This is of course far from being a complete database design for a fully working insurance system, because you would at least need to add other tables like employees, clients, coverage, vehicles and drivers, and also the relationships between all these entities.

Prerequisites

In this tutorial we assume you have the following prerequisites:

  • The MySQL database management system installed on your development machine,
  • PHP installed on your system (both these first requirements are required by the back-end project),
  • Node.js 8.9+ and NPM installed on your system. This is only required by your Angular project.

You also need to have working experience with PHP and the different functions that will be used to create the SQL connection, get the GET and POST data and return JSON data in your code. You need to be familiar with TypeScript, a superset of JavaScript that's used with Angular. A basic knowledge of Angular is preferable but not required, since you'll go from the first step until you create a project that communicates with a PHP server.

Also read PHP Image/File Upload Tutorial and Example [FormData and Angular 7 Front-End].

Creating the PHP Application

Let's start by creating a simple PHP script that connects to a MySQL database and listens to API requests, then responds accordingly by either fetching and returning data from the SQL table or inserting, updating and deleting data from the database.

Create a folder for your project:

$ mkdir angular-php-app
$ cd angular-php-app
$ mkdir backend

You create the angular-php-app folder that will contain the full front-end and back-end projects. Next, you navigate inside it and create the backend folder that will contain a simple PHP script that implements a simple CRUD REST API against a MySQL database.

Next, navigate into your backend project and create an api folder:

$ cd backend
$ mkdir api

Inside the api folder, create the following files:

$ cd api
$ touch database.php
$ touch read.php
$ touch create.php
$ touch update.php
$ touch delete.php

Open the backend/api/database.php file and add the following PHP code step by step:

<?php
header("Access-Control-Allow-Origin: *");
header("Access-Control-Allow-Methods: PUT, GET, POST, DELETE");
header("Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept");

These lines are used to add response headers such as CORS and the allowed methods (PUT, GET, DELETE and POST). Setting CORS to * will allow your PHP server to accept requests from another domain where the Angular 7 server is running, without getting blocked by the browser by reason of the Same Origin Policy. In development, you'll be running the PHP server from the localhost:8080 port and Angular from localhost:4200, which are considered two distinct domains.
Next, add:

define('DB_HOST', 'localhost');
define('DB_USER', 'root');
define('DB_PASS', 'YOUR_PASSWORD');
define('DB_NAME', 'mydb');

These variables hold the credentials that will be used to connect to the MySQL database and the name of the database.

Note: Make sure you change them to your actual MySQL credentials. Also make sure you have created a database with a policies table that has two number and amount columns.

Next, add:

function connect()
{
  $connect = mysqli_connect(DB_HOST, DB_USER, DB_PASS, DB_NAME);

  if (mysqli_connect_errno($connect)) {
    die("Failed to connect:" . mysqli_connect_error());
  }

  mysqli_set_charset($connect, "utf8");

  return $connect;
}

$con = connect();

This will allow you to create a connection to the MySQL database using the mysqli extension. That's all for the database.php file.

Implementing the Read Operation

Now, let's implement the read operation. Open the backend/api/read.php file and add the following code:

<?php
/**
 * Returns the list of policies.
 */
require 'database.php';

$policies = [];
$sql = "SELECT id, number, amount FROM policies";

if($result = mysqli_query($con, $sql))
{
  $i = 0;
  while($row = mysqli_fetch_assoc($result))
  {
    $policies[$i]['id'] = $row['id'];
    $policies[$i]['number'] = $row['number'];
    $policies[$i]['amount'] = $row['amount'];
    $i++;
  }

  echo json_encode($policies);
}
else
{
  http_response_code(404);
}

This will fetch the list of policies from the database and return them as a JSON response. If there is an error it will return a 404 error.

Implementing the Create Operation

Let's now implement the create operation. Open the backend/api/create.php file and add the following code:

<?php
require 'database.php';

// Get the posted data.
$postdata = file_get_contents("php://input");

if(isset($postdata) && !empty($postdata))
{
  // Extract the data.
  $request = json_decode($postdata);

  // Validate.
  if(trim($request->number) === '' || (float)$request->amount < 0)
  {
    return http_response_code(400);
  }

  // Sanitize.
  $number = mysqli_real_escape_string($con, trim($request->number));
  $amount = mysqli_real_escape_string($con, (int)$request->amount);

  // Create.
  $sql = "INSERT INTO `policies`(`id`,`number`,`amount`) VALUES (null,'{$number}','{$amount}')";

  if(mysqli_query($con, $sql))
  {
    http_response_code(201);
    $policy = [
      'number' => $number,
      'amount' => $amount,
      'id' => mysqli_insert_id($con)
    ];
    echo json_encode($policy);
  }
  else
  {
    http_response_code(422);
  }
}

Implementing the Update Operation

Open the backend/api/update.php file and add the following code:

<?php
require 'database.php';

// Get the posted data.
$postdata = file_get_contents("php://input");

if(isset($postdata) && !empty($postdata))
{
  // Extract the data.
  $request = json_decode($postdata);

  // Validate.
  if ((int)$request->id < 1 || trim($request->number) == '' || (float)$request->amount < 0) {
    return http_response_code(400);
  }

  // Sanitize.
  $id = mysqli_real_escape_string($con, (int)$request->id);
  $number = mysqli_real_escape_string($con, trim($request->number));
  $amount = mysqli_real_escape_string($con, (float)$request->amount);

  // Update.
  $sql = "UPDATE `policies` SET `number`='$number',`amount`='$amount' WHERE `id` = '{$id}' LIMIT 1";

  if(mysqli_query($con, $sql))
  {
    http_response_code(204);
  }
  else
  {
    return http_response_code(422);
  }
}

Implementing the Delete Operation

Open the backend/api/delete.php file and add the following code:

<?php
require 'database.php';

// Extract, validate and sanitize the id.
$id = ($_GET['id'] !== null && (int)$_GET['id'] > 0) ?
  mysqli_real_escape_string($con, (int)$_GET['id']) : false;

if(!$id)
{
  return http_response_code(400);
}

// Delete.
$sql = "DELETE FROM `policies` WHERE `id` ='{$id}' LIMIT 1";

if(mysqli_query($con, $sql))
{
  http_response_code(204);
}
else
{
  return http_response_code(422);
}

In all operations, we first require the database.php file for connecting to the MySQL database and then we implement the appropriate logic for the CRUD operation.

Serving the PHP REST API Project

You can next serve your PHP application using the built-in development server with the following command:

$ php -S 127.0.0.1:8080 -t ./angular-php-app/backend

This will run a development server on the 127.0.0.1:8080 address.

Creating the MySQL Database

In your terminal, run the following command to start the mysql client:

$ mysql -u root -p

The client will prompt for the password that you configured when installing MySQL on your system. Next, run this SQL query to create a mydb database:

mysql> create database mydb;

Creating the policies SQL Table

Next, create the policies SQL table with the two number and amount columns:

mysql> create table policies(
  id int not null auto_increment,
  number varchar(20),
  amount float,
  primary key(id));

Now, you are ready to send GET, POST, PUT and DELETE requests to your PHP server running on the 127.0.0.1:8080 address. For sending test requests, you can use REST clients such as Postman or cURL before creating the Angular UI. Leave your server running and open a new terminal.

Creating the Angular 7 Project

Now that you've created the RESTful API with a PHP script, you can proceed to create your Angular 7 project. Read the second part: Angular 7|6 with PHP: Consuming a RESTful CRUD API with HttpClient and Forms.

Conclusion

In this tutorial, you have created a PHP RESTful API that can be used to execute CRUD operations against a MySQL database to create, read, update and delete insurance policies. You have enabled CORS so you can use the two domains localhost:8080 and localhost:4200 for serving PHP and Angular 7 respectively, and be able to send requests from Angular to PHP without getting blocked by the Same Origin Policy rule in web browsers.

Upcoming Webinar Thurs 3/14: Web Application Security – Why You Should Review Yours

Please join Percona’s Information Security Architect, David Bubsy, as he presents his talk Web Application Security – Why You Should Review Yours on March 14th, 2019 at 6:00 AM PDT (UTC-7) / 9:00 AM EDT (UTC-4).

Register Now

In this talk, we take a look at the whole stack and I don’t just mean LAMP.

We’ll cover what an attack surface is and some areas you may look to in order to ensure that you can reduce it.

For instance, what’s an attack surface?

Acronym Hell, what do they mean?

Vulnerability Naming, is this media naming stupidity or driving the message home?

Detection, Prevention and avoiding the boy who cried wolf are some further examples.

Additionally, we’ll cover emerging technologies to keep an eye on or even implement yourself to help improve your security posture.

There will also be a live compromise demo (or a backup video if something fails) that covers compromising a PCI-compliant network structure to reach the database system. The compromise ultimately exploits multiple failures to gain bash shell access over the MySQL protocol.

Register for this webinar on web application security to learn more.

PMM’s Custom Queries in Action: Adding a Graph for InnoDB mutex waits

One of the great things about Percona Monitoring and Management (PMM) is its flexibility. An example of that is how one can go beyond the exporters to collect data. One approach to achieve that is using textfile collectors, as explained in  Extended Metrics for Percona Monitoring and Management without modifying the Code. Another method, which is the subject matter of this post, is to use custom queries.

While working on a customer’s contention issue I wanted to check the behaviour of InnoDB Mutexes over time. Naturally, I went straight to PMM and didn’t find a graph suitable for my needs. No graph, no problem! Luckily anyone can enhance PMM. So here’s how I made the graph I needed.

The final result will look like this:

Custom Queries

What is it?

Starting with version 1.15.0, PMM gives users the ability to take a SQL SELECT statement and turn the result set into a metric series in PMM. That is what custom queries are.

How do I enable that feature?

This feature is ON by default. You only need to edit the configuration file, using YAML syntax.

Where is the configuration file located?

The config file is located at /usr/local/percona/pmm-client/queries-mysqld.yml by default. You can change it when adding MySQL metrics via pmm-admin:

pmm-admin add mysql:metrics ... -- --queries-file-name=/usr/local/percona/pmm-client/query.yml

How often is data being collected?

The queries are executed at the LOW RESOLUTION level, which by default is every 60 seconds.

InnoDB Mutex monitoring

The method used to gather mutex status is querying the PERFORMANCE_SCHEMA, as explained in the MySQL manual: https://dev.mysql.com/doc/refman/5.7/en/monitor-innodb-mutex-waits-performance-schema.html. I intentionally removed the SUM_TIMER_WAIT > 0 condition, so the query used looks like this:

SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT FROM performance_schema.events_waits_summary_global_by_event_name WHERE EVENT_NAME LIKE 'wait/synch/mutex/innodb/%'

For this query to return data, some requirements need to be met:

  • The most important one: the Performance Schema needs to be enabled
  • Consumers for 'events_waits%' enabled
  • Instruments for 'wait/synch/mutex/innodb/%' enabled

If the Performance Schema is enabled, the other two requirements can be met by running these two queries:

update performance_schema.setup_instruments set enabled='YES' where name like 'wait/synch/mutex/innodb%';
update performance_schema.setup_consumers set enabled='YES' where name like 'events_waits%';
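You can then verify that everything is switched on with checks like these (a quick sketch against the standard Performance Schema setup tables):

SELECT NAME, ENABLED, TIMED
FROM performance_schema.setup_instruments
WHERE NAME LIKE 'wait/synch/mutex/innodb/%' AND ENABLED = 'NO';  -- should return no rows

SELECT NAME, ENABLED
FROM performance_schema.setup_consumers
WHERE NAME LIKE 'events_waits%';  -- ENABLED should read YES for each consumer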

YAML Configuration File

This is where the magic happens. The YAML syntax is covered in depth in the documentation: https://www.percona.com/doc/percona-monitoring-and-management/conf-mysql.html#pmm-conf-mysql-executing-custom-queries

The one used for this issue is:

---
mysql_global_status_innodb_mutex:
    query: "SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT FROM performance_schema.events_waits_summary_global_by_event_name WHERE EVENT_NAME LIKE 'wait/synch/mutex/innodb/%'"
    metrics:
      - EVENT_NAME:
          usage: "LABEL"
          description: "Name of the mutex"
      - COUNT_STAR:
          usage: "COUNTER"
          description: "Number of calls"
      - SUM_TIMER_WAIT:
          usage: "GAUGE"
          description: "Duration"

The key info is:

  • The metric name is mysql_global_status_innodb_mutex
  • Since EVENT_NAME is used as a label, it will be possible to have values per event

Remember that this should go in the queries-mysqld.yml file (full path /usr/local/percona/pmm-client/queries-mysqld.yml) inside the db node.

Once that is done, you will start to have those metrics available in Prometheus. Now we have a graph to build!

Creating the graph in Grafana

Before jumping into Grafana to add the graph, we need a proper Prometheus query (a.k.a. PromQL). I came up with these two (one for COUNT_STAR, one for SUM_TIMER_WAIT):

topk(5, label_replace(rate(mysql_global_status_innodb_mutex_COUNT_STAR{instance="$host"}[$interval]), "mutex", "$2", "EVENT_NAME", "(.*)/(.*)" ) or label_replace(irate(mysql_global_status_innodb_mutex_COUNT_STAR{instance="$host"}[5m]), "mutex", "$2", "EVENT_NAME", "(.*)/(.*)" ))

and

topk(5, label_replace(rate(mysql_global_status_innodb_mutex_SUM_TIMER_WAIT{instance="$host"}[$interval]), "mutex", "$2", "EVENT_NAME", "(.*)/(.*)" ) or label_replace(irate(mysql_global_status_innodb_mutex_SUM_TIMER_WAIT{instance="$host"}[5m]), "mutex", "$2", "EVENT_NAME", "(.*)/(.*)" ))

These queries basically say: return the rate values of each mutex event for a specific host, and apply a regex so that only the name of the event is kept, discarding whatever comes before the last slash character.

Once we are good with our PromQL queries, we can go and add the graph.

Finally, I got the graph that I needed with very little effort.

The dashboard is also published on the Grafana Labs Community dashboards site.

Summary

PMM’s collection of graphs and dashboards is quite complete, but it is also natural that there are specific metrics that might not be there. For those cases, you can count on the flexibility and ease of use of PMM to collect metrics and create custom graphs. So go ahead, embrace PMM, customize it, make it yours!

The JSON for this graph, so it can be imported easily, is:

{ "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "Prometheus", "fill": 0, "gridPos": { "h": 18, "w": 24, "x": 0, "y": 72 }, "id": null, "legend": { "alignAsTable": true, "avg": true, "current": false, "max": true, "min": true, "rightSide": false, "show": true, "sideWidth": 0, "sort": "avg", "sortDesc": true, "total": false, "values": true }, "lines": true, "linewidth": 2, "links": [], "nullPointMode": "null", "percentage": false, "pointradius": 0.5, "points": false, "renderer": "flot", "seriesOverrides": [ { "alias": "/Timer Wait/i", "yaxis": 2 } ], "spaceLength": 10, "stack": false, "steppedLine": false, "targets": [ { "expr": "topk(5, label_replace(rate(mysql_global_status_innodb_mutex_COUNT_STAR{instance=\"$host\"}[$interval]), \"mutex\", \"$2\", \"EVENT_NAME\", \"(.*)/(.*)\" )) or topk(5,label_replace(irate(mysql_global_status_innodb_mutex_COUNT_STAR{instance=\"$host\"}[5m]), \"mutex\", \"$2\", \"EVENT_NAME\", \"(.*)/(.*)\" ))", "format": "time_series", "interval": "$interval", "intervalFactor": 1, "legendFormat": "{{ mutex }} calls", "refId": "A", "hide": false }, { "expr": "topk(5, label_replace(rate(mysql_global_status_innodb_mutex_SUM_TIMER_WAIT{instance=\"$host\"}[$interval]), \"mutex\", \"$2\", \"EVENT_NAME\", \"(.*)/(.*)\" )) or topk(5, label_replace(irate(mysql_global_status_innodb_mutex_SUM_TIMER_WAIT{instance=\"$host\"}[5m]), \"mutex\", \"$2\", \"EVENT_NAME\", \"(.*)/(.*)\" ))", "format": "time_series", "interval": "$interval", "intervalFactor": 1, "legendFormat": "{{ mutex }} timer wait", "refId": "B", "hide": false } ], "thresholds": [], "timeFrom": null, "timeShift": null, "title": "InnoDB Mutex", "tooltip": { "shared": true, "sort": 2, "value_type": "individual" }, "transparent": false, "type": "graph", "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] }, "yaxes": [ { "format": "short", "label": "", "logBase": 1, "max": null, "min": null, "show": true }, { "decimals": null, "format": "ns", "label": "", "logBase": 1, "max": null, "min": "0", "show": true } ], "yaxis": { "align": false, "alignLevel": null } }

How to Access the Instagram API Using PHP/Lumen

We will create a simple PHP REST API wrapper to access user information through the Instagram API. Lumen is a popular PHP micro REST framework that helps you create REST APIs, or wrappers for existing REST APIs, using PHP. The user must authorize the client app to access their information.

This Lumen tutorial demonstrates how to consume the Instagram API using Lumen. Instagram is a popular social sharing website. You can share images and videos using the Instagram website or mobile application. We will simply access user information using the Instagram API.

The prerequisites are:

  • An Instagram account – if you don't have one, please create one.
  • An API token – you need to register your app in the Instagram developer section.
How to Register Your App with Instagram

Instagram provides functionality to create a sandbox account for your application. You can register multiple apps under a single account. You can add your app in the manage section of Instagram. The form looks like this:

You can modify the app information at any time using the manage clients section. After successful registration of your app, you will get an app client id. That id will be used to get a new authentication token for your application, and the token will be used to get information from Instagram via the API.

How to Generate a Client Token in Instagram

The Instagram API provides a REST endpoint to generate a token based on the client id. Please keep in mind that the client id and the client token are two different things. The REST call below is used to get a client token:

https://www.instagram.com/oauth/authorize/?client_id=client-app-id&redirect_uri=app-redirect-url&response_type=token

Where:

  • client-app-id: the registered client app id.
  • app-redirect-url: the registered app redirect URL given at the time of registering the app. If you don't remember it, you can find it in the manage clients section.

The above REST call will return the app client token, which will be used in subsequent calls to get information from Instagram. This token authorizes the client to get information.

The above call does not work from the client side by default and throws an "implicit authentication" exception, for example when generating a token using the browser.

For security reasons, Instagram disables OAuth 2.0 client-side authentication by default. If your app doesn't have a server side, you should go to the Manage Clients > Security section and untick the Disable implicit OAuth option. After saving, everything should work.

Get User Information Using the Instagram API

We will get user information using the /users/self endpoint. We just need to pass the client token with the REST call to get the user information. We will add the route information to the web.php file:

$app->get('instagram_user_info', 'InstagramController@getUserInfo');

We will create an InstagramController.php file in the HTTP controllers folder and add the methods below:

// Requires the Guzzle HTTP client: use GuzzleHttp\Client;

var $instagramAPI = "https://api.instagram.com/v1/";

private function _client($endpoint)
{
    $client = new Client([
        'base_uri' => $endpoint,
        'timeout' => 300,
        'headers' => ['Content-Type' => 'application/json', 'Accept' => 'application/json'],
        'http_errors' => false,
        'verify' => false
    ]);
    return $client;
}

public function getUserInfo()
{
    try {
        $response = $this->getUserInfoService();
        return json_encode($response);
    } catch (Exception $ex) {
        return 'Unable to get instagram user info';
    }
}

/**
 * Calls the Instagram /users/self endpoint with the access token.
 */
public function getUserInfoService()
{
    $token = 'your instagram token';
    $client = $this->_client($this->instagramAPI);
    try {
        // Note: PHP string concatenation uses '.', not '+'.
        $response = $client->get("users/self/?access_token=" . $token)->getBody()->getContents();
        return json_decode($response);
    } catch (Exception $ex) {
        Log::info($ex->getMessage());
        return 'Unable to get instagram user info';
    }
}

I hope you have enjoyed this article. Please share it and subscribe to the newsletter.


MySQL Community Awards 2019: Call for Nominations!

MySQL Community Awards take place every year, during the Percona Live Open Source Database Conference – this year in Austin, TX.

The MySQL Community Awards is a community-based initiative. The idea is to publicly recognize contributors to the MySQL ecosystem. The entire process of discussing, voting and awarding is controlled by an independent group of community members, typically past winners or their representatives, as well as known contributors.

It is a self-appointed, self-declared, self-making-up-the-rules-as-it-goes committee. It is also very aware of the importance of the community; a no-nonsense, non-political, adhering to tradition, self-criticizing committee.

The Call for Nominations is open. We are seeking the community’s assistance in nominating candidates in the following categories:

MySQL Community Awards: Community Contributor of the year 2019

This is a personal award; a winner would be a person who has made a contribution to the MySQL ecosystem. This could be via development, advocating, blogging, speaking, supporting, etc.

MySQL Community Awards: Application of the year 2019

An application, project, product etc. which supports the MySQL ecosystem by either contributing code, complementing its behavior, supporting its use, etc. This could range from a one person open source project to a large scale social service.

MySQL Community Awards: Corporate Contributor of the year 2019

A company which made a contribution to the MySQL ecosystem. This might be a corporation which released major open source code; one that advocates for MySQL; one that helps out community members by… anything.

For a list of previous winners, please see MySQL Hall of Fame.

Process of nomination and voting

Anyone can nominate anyone. When nominating, please make sure to provide a brief explanation on why the candidate is eligible to get the award. Make a good case!

The committee will review all nominations and vote; it typically takes two rounds of votes to pick the winners, and a lot of discussion.

There will be up to three winners in each category.

Methods of nomination:

  • Fill out this form
  • Send an email to mysql.community.awards [ at ] gmail.com
  • Assuming you can provide a reasonable description via Twitter, tweet your nomination with the #MySQLAwards hashtag.

Please submit your nominations no later than Friday, March 29, 2019.

The committee

Members of the committee are:

  • René Cannaò
  • Frédéric Descamps
  • Santiago Lertora
  • Yoshinori Matsunobu
  • Simon J. Mudd
  • Shlomi Noach
  • Lixun Peng
  • Sveta Smirnova
  • Sugu Sougoumarane

Emily Slocombe and Agustín Gallego are acting as co-secretaries; we will be non-voting (except for breaking ties).

The committee communicates throughout the nomination and voting process to exchange views and opinions.

The awards

Awards are traditionally donated by some party whose identity remains secret. We are still looking for a sponsor; email mysql.community.awards [ at ] gmail.com if you would like to sponsor the award goblets.

Support

This is a community effort; we ask for your support in spreading the word and of course in nominating candidates. Thanks!

HA for MySQL and MariaDB - Comparing Master-Master Replication to Galera Cluster

Galera replication is relatively new compared to MySQL replication, which has been natively supported since MySQL v3.23. Although MySQL replication is designed for master-slave unidirectional replication, it can be configured as an active master-master setup with bidirectional replication. While it is easy to set up, and some use cases might benefit from this “hack”, there are a number of caveats. Galera cluster, on the other hand, is a different type of technology to learn and manage. Is it worth it?

In this blog post, we are going to compare master-master replication to Galera cluster.

Replication Concepts

Before we jump into the comparison, let’s explain the basic concepts behind these two replication mechanisms.

Generally, any modification to the MySQL database generates an event in binary format. This event is transported to the other nodes depending on the replication method chosen - MySQL replication (native) or Galera replication (patched with wsrep API).

MySQL Replication

The following diagram illustrates the data flow of a successful transaction from one node to another when using MySQL replication:

The binary event is written into the master's binary log. The slave(s), via the slave_IO_thread, will pull the binary events from the master's binary log and copy them into their relay log. The slave_SQL_thread will then apply the event from the relay log asynchronously. Due to the asynchronous nature of replication, the slave server is not guaranteed to have the data when the master performs the change.

Ideally, in MySQL replication the slave should be configured as a read-only server by setting read_only=ON or super_read_only=ON. This is a precaution to protect the slave from accidental writes, which can lead to data inconsistency or failure during master failover (e.g., errant transactions). However, in a master-master active-active replication setup, read-only has to be disabled on the other master to allow writes to be processed simultaneously. The primary master must be configured to replicate from the secondary master by using the CHANGE MASTER statement to enable circular replication.
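As a sketch, the circular link could be set up with statements along these lines, run on each master and pointed at the other (host names and credentials are placeholders, and MASTER_AUTO_POSITION applies only if you use GTIDs; with binary log coordinates you would specify MASTER_LOG_FILE and MASTER_LOG_POS instead):

-- On master1, replicate from master2 (and vice versa on master2):
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST = 'master2',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_AUTO_POSITION = 1;
START SLAVE;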

Galera Replication

The following diagram illustrates the data replication flow of a successful transaction from one node to another for Galera Cluster:

The event is encapsulated in a writeset and broadcasted from the originator node to the other nodes in the cluster by using Galera replication. The writeset undergoes certification on every Galera node and if it passes, the applier threads will apply the writeset asynchronously. This means that the slave server will eventually become consistent, after agreement of all participating nodes in global total ordering. It is logically synchronous, but the actual writing and committing to the tablespace happens independently, and thus asynchronously on each node with a guarantee for the change to propagate on all nodes.

Avoiding Primary Key Collision

In order to deploy MySQL replication in a master-master setup, one has to adjust the auto increment values to avoid primary key collisions for INSERTs between two or more replicating masters. This allows the primary key values on the masters to interleave with each other, and prevents the same auto increment number from being used twice on either of the nodes. This behaviour must be configured manually, depending on the number of masters in the replication setup. The value of auto_increment_increment equals the number of replicating masters, and the auto_increment_offset must be unique between them. For example, the following lines should exist inside the corresponding my.cnf:

Master1:

log-slave-updates
auto_increment_increment=2
auto_increment_offset=1

Master2:

log-slave-updates
auto_increment_increment=2
auto_increment_offset=2

Likewise, Galera Cluster uses this same trick to avoid primary key collisions, by controlling the auto increment value and offset automatically with the wsrep_auto_increment_control variable. If set to 1 (the default), it will automatically adjust the auto_increment_increment and auto_increment_offset variables according to the size of the cluster, and whenever the cluster size changes. This avoids replication conflicts due to auto_increment. In a master-slave environment, this variable can be set to OFF.

The consequence of this configuration is the auto increment value will not be in sequential order, as shown in the following table of a three-node Galera Cluster:

Node      auto_increment_increment    auto_increment_offset    Auto increment values
Node 1    3                           1                        1, 4, 7, 10, 13, 16...
Node 2    3                           2                        2, 5, 8, 11, 14, 17...
Node 3    3                           3                        3, 6, 9, 12, 15, 18...

If an application performs insert operations on the following nodes in the following order:

  • Node1, Node3, Node2, Node3, Node3, Node1, Node3 ..

Then the primary key value that will be stored in the table will be:

  • 1, 6, 8, 9, 12, 13, 15 ..

Simply put, when using master-master replication (MySQL replication or Galera), your application must be able to tolerate non-sequential auto-increment values in its dataset.
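To see the interleaving for yourself, you can simulate one node's settings on a test server by adjusting the session variables by hand (a throwaway sandbox experiment, not a production procedure; table and column names are made up):

CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(20));
-- Pretend this session is Node 2 of a three-node cluster:
SET SESSION auto_increment_increment = 3, auto_increment_offset = 2;
INSERT INTO t1 (note) VALUES ('a'), ('b'), ('c');
SELECT id, note FROM t1;  -- ids come back as 2, 5, 8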

For ClusterControl users, take note that it supports deployment of MySQL master-master replication with a limit of two masters per replication cluster, only for active-passive setup. Therefore, ClusterControl does not deliberately configure the masters with auto_increment_increment and auto_increment_offset variables.

Data Consistency

Galera Cluster comes with its own flow-control mechanism, where each node in the cluster must keep up when replicating, or otherwise all other nodes will have to slow down to allow the slowest node to catch up. This basically minimizes the probability of slave lag; it might still happen, but not as significantly as in MySQL replication. By default, Galera allows nodes to be at most 16 transactions behind in applying, through the variable gcs.fc_limit. If you want to do critical reads (a SELECT that must return the most up-to-date information), you probably want to use the session variable wsrep_sync_wait.
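For example, a critical read could be wrapped like this (a sketch; the accounts table is hypothetical, and wsrep_sync_wait = 1 enables the causality check for READ statements):

SET SESSION wsrep_sync_wait = 1;
-- This SELECT now waits until the node has applied all writesets it has received:
SELECT balance FROM accounts WHERE id = 42;
SET SESSION wsrep_sync_wait = 0;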

Galera Cluster also comes with a safeguard against data inconsistency, whereby a node will be evicted from the cluster if it fails to apply any writeset for whatever reason. For example, when a Galera node fails to apply a writeset due to an internal error in the underlying storage engine (MySQL/MariaDB), the node will pull itself out of the cluster with the following error:

150305 16:13:14 [ERROR] WSREP: Failed to apply trx 1 4 times
150305 16:13:14 [ERROR] WSREP: Node consistency compromized, aborting..

To fix the data consistency, the offending node has to be re-synced before it is allowed to join the cluster. This can be done manually or by wiping out the data directory to trigger snapshot state transfer (full syncing from a donor).

MySQL master-master replication does not enforce data consistency protection, and a slave is allowed to diverge, e.g., replicate only a subset of data or lag behind, which makes the slave inconsistent with the master. It is designed to replicate data in one flow - from the master down to the slaves. Data consistency checks have to be performed manually or via external tools like Percona Toolkit pt-table-checksum or mysql-replication-check.
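For instance, after a pt-table-checksum run you can look for diverged tables on each slave with a query along the lines of the one suggested in the Percona Toolkit documentation (assuming the default percona.checksums result table):

SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (master_cnt <> this_cnt
    OR master_crc <> this_crc
    OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;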

Conflict Resolution

Generally, master-master (also called multi-master, or bi-directional) replication allows more than one member in the cluster to process writes. With MySQL replication, in case of a replication conflict, the slave's SQL thread simply stops applying the next query until the conflict is resolved, either by manually skipping the replication event, fixing the offending rows or resyncing the slave. Simply put, there is no automatic conflict resolution support for MySQL replication.
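Manually skipping a broken event on the slave typically looks like this (a sketch for setups without GTIDs; with GTIDs you would instead inject an empty transaction for the offending GTID):

STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;  -- skip one event group from the relay log
START SLAVE;
SHOW SLAVE STATUS\G  -- verify that Slave_SQL_Running is back to Yes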

Galera Cluster provides a better alternative by retrying the offending transaction during replication. With the wsrep_retry_autocommit variable, one can instruct Galera to automatically retry a failed transaction due to cluster-wide conflicts before returning an error to the client. If set to 0, no retries will be attempted, while a value of 1 (the default) or more specifies the number of retries attempted. This can be useful to help applications using autocommit to avoid deadlocks.
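A quick way to inspect and adjust it (whether the variable can be changed at runtime depends on your Galera version; otherwise set wsrep_retry_autocommit in my.cnf):

SHOW GLOBAL VARIABLES LIKE 'wsrep_retry_autocommit';
SET GLOBAL wsrep_retry_autocommit = 3;  -- retry cluster-wide conflicts up to 3 times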

Node Consensus and Failover

Galera uses the Group Communication System (GCS) to check node consensus and availability between cluster members. If a node is unhealthy, it will be automatically evicted from the cluster after the gmcast.peer_timeout value, which defaults to 3 seconds. A healthy Galera node in the "Synced" state is deemed a reliable node to serve reads and writes, while others are not. This design greatly simplifies health check procedures from the upper tiers (load balancer or application).

In MySQL replication, a master does not care about its slave(s), while a slave only has consensus with its sole master via the slave_IO_thread process when replicating the binary events from the master's binary log. If a master goes down, this will break the replication, and an attempt to re-establish the link will be made every slave_net_timeout (which defaults to 60 seconds). From the application or load balancer perspective, the health check procedures for a replication slave must at least involve checking the following state (a minimal probe is sketched after the list):

  • Seconds_Behind_Master
  • Slave_IO_Running
  • Slave_SQL_Running
  • read_only variable
  • super_read_only variable (MySQL 5.7.8 and later)
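A minimal probe could combine statements like the ones below (a sketch; thresholds and the exact set of checks are application-specific):

SHOW SLAVE STATUS\G  -- inspect Slave_IO_Running, Slave_SQL_Running and Seconds_Behind_Master
SHOW GLOBAL VARIABLES LIKE 'read_only';
SHOW GLOBAL VARIABLES LIKE 'super_read_only';  -- MySQL 5.7.8 and later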

In terms of failover, generally, master-master replication and Galera nodes are equal. They hold the same data set (although you can replicate a subset of data in MySQL replication, that's uncommon for master-master) and share the same role as masters, capable of handling reads and writes simultaneously. Therefore, there is actually no failover from the database point-of-view, due to this equilibrium. Only the application side would require failover, to skip the non-operational nodes. Keep in mind that because MySQL replication is asynchronous, it is possible that not all of the changes done on one master will have propagated to the other master.

Node Provisioning

The process of bringing a node into sync with the cluster before replication starts is known as provisioning. In MySQL replication, provisioning a new node is a manual process. One has to take a backup of the master and restore it on the new node before setting up the replication link. For an existing replication node, if the master's binary logs have been rotated (based on expire_logs_days, which defaults to 0, meaning no automatic removal), you may have to re-provision the node using this procedure. There are also external tools like Percona Toolkit pt-table-sync and ClusterControl to help you out with this. ClusterControl supports resyncing a slave with just two clicks. You have the option to resync by taking a backup from the active master or from an existing backup.

In Galera, there are two ways of doing this - incremental state transfer (IST) or state snapshot transfer (SST). The IST process is the preferred method, where only the missing transactions are transferred from a donor's cache. The SST process is similar to taking a full backup from the donor; it is usually fairly resource intensive. Galera will automatically determine which syncing process to trigger based on the joiner's state. In most cases, if a node fails to join a cluster, simply wipe out the MySQL datadir of the problematic node and start the MySQL service. The Galera provisioning process is much simpler; it comes in very handy when scaling out your cluster or re-introducing a problematic node back into the cluster.

Loosely Coupled vs Tightly Coupled

MySQL replication works very well even across slower connections, and with connections that are not continuous. It can also be used across different hardware, environment and operating systems. Most storage engines support it, including MyISAM, Aria, MEMORY and ARCHIVE. This loosely coupled setup allows MySQL master-master replication to work well in a mixed environment with less restriction.

Galera nodes are tightly-coupled, where the replication performance is as fast as the slowest node. Galera uses a flow control mechanism to control replication flow among members and eliminate any slave lag. The replication can be all fast or all slow on every node and is adjusted automatically by Galera. Thus, it's recommended to use uniform hardware specs for all Galera nodes, especially with respect to CPU, RAM, disk subsystem, network interface card and network latency between nodes in the cluster.

Conclusions

In summary, Galera Cluster is superior if compared to MySQL master-master replication due to its synchronous replication support with strong consistency, plus more advanced features like automatic membership control, automatic node provisioning and multi-threaded slaves. Ultimately, this depends on how the application interacts with the database server. Some legacy applications built for a standalone database server may not work well on a clustered setup.


To simplify our points above, the following reasons justify when to use MySQL master-master replication:

  • Things that are not supported by Galera:
    • Replication for non-InnoDB/XtraDB tables like MyISAM, Aria, MEMORY or ARCHIVE.
    • XA transactions.
    • Statement-based replication between masters (e.g., when bandwidth is very expensive).
    • Relying on explicit locking like LOCK TABLES statement.
    • The general query log and the slow query log must be directed to a table, instead of a file.
  • Loosely coupled setup where the hardware specs, software version and connection speed are significantly different on every master.
  • When you already have a MySQL replication chain and you want to add another active/backup master for redundancy, to speed up failover and recovery time in case one of the masters is unavailable.
  • If your application can't be modified to work around Galera Cluster limitations and having a MySQL-aware load balancer like ProxySQL or MaxScale is not an option.

Reasons to pick Galera Cluster over MySQL master-master replication:

  • Ability to safely write to multiple masters.
  • Data consistency automatically managed (and guaranteed) across databases.
  • New database nodes easily introduced and synced.
  • Failures or inconsistencies automatically detected.
  • In general, more advanced and robust high availability features.

Laravel 5.8 Tutorial: Build your First CRUD App with Laravel and MySQL (PHP 7.1+)

Throughout this tutorial for beginners you'll learn to use Laravel 5.8 - the latest version of one of the most popular PHP frameworks - to create a CRUD web application with a MySQL database from scratch and step by step, starting with the installation of Composer (the PHP package manager) through to implementing and serving your application.

Note: Laravel 5.8 was recently released and this tutorial has been upgraded to the latest version.

Laravel 5.8 New Features

Let's start our tutorial by going through the most important features introduced in this version:

  • The hasOneThrough Eloquent relationship,
  • Better email validation,
  • Auto-discovery of model policies, provided that the model and policy follow standard Laravel naming conventions,
  • DynamoDB cache and session drivers,
  • Added support for PHPUnit 8.0 for unit testing,
  • Added support for Carbon 2.0, an easy to use PHP API extension for DateTime,
  • Added support for Pheanstalk 4.0, a pure PHP 5.3+ client for the beanstalkd work queue, etc.

The Laravel 5.8 version has also corrected numerous bugs and introduced many improvements to the Artisan CLI. Check out the official docs for the detailed features of Laravel 5.8.

Prerequisites

This tutorial assumes you have PHP and MySQL installed on your system. Follow the instructions for your operating system to install both of them. You also need to be familiar with the Linux/macOS bash where we'll be executing the commands in this tutorial. Familiarity with PHP is required since Laravel is based on PHP. For development I will be using an Ubuntu 16.04 machine, so the commands in this tutorial target this system, but you should be able to follow this tutorial on any operating system you use.

Installing PHP 7.1

Laravel v5.8 requires PHP 7.1 or above, so you need a recent version of PHP installed on your system. The process is straightforward on most systems. On Ubuntu, you can follow these instructions.

First add the ondrej/php PPA, which contains the latest version of PHP:

$ sudo add-apt-repository ppa:ondrej/php
$ sudo apt-get update

Next, install PHP 7.1 using the following command:

$ sudo apt-get install php7.1

If you are using Ubuntu 18.04, PHP 7.2 is included in the default Ubuntu repository for 18.04, so you should be able to install it using the following command:

$ sudo apt-get install php

This tutorial is tested with PHP 7.1, but you can also use more recent versions like PHP 7.2 or PHP 7.3.

Installing the Required PHP 7.1 Modules

Laravel requires a bunch of modules. You can install them using the following command:

$ sudo apt-get install php7.1 php7.1-cli php7.1-common php7.1-json php7.1-opcache php7.1-mysql php7.1-mbstring php7.1-mcrypt php7.1-zip php7.1-fpm php7.1-xml

Installing PHP Composer

Let's start our journey by installing Composer, the PHP package manager.

Navigate to your home directory, then download the installer from the official website using curl:

$ cd ~
$ curl -sS https://getcomposer.org/installer -o composer-setup.php

You can then install composer globally on your system by using the following command:

$ sudo php composer-setup.php --install-dir=/usr/local/bin --filename=composer

As of this writing, Composer 1.8 will be installed on your system.
You can make sure your installation works as expected by running composer in your terminal:

$ composer

You should get output similar to the following (the Composer ASCII-art banner, followed by the usage information):

Composer version 1.8.0 2018-12-03 10:31:16

Usage:
  command [options] [arguments]

Options:
  -h, --help                     Display this help message
  -q, --quiet                    Do not output any message
  -V, --version                  Display this application version
      --ansi                     Force ANSI output
      --no-ansi                  Disable ANSI output
  -n, --no-interaction           Do not ask any interactive question
      --profile                  Display timing and memory usage information
      --no-plugins               Whether to disable plugins.
  -d, --working-dir=WORKING-DIR  If specified, use the given directory as working directory.
  -v|vv|vvv, --verbose           Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug

For more information check out this tutorial.

If you've successfully installed Composer on your system, you are ready to create a Laravel 5.8 project.

Installing and Creating a Laravel 5.8 Project

In this section we'll introduce Laravel and then proceed to install and create a Laravel 5.8 project.

About Laravel

The Laravel docs describe it as:

Laravel is a web application framework with expressive, elegant syntax. We believe development must be an enjoyable and creative experience to be truly fulfilling. Laravel attempts to take the pain out of development by easing common tasks used in the majority of web projects, such as: Simple, fast routing engine. Powerful dependency injection container. Multiple back-ends for session and cache storage. Expressive, intuitive database ORM. Database agnostic schema migrations. Robust background job processing. Real-time event broadcasting. Laravel is accessible, yet powerful, providing tools needed for large, robust applications.

Generating a Laravel 5.8 project is easy and straightforward. In your terminal, run the following command:

$ composer create-project --prefer-dist laravel/laravel laravel-first-crud-app

This will install laravel/laravel v5.8.3.

Note: Make sure you have PHP 7.1 installed on your system. Otherwise, composer will use Laravel 5.5 for your project.

You can verify the installed version in your project using:

$ cd laravel-first-crud-app
$ php artisan -V
Laravel Framework 5.8.19

Installing the Front-End Dependencies

In your generated project, you can see that a package.json file is generated which includes many front-end libraries that can be used by your project: axios, bootstrap, cross-env, jquery, laravel-mix, lodash, popper.js, resolve-url-loader, sass, sass-loader, vue.

Note: You can use your preferred libraries with Laravel, not specifically the ones added to package.json.

The package.json file in your Laravel project includes a few packages such as vue and axios to help you get started building your JavaScript application. It also includes bootstrap to help you get started with Bootstrap for styling your UI, and it includes Laravel Mix to help you compile your SASS files to plain CSS.

You need to use npm to install the front-end dependencies:

$ npm install

After running this command a node_modules folder will be created and the dependencies will be installed into it.

Note: You need to have Node.js and npm installed on your system before you can install the front-end dependencies.
Creating a MySQL Database

Let's now create a MySQL database that we'll use to persist data in our Laravel application. In your terminal, run the following command to run the mysql client:

$ mysql -u root -p

When prompted, enter the password you set for your MySQL server when you installed it.

Next, run the following SQL statement to create a db database:

mysql> create database db;

Open the .env file and update the credentials to access your MySQL database:

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=db
DB_USERNAME=root
DB_PASSWORD=******

You need to enter the database name, the username and the password.

At this point, you can run the migrate command to create your database and a bunch of SQL tables needed by Laravel:

$ php artisan migrate

Note: You can run the migrate command at any other point of your development to add other SQL tables to your database, or to alter your database if you need to add any changes later.

Creating your First Laravel Model

Laravel uses the MVC architectural pattern to organize your application in three decoupled parts:

  • The Model, which encapsulates the data access layer,
  • The View, which encapsulates the representation layer,
  • The Controller, which encapsulates the code to control the application and communicates with the model and view layers.

Wikipedia defines MVC as:

Model–view–controller is an architectural pattern commonly used for developing user interfaces that divides an application into three interconnected parts. This is done to separate internal representations of information from the ways information is presented to and accepted from the user.

Now, let's create our first Laravel Model. In your terminal, run the following command:

$ php artisan make:model Contact --migration

This will create a Contact model and a migration file. In the terminal, we get output similar to:

Model created successfully.
Created Migration: 2019_01_27_193840_create_contacts_table

Open the database/migrations/xxxxxx_create_contacts_table migration file and update it accordingly:

<?php
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateContactsTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('contacts', function (Blueprint $table) {
            $table->increments('id');
            $table->timestamps();
            $table->string('first_name');
            $table->string('last_name');
            $table->string('email');
            $table->string('job_title');
            $table->string('city');
            $table->string('country');
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('contacts');
    }
}

We added the first_name, last_name, email, job_title, city and country fields to the contacts table.

You can now create the contacts table in the database using the following command:

$ php artisan migrate

Now, let's look at our Contact model, which will be used to interact with the contacts database table. Open app/Contact.php and update it:

<?php
namespace App;

use Illuminate\Database\Eloquent\Model;

class Contact extends Model
{
    protected $fillable = [
        'first_name',
        'last_name',
        'email',
        'city',
        'country',
        'job_title'
    ];
}

Creating the Controller and Routes

After creating the model and migrating our database, let's now create the controller and the routes for working with the Contact model.
In your terminal, run the following command:

$ php artisan make:controller ContactController --resource

Laravel resource routing assigns the typical "CRUD" routes to a controller with a single line of code. For example, you may wish to create a controller that handles all HTTP requests for "photos" stored by your application. Using the make:controller Artisan command, we can quickly create such a controller: the command above would generate a controller at app/Http/Controllers/PhotoController.php, containing a method for each of the available resource operations.

Open the app/Http/Controllers/ContactController.php file. This is the initial content:

<?php
namespace App\Http\Controllers;

use Illuminate\Http\Request;

class ContactController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        //
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function create()
    {
        //
    }

    /**
     * Store a newly created resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
        //
    }

    /**
     * Display the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function show($id)
    {
        //
    }

    /**
     * Show the form for editing the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function edit($id)
    {
        //
    }

    /**
     * Update the specified resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function update(Request $request, $id)
    {
        //
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function destroy($id)
    {
        //
    }
}

The ContactController class extends the Controller class available from Laravel and defines a bunch of methods which will be used to do the CRUD operations against the Contact model. You can read the role of each method in the comment above it.

Now we need to provide implementations for these methods. But before that, let's add routing. Open the routes/web.php file and update it accordingly:

<?php
Route::get('/', function () {
    return view('welcome');
});

Route::resource('contacts', 'ContactController');

Using the resource() static method of Route, you can create multiple routes to expose multiple actions on the resource. These routes are mapped to the various ContactController methods, which we will need to implement in the next section:

  • GET /contacts, mapped to the index() method,
  • GET /contacts/create, mapped to the create() method,
  • POST /contacts, mapped to the store() method,
  • GET /contacts/{contact}, mapped to the show() method,
  • GET /contacts/{contact}/edit, mapped to the edit() method,
  • PUT/PATCH /contacts/{contact}, mapped to the update() method,
  • DELETE /contacts/{contact}, mapped to the destroy() method.

These routes are used to serve HTML templates and also as API endpoints for working with the Contact model.

Note: If you want to create a controller that will only expose a RESTful API, you can use the apiResource method to exclude the routes that are used to serve the HTML templates:

Route::apiResource('contacts', 'ContactController');

Implementing the CRUD Operations

Let's now implement the controller methods alongside the views.
C: Implementing the Create Operation and Adding a Form

The ContactController includes the store() method, which maps to the POST /contacts API endpoint that will be used to create a contact in the database, and the create() method, which maps to the GET /contacts/create route that will be used to serve the HTML form for submitting the contact to the POST /contacts endpoint. Let's implement these two methods.

Re-open the app/Http/Controllers/ContactController.php file and start by importing the Contact model:

use App\Contact;

Next, locate the store() method and update it accordingly:

public function store(Request $request)
{
    $request->validate([
        'first_name'=>'required',
        'last_name'=>'required',
        'email'=>'required'
    ]);

    $contact = new Contact([
        'first_name' => $request->get('first_name'),
        'last_name' => $request->get('last_name'),
        'email' => $request->get('email'),
        'job_title' => $request->get('job_title'),
        'city' => $request->get('city'),
        'country' => $request->get('country')
    ]);
    $contact->save();
    return redirect('/contacts')->with('success', 'Contact saved!');
}

Next, locate the create() method and update it:

public function create()
{
    return view('contacts.create');
}

The create() function makes use of the view() method to return the create.blade.php template, which needs to be present in the resources/views folder.

Before creating the create.blade.php template, we need to create a base template that will be extended by the create template and all the other templates that we'll create later in this tutorial.

In the resources/views folder, create a base.blade.php file:

$ cd resources/views
$ touch base.blade.php

Open the resources/views/base.blade.php file and add the following Blade template:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Laravel 5.8 & MySQL CRUD Tutorial</title>
    <link href="{{ asset('css/app.css') }}" rel="stylesheet" type="text/css" />
</head>
<body>
    <div class="container">
        @yield('main')
    </div>
    <script src="{{ asset('js/app.js') }}" type="text/javascript"></script>
</body>
</html>

Now, let's create the create.blade.php template.
First, create a contacts folder in the views folder:

$ mkdir contacts

Next, create the template:

$ cd contacts
$ touch create.blade.php

Open the resources/views/contacts/create.blade.php file and add the following code:

@extends('base')
@section('main')
<div class="row">
  <div class="col-sm-8 offset-sm-2">
    <h1 class="display-3">Add a contact</h1>
    <div>
      @if ($errors->any())
      <div class="alert alert-danger">
        <ul>
          @foreach ($errors->all() as $error)
          <li>{{ $error }}</li>
          @endforeach
        </ul>
      </div><br />
      @endif
      <form method="post" action="{{ route('contacts.store') }}">
        @csrf
        <div class="form-group">
          <label for="first_name">First Name:</label>
          <input type="text" class="form-control" name="first_name"/>
        </div>
        <div class="form-group">
          <label for="last_name">Last Name:</label>
          <input type="text" class="form-control" name="last_name"/>
        </div>
        <div class="form-group">
          <label for="email">Email:</label>
          <input type="text" class="form-control" name="email"/>
        </div>
        <div class="form-group">
          <label for="city">City:</label>
          <input type="text" class="form-control" name="city"/>
        </div>
        <div class="form-group">
          <label for="country">Country:</label>
          <input type="text" class="form-control" name="country"/>
        </div>
        <div class="form-group">
          <label for="job_title">Job Title:</label>
          <input type="text" class="form-control" name="job_title"/>
        </div>
        <button type="submit" class="btn btn-primary-outline">Add contact</button>
      </form>
    </div>
  </div>
</div>
@endsection

This is a screenshot of our create form!

Fill out the form and click on the Add contact button to create a contact in the database. You should be redirected to the /contacts route, which doesn't have a view associated with it yet.

R: Implementing the Read Operation and Getting Data

Next, let's implement the read operation to get and display contacts data from our MySQL database.

Go to the app/Http/Controllers/ContactController.php file, locate the index() method and update it:

public function index()
{
    $contacts = Contact::all();
    return view('contacts.index', compact('contacts'));
}

Next, you need to create the index template. Create a resources/views/contacts/index.blade.php file:

$ touch index.blade.php

Open the resources/views/contacts/index.blade.php file and add the following code:

@extends('base')
@section('main')
<div class="row">
  <div class="col-sm-12">
    <h1 class="display-3">Contacts</h1>
    <table class="table table-striped">
      <thead>
        <tr>
          <td>ID</td>
          <td>Name</td>
          <td>Email</td>
          <td>Job Title</td>
          <td>City</td>
          <td>Country</td>
          <td colspan="2">Actions</td>
        </tr>
      </thead>
      <tbody>
        @foreach($contacts as $contact)
        <tr>
          <td>{{$contact->id}}</td>
          <td>{{$contact->first_name}} {{$contact->last_name}}</td>
          <td>{{$contact->email}}</td>
          <td>{{$contact->job_title}}</td>
          <td>{{$contact->city}}</td>
          <td>{{$contact->country}}</td>
          <td>
            <a href="{{ route('contacts.edit', $contact->id)}}" class="btn btn-primary">Edit</a>
          </td>
          <td>
            <form action="{{ route('contacts.destroy', $contact->id)}}" method="post">
              @csrf
              @method('DELETE')
              <button class="btn btn-danger" type="submit">Delete</button>
            </form>
          </td>
        </tr>
        @endforeach
      </tbody>
    </table>
  </div>
</div>
@endsection

U: Implementing the Update Operation

Next, we need to implement the update operation.
Go to the app/Http/Controllers/ContactController.php file, locate the edit($id) method and update it:

public function edit($id)
{
    $contact = Contact::find($id);
    return view('contacts.edit', compact('contact'));
}

Next, you need to implement the update() method:

public function update(Request $request, $id)
{
    $request->validate([
        'first_name'=>'required',
        'last_name'=>'required',
        'email'=>'required'
    ]);

    $contact = Contact::find($id);
    $contact->first_name = $request->get('first_name');
    $contact->last_name = $request->get('last_name');
    $contact->email = $request->get('email');
    $contact->job_title = $request->get('job_title');
    $contact->city = $request->get('city');
    $contact->country = $request->get('country');
    $contact->save();

    return redirect('/contacts')->with('success', 'Contact updated!');
}

Now, you need to add the edit template. Inside the resources/views/contacts/ folder, create an edit.blade.php file:

$ touch edit.blade.php

Open the resources/views/contacts/edit.blade.php file and add this code:

@extends('base')
@section('main')
<div class="row">
  <div class="col-sm-8 offset-sm-2">
    <h1 class="display-3">Update a contact</h1>
    @if ($errors->any())
    <div class="alert alert-danger">
      <ul>
        @foreach ($errors->all() as $error)
        <li>{{ $error }}</li>
        @endforeach
      </ul>
    </div>
    <br />
    @endif
    <form method="post" action="{{ route('contacts.update', $contact->id) }}">
      @method('PATCH')
      @csrf
      <div class="form-group">
        <label for="first_name">First Name:</label>
        <input type="text" class="form-control" name="first_name" value={{ $contact->first_name }} />
      </div>
      <div class="form-group">
        <label for="last_name">Last Name:</label>
        <input type="text" class="form-control" name="last_name" value={{ $contact->last_name }} />
      </div>
      <div class="form-group">
        <label for="email">Email:</label>
        <input type="text" class="form-control" name="email" value={{ $contact->email }} />
      </div>
      <div class="form-group">
        <label for="city">City:</label>
        <input type="text" class="form-control" name="city" value={{ $contact->city }} />
      </div>
      <div class="form-group">
        <label for="country">Country:</label>
        <input type="text" class="form-control" name="country" value={{ $contact->country }} />
      </div>
      <div class="form-group">
        <label for="job_title">Job Title:</label>
        <input type="text" class="form-control" name="job_title" value={{ $contact->job_title }} />
      </div>
      <button type="submit" class="btn btn-primary">Update</button>
    </form>
  </div>
</div>
@endsection

D: Implementing the Delete Operation

Finally, we'll proceed to implement the delete operation. Go to the app/Http/Controllers/ContactController.php file, locate the destroy() method and update it accordingly:

public function destroy($id)
{
    $contact = Contact::find($id);
    $contact->delete();
    return redirect('/contacts')->with('success', 'Contact deleted!');
}

You may notice that when we redirect to the /contacts route in our CRUD methods, we also pass a success message, but it doesn't appear in our index template. Let's change that!

Go to the resources/views/contacts/index.blade.php file and add the following code:

<div class="col-sm-12">
  @if(session()->get('success'))
  <div class="alert alert-success">
    {{ session()->get('success') }}
  </div>
  @endif
</div>

We also need to add a button that takes us to the create form. Add this code below the header:

<div>
  <a style="margin: 19px;" href="{{ route('contacts.create')}}" class="btn btn-primary">New contact</a>
</div>

This is a screenshot of the page after we created a contact:

Conclusion

We've reached the end of this tutorial.
We created a CRUD application with Laravel 5.8, PHP 7.1 and MySQL. Hope you enjoyed the tutorial and see you in the next one!

Group Replication – Consistent Reads Deep Dive

On previous posts about Group Replication consistency we:

  1. introduced consistency levels;
  2. explained how to configure the primary failover consistency;
  3. presented how to configure transaction consistency levels to achieve the consistency required by your applications.

In post 3 we presented the consistency levels EVENTUAL, BEFORE, AFTER and BEFORE_AND_AFTER; their scopes, SESSION and GLOBAL; and their context: whether they impact only the ongoing transaction or all concurrent transactions.…
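As a quick illustration of the session scope, the consistency level maps to the group_replication_consistency system variable (available in MySQL 8.0.14 and later); a minimal sketch, with a hypothetical orders table:

SET SESSION group_replication_consistency = 'BEFORE';
-- This read now waits until the member has applied all preceding transactions:
SELECT COUNT(*) FROM orders;
SET SESSION group_replication_consistency = 'EVENTUAL';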

How to Manage MySQL - for Oracle DBAs

Open source databases are quickly becoming mainstream, so migration from proprietary engines into open source engines is a kind of an industry trend now. It also means that we DBA’s often end up having multiple database backends to manage.

In the past few blog posts, my colleague Paul Namuag and I covered several aspects of migration from Oracle to Percona, MariaDB, and MySQL. The obvious goal of the migration is to get your application up and running more efficiently in the new database environment; however, it's crucial to ensure that your staff is ready to support it.


This blog covers the basic operations of MySQL with reference to similar tasks that you would perform daily in your Oracle environment. It provides you with a deep dive on different topics to save you time as you can relate to Oracle knowledge that you’ve already built over the years.

We will also talk about external command line tools that are missing from the default MySQL installation but are needed to perform daily operations efficiently. The open source version doesn't come with an equivalent of Oracle Cloud Control, for instance, so do check out ClusterControl if you are looking for something similar.

In this blog, we assume you have better knowledge of Oracle than of MySQL, and hence would like to know the correlation between the two. The examples are based on the Linux platform; however, you will find many similarities when managing MySQL on Windows.

How do I connect to MySQL?

Let’s start our journey with a very (seemingly) basic task. Actually, this is a kind of task which can cause some confusion due to different login concepts in Oracle and MySQL.

The equivalent of an sqlplus / as sysdba connection is the "mysql" terminal command with the flag -uroot. In the MySQL world, the superuser is called root. MySQL database users (including root) are defined by a name and the host from which they can connect.

The information about users and the hosts they can connect from is stored in the mysql.user table. With a connection attempt, MySQL checks if the client host, username and password match a row in that metadata table.
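For example, you can list the defined accounts and create a user that is only allowed to connect from a given subnet (the user name, password and subnet below are just illustrations):

SELECT user, host FROM mysql.user;
CREATE USER 'appuser'@'10.0.3.%' IDENTIFIED BY 'S3cretPass!';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'10.0.3.%';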

This is a bit of a different approach than in Oracle where we have a user name and password only, but those who are familiar with Oracle Connection Manager might find some similarities.

You will not find predefined TNS entries like in Oracle. Usually, for an admin connection, we need user, password and -h host flag. The default port is 3306 (like 1521 in Oracle) but this may vary on different setups.

By default, many installations will have root access connection from any machine (root@’%’) blocked, so you have to log in to the server hosting MySQL, typically via ssh.

Type the following:

mysql -u root

When the root password is not set this is enough. If the password is required then you should add the flag -p.

mysql -u root -p

You are now logged in to the mysql client (the equivalent of sqlplus) and will see a prompt, typically 'mysql>'.

Is MySQL up and running?

You can use the MySQL service startup script to find out if the server is running, and the ps command to see if mysql processes are up. Another alternative is mysqladmin, a utility used for performing administrative operations:

mysqladmin -u root -p status

On Debian:

/etc/init.d/mysql status

If you are using RedHat or Fedora then you can use the following script:

service mysqld status

Or

/etc/init.d/mysqld status

Or

systemctl status mysql.service

On MariaDB instances, you should look for the MariaDB service name.

systemctl status mariadb

What’s in this database?

As in Oracle, you can query the metadata objects to get information about database objects.

It’s common to use some shortcuts here, commands that help you to list objects or get DDL of the objects.

show databases;
use database_name;
show tables;
show table status;
show index from table_name;
show create table table_name;

Similar to Oracle, you can describe the table:

desc table_name;

Where is my data stored?

There is no dedicated internal storage like ASM in MySQL. All data files are placed in the regular OS mount points. With a default installation, you can find your data in:

/var/lib/mysql

The location is based on the variable datadir.

root@mysql-3:~# cat /etc/mysql/my.cnf | grep datadir
datadir=/var/lib/mysql

There you will see a directory for each database.

Depending on the version and storage engine (yes, there are a few of them here), the database’s directory may contain files of the format *.frm, which define the structure of each table within the database. For MyISAM tables, the data (*.MYD) and indexes (*.MYI) are stored within this directory as well.

InnoDB tables are stored in InnoDB tablespaces, each of which consists of one or more files and is similar to an Oracle tablespace. In a default installation, all InnoDB data and indexes for all databases on a MySQL server are held in one tablespace, consisting of one file: /var/lib/mysql/ibdata1. In most setups, you don’t manage tablespaces like in Oracle; the best practice is to keep them with autoextend on and max size unlimited.

root@mysql-3:~# cat /etc/mysql/my.cnf | grep innodb-data-file-path
innodb-data-file-path = ibdata1:100M:autoextend
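Note that many installations keep each InnoDB table in its own tablespace file (*.ibd in the database directory) instead of the shared ibdata1; you can check whether yours does with the innodb_file_per_table variable:

mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';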

InnoDB has log files, which are the equivalent of Oracle redo logs, allowing automatic crash recovery. By default there are two log files: /var/lib/mysql/ib_logfile0 and /var/lib/mysql/ib_logfile1. Undo data is held within the tablespace file.
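The size and number of these log files are controlled by server variables, which you can inspect in a way roughly comparable to checking redo log sizes in Oracle:

mysql> SHOW VARIABLES LIKE 'innodb_log_file%';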

root@galera-3:/var/lib/mysql# ls -rtla | grep logfile
-rw-rw---- 1 mysql mysql 268435456 Dec 15 00:59 ib_logfile1
-rw-rw---- 1 mysql mysql 268435456 Mar 6 11:45 ib_logfile0

Where is the metadata information?

There are no dba_*, user_*, or all_* types of views, but MySQL has internal metadata views.

Information_schema is defined in the SQL 2003 standard and is implemented by other major databases, e.g. SQL Server, PostgreSQL.

Since MySQL 5.0, the information_schema database has been available, containing data dictionary information. Historically, though, table metadata was actually stored in external .frm files. Finally, after many years, the .frm files are gone in version 8.0: the metadata is still visible through the information_schema database, but it is now stored in the InnoDB storage engine.

To see all actual views contained in the data dictionary within the mysql client, switch to information_schema database:

use information_schema;
show tables;

You can find additional information in the mysql database, which contains information about databases, events (MySQL jobs), plugins, replication, users, etc.

The number of views depends on the version and vendor.
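As a sketch of how an Oracle dba_tables query translates, the following lists the tables of one schema with their engine and approximate row counts (the schema name test is just an example):

mysql> SELECT table_name, engine, table_rows FROM information_schema.tables WHERE table_schema = 'test';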

Select * from v$session

Oracle’s select * from v$session is represented here by the command SHOW PROCESSLIST, which shows the list of threads.

mysql> SHOW PROCESSLIST;
+---------+------------------+------------------+--------------------+---------+--------+--------------------+------------------+-----------+---------------+
| Id      | User             | Host             | db                 | Command | Time   | State              | Info             | Rows_sent | Rows_examined |
+---------+------------------+------------------+--------------------+---------+--------+--------------------+------------------+-----------+---------------+
|       1 | system user      |                  | NULL               | Sleep   | 469264 | wsrep aborter idle | NULL             |         0 |             0 |
|       2 | system user      |                  | NULL               | Sleep   | 469264 | NULL               | NULL             |         0 |             0 |
|       3 | system user      |                  | NULL               | Sleep   | 469257 | NULL               | NULL             |         0 |             0 |
|       4 | system user      |                  | NULL               | Sleep   | 469257 | NULL               | NULL             |         0 |             0 |
|       6 | system user      |                  | NULL               | Sleep   | 469257 | NULL               | NULL             |         0 |             0 |
|      16 | maxscale         | 10.0.3.168:5914  | NULL               | Sleep   |      5 |                    | NULL             |         4 |             4 |
|      59 | proxysql-monitor | 10.0.3.168:6650  | NULL               | Sleep   |      7 |                    | NULL             |         0 |             0 |
|      81 | proxysql-monitor | 10.0.3.78:62896  | NULL               | Sleep   |      6 |                    | NULL             |         0 |             0 |
|    1564 | proxysql-monitor | 10.0.3.78:25064  | NULL               | Sleep   |      3 |                    | NULL             |         0 |             0 |
| 1822418 | cmon             | 10.0.3.168:41202 | information_schema | Sleep   |      0 |                    | NULL             |         0 |             8 |
| 1822631 | cmon             | 10.0.3.168:43254 | information_schema | Sleep   |      4 |                    | NULL             |         1 |             1 |
| 1822646 | cmon             | 10.0.3.168:43408 | information_schema | Sleep   |      0 |                    | NULL             |       464 |           464 |
| 2773260 | backupuser       | localhost        | mysql              | Query   |      0 | init               | SHOW PROCESSLIST |         0 |             0 |
+---------+------------------+------------------+--------------------+---------+--------+--------------------+------------------+-----------+---------------+
13 rows in set (0.00 sec)

It is based on information stored in the information_schema.processlist view; to see the threads of other users you need the PROCESS privilege. It can also help you check whether you are running up against the maximum number of connections.
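For example, to compare the current number of threads against the configured limit (the variable and status names are standard; the values will obviously differ per setup):

mysql> SHOW VARIABLES LIKE 'max_connections';
mysql> SHOW STATUS LIKE 'Threads_connected';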

Where is the alert log?

The location of the error log (the closest equivalent of the Oracle alert log) is defined in my.cnf and can be checked via the show variables command:

mysql> show variables like 'log_error';
+---------------+--------------------------+
| Variable_name | Value                    |
+---------------+--------------------------+
| log_error     | /var/lib/mysql/error.log |
+---------------+--------------------------+
1 row in set (0.00 sec)

Where is the list of the users and their permissions?

The information about users is stored in the mysql.user table, while the grants are stored in several places. MySQL user access is defined in:

mysql.columns_priv, mysql.tables_priv, mysql.db, mysql.user

The preferable way to list grants is to use pt-show-grants, a tool from the Percona Toolkit (a must-have for every MySQL DBA).

pt-show-grants --host localhost --user root --ask-pass

Alternatively, you can use the following query (created by Calvaldo):

SELECT
    CONCAT("`",gcl.Db,"`") AS 'Database(s) Affected',
    CONCAT("`",gcl.Table_name,"`") AS 'Table(s) Affected',
    gcl.User AS 'User-Account(s) Affected',
    IF(gcl.Host='%','ALL',gcl.Host) AS 'Remote-IP(s) Affected',
    CONCAT("GRANT ",UPPER(gcl.Column_priv)," (",GROUP_CONCAT(gcl.Column_name),") ",
           "ON `",gcl.Db,"`.`",gcl.Table_name,"` ",
           "TO '",gcl.User,"'@'",gcl.Host,"';") AS 'GRANT Statement (Reconstructed)'
FROM mysql.columns_priv gcl
GROUP BY CONCAT(gcl.Db,gcl.Table_name,gcl.User,gcl.Host)
/* SELECT * FROM mysql.columns_priv */

UNION

/* [Database.Table]-Specific Grants */
SELECT
    CONCAT("`",gtb.Db,"`") AS 'Database(s) Affected',
    CONCAT("`",gtb.Table_name,"`") AS 'Table(s) Affected',
    gtb.User AS 'User-Account(s) Affected',
    IF(gtb.Host='%','ALL',gtb.Host) AS 'Remote-IP(s) Affected',
    CONCAT("GRANT ",UPPER(gtb.Table_priv)," ",
           "ON `",gtb.Db,"`.`",gtb.Table_name,"` ",
           "TO '",gtb.User,"'@'",gtb.Host,"';") AS 'GRANT Statement (Reconstructed)'
FROM mysql.tables_priv gtb
WHERE gtb.Table_priv!=''
/* SELECT * FROM mysql.tables_priv */

UNION

/* Database-Specific Grants */
SELECT
    CONCAT("`",gdb.Db,"`") AS 'Database(s) Affected',
    "ALL" AS 'Table(s) Affected',
    gdb.User AS 'User-Account(s) Affected',
    IF(gdb.Host='%','ALL',gdb.Host) AS 'Remote-IP(s) Affected',
    CONCAT('GRANT ',
           CONCAT_WS(',',
               IF(gdb.Select_priv='Y','SELECT',NULL),
               IF(gdb.Insert_priv='Y','INSERT',NULL),
               IF(gdb.Update_priv='Y','UPDATE',NULL),
               IF(gdb.Delete_priv='Y','DELETE',NULL),
               IF(gdb.Create_priv='Y','CREATE',NULL),
               IF(gdb.Drop_priv='Y','DROP',NULL),
               IF(gdb.Grant_priv='Y','GRANT',NULL),
               IF(gdb.References_priv='Y','REFERENCES',NULL),
               IF(gdb.Index_priv='Y','INDEX',NULL),
               IF(gdb.Alter_priv='Y','ALTER',NULL),
               IF(gdb.Create_tmp_table_priv='Y','CREATE TEMPORARY TABLES',NULL),
               IF(gdb.Lock_tables_priv='Y','LOCK TABLES',NULL),
               IF(gdb.Create_view_priv='Y','CREATE VIEW',NULL),
               IF(gdb.Show_view_priv='Y','SHOW VIEW',NULL),
               IF(gdb.Create_routine_priv='Y','CREATE ROUTINE',NULL),
               IF(gdb.Alter_routine_priv='Y','ALTER ROUTINE',NULL),
               IF(gdb.Execute_priv='Y','EXECUTE',NULL),
               IF(gdb.Event_priv='Y','EVENT',NULL),
               IF(gdb.Trigger_priv='Y','TRIGGER',NULL)
           ),
           " ON `",gdb.Db,"`.* TO '",gdb.User,"'@'",gdb.Host,"';"
    ) AS 'GRANT Statement (Reconstructed)'
FROM mysql.db gdb
WHERE gdb.Db != ''
/* SELECT * FROM mysql.db */

UNION

/* User-Specific Grants */
SELECT
    "ALL" AS 'Database(s) Affected',
    "ALL" AS 'Table(s) Affected',
    gus.User AS 'User-Account(s) Affected',
    IF(gus.Host='%','ALL',gus.Host) AS 'Remote-IP(s) Affected',
    CONCAT("GRANT ",
           IF((gus.Select_priv='N')&(gus.Insert_priv='N')&(gus.Update_priv='N')&(gus.Delete_priv='N')&(gus.Create_priv='N')&(gus.Drop_priv='N')&(gus.Reload_priv='N')&(gus.Shutdown_priv='N')&(gus.Process_priv='N')&(gus.File_priv='N')&(gus.References_priv='N')&(gus.Index_priv='N')&(gus.Alter_priv='N')&(gus.Show_db_priv='N')&(gus.Super_priv='N')&(gus.Create_tmp_table_priv='N')&(gus.Lock_tables_priv='N')&(gus.Execute_priv='N')&(gus.Repl_slave_priv='N')&(gus.Repl_client_priv='N')&(gus.Create_view_priv='N')&(gus.Show_view_priv='N')&(gus.Create_routine_priv='N')&(gus.Alter_routine_priv='N')&(gus.Create_user_priv='N')&(gus.Event_priv='N')&(gus.Trigger_priv='N')&(gus.Create_tablespace_priv='N')&(gus.Grant_priv='N'), "USAGE",
           IF((gus.Select_priv='Y')&(gus.Insert_priv='Y')&(gus.Update_priv='Y')&(gus.Delete_priv='Y')&(gus.Create_priv='Y')&(gus.Drop_priv='Y')&(gus.Reload_priv='Y')&(gus.Shutdown_priv='Y')&(gus.Process_priv='Y')&(gus.File_priv='Y')&(gus.References_priv='Y')&(gus.Index_priv='Y')&(gus.Alter_priv='Y')&(gus.Show_db_priv='Y')&(gus.Super_priv='Y')&(gus.Create_tmp_table_priv='Y')&(gus.Lock_tables_priv='Y')&(gus.Execute_priv='Y')&(gus.Repl_slave_priv='Y')&(gus.Repl_client_priv='Y')&(gus.Create_view_priv='Y')&(gus.Show_view_priv='Y')&(gus.Create_routine_priv='Y')&(gus.Alter_routine_priv='Y')&(gus.Create_user_priv='Y')&(gus.Event_priv='Y')&(gus.Trigger_priv='Y')&(gus.Create_tablespace_priv='Y')&(gus.Grant_priv='Y'), "ALL PRIVILEGES",
           CONCAT_WS(',',
               IF(gus.Select_priv='Y','SELECT',NULL),
               IF(gus.Insert_priv='Y','INSERT',NULL),
               IF(gus.Update_priv='Y','UPDATE',NULL),
               IF(gus.Delete_priv='Y','DELETE',NULL),
               IF(gus.Create_priv='Y','CREATE',NULL),
               IF(gus.Drop_priv='Y','DROP',NULL),
               IF(gus.Reload_priv='Y','RELOAD',NULL),
               IF(gus.Shutdown_priv='Y','SHUTDOWN',NULL),
               IF(gus.Process_priv='Y','PROCESS',NULL),
               IF(gus.File_priv='Y','FILE',NULL),
               IF(gus.References_priv='Y','REFERENCES',NULL),
               IF(gus.Index_priv='Y','INDEX',NULL),
               IF(gus.Alter_priv='Y','ALTER',NULL),
               IF(gus.Show_db_priv='Y','SHOW DATABASES',NULL),
               IF(gus.Super_priv='Y','SUPER',NULL),
               IF(gus.Create_tmp_table_priv='Y','CREATE TEMPORARY TABLES',NULL),
               IF(gus.Lock_tables_priv='Y','LOCK TABLES',NULL),
               IF(gus.Execute_priv='Y','EXECUTE',NULL),
               IF(gus.Repl_slave_priv='Y','REPLICATION SLAVE',NULL),
               IF(gus.Repl_client_priv='Y','REPLICATION CLIENT',NULL),
               IF(gus.Create_view_priv='Y','CREATE VIEW',NULL),
               IF(gus.Show_view_priv='Y','SHOW VIEW',NULL),
               IF(gus.Create_routine_priv='Y','CREATE ROUTINE',NULL),
               IF(gus.Alter_routine_priv='Y','ALTER ROUTINE',NULL),
               IF(gus.Create_user_priv='Y','CREATE USER',NULL),
               IF(gus.Event_priv='Y','EVENT',NULL),
               IF(gus.Trigger_priv='Y','TRIGGER',NULL),
               IF(gus.Create_tablespace_priv='Y','CREATE TABLESPACE',NULL)
           ))),
           " ON *.* TO '",gus.User,"'@'",gus.Host,"' REQUIRE ",
           CASE gus.ssl_type
               WHEN 'ANY' THEN "SSL "
               WHEN 'X509' THEN "X509 "
               WHEN 'SPECIFIED' THEN CONCAT_WS("AND ",
                   IF((LENGTH(gus.ssl_cipher)>0),CONCAT("CIPHER '",CONVERT(gus.ssl_cipher USING utf8),"' "),NULL),
                   IF((LENGTH(gus.x509_issuer)>0),CONCAT("ISSUER '",CONVERT(gus.ssl_cipher USING utf8),"' "),NULL),
                   IF((LENGTH(gus.x509_subject)>0),CONCAT("SUBJECT '",CONVERT(gus.ssl_cipher USING utf8),"' "),NULL)
               )
               ELSE "NONE "
           END,
           "WITH ",
           IF(gus.Grant_priv='Y',"GRANT OPTION ",""),
           "MAX_QUERIES_PER_HOUR ",gus.max_questions," ",
           "MAX_CONNECTIONS_PER_HOUR ",gus.max_connections," ",
           "MAX_UPDATES_PER_HOUR ",gus.max_updates," ",
           "MAX_USER_CONNECTIONS ",gus.max_user_connections,
           ";"
    ) AS 'GRANT Statement (Reconstructed)'
FROM mysql.user gus;

How to create a MySQL user

The ‘create user’ procedure is similar to Oracle’s. The simplest example could be:

CREATE USER 'username'@'hostname' IDENTIFIED BY 'password';
GRANT privilege_name ON *.* TO 'username'@'hostname';

The option to create the user and grant privileges in one line with:

GRANT privilege_name ON *.* TO 'username'@'hostname' identified by 'password';

has been removed in MySQL 8.0.
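As a concrete sketch, with a purely illustrative user name, host pattern, and privilege list, the 8.0-style two-step approach looks like this:

mysql> CREATE USER 'app_user'@'10.0.3.%' IDENTIFIED BY 'StrongPassword1';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app_user'@'10.0.3.%';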

How do I start and stop MySQL?

You can stop and start MySQL with the service management commands of your operating system.

The actual command depends on the Linux distribution and the service name.

Below you can find an example with the service name mysqld.

Ubuntu:

/etc/init.d/mysqld start
/etc/init.d/mysqld stop
/etc/init.d/mysqld restart

RedHat/CentOS:

service mysqld start
service mysqld stop
service mysqld restart

systemctl start mysqld.service
systemctl stop mysqld.service
systemctl restart mysqld.service

Where is the MySQL Server Configuration data?

The configuration is stored in the my.cnf file.

Until version 8.0, any dynamic setting change that should survive a restart required a manual update of the my.cnf file. Since 8.0, similar to Oracle’s scope=both, you can persist changes with the PERSIST option:

mysql> SET PERSIST max_connections = 1000;
mysql> SET @@PERSIST.max_connections = 1000;
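In 8.0 you can also verify what has been persisted; such settings are written to mysqld-auto.cnf in the data directory and exposed via performance_schema:

mysql> SELECT * FROM performance_schema.persisted_variables;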

For older versions use:

mysql> SET GLOBAL max_connections = 1000;

$ vi /etc/mysql/my.cnf
max_connections = 1000

How do I backup MySQL?

There are two main ways to take a MySQL backup.

For smaller databases or smaller selective backups, you can use the mysqldump command.

Database backup with mysqldump (logical backup):

mysqldump -uuser -p --databases db_name --routines --events --single-transaction | gzip > db_name_backup.sql.gz

xtrabackup, mariabackup (hot binary backup)

The preferred method is to use xtrabackup or mariabackup, external tools that take hot binary backups.

Oracle offers hot binary backups in the paid MySQL Enterprise Edition (through MySQL Enterprise Backup).

mariabackup --user=root --password=PASSWORD --backup --target-dir=/u01/backups/

Stream backup to other server

Start a listener on the external server on the preferred port (in this example, 1984):

nc -l 1984 | pigz -cd - | pv | xbstream -x -C /u01/backups

Run the backup and transfer it to the external host:

innobackupex --user=root --password=PASSWORD --stream=xbstream /var/tmp | pigz | pv | nc external_host.com 1984

Copy user permissions

You will often need to copy user permissions and transfer them to other servers.

The recommended way to do this is to use pt-show-grants.

pt-show-grants > /u01/backups/grants.sql
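Because pt-show-grants simply prints GRANT statements, you can also pipe its output straight into another server instead of a file (the target host below is a placeholder):

pt-show-grants | mysql -h target_host.com -u root -p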

How do I restore MySQL?

Logical backup restore

mysqldump creates an SQL file, which can be executed with the source command.

To keep a log file of the execution, use the tee command.

mysql> tee dump.log
mysql> source mysqldump.sql
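If the dump was compressed with gzip, as in the backup example above, you can stream it directly into the client instead of using source (the file name is the one from that example):

gunzip < db_name_backup.sql.gz | mysql -u root -p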

Binary backup restore (xtrabackup/mariabackup)

To restore MySQL from a binary backup, you first need to prepare the backup by applying the log files, and then copy the files back into the data directory.

You can compare this process to restore and recover in Oracle.

innobackupex --apply-log --use-memory=[value in MB or GB] /var/lib/data
xtrabackup --copy-back --target-dir=/var/lib/data

Hopefully, these tips give a good overview of how to perform basic administrative tasks.

Tags:  oracle MySQL management migration

FOSSASIA Summit 2019 with MySQL

The MySQL Community team, together with the APAC MySQL team, will be part of the FOSSASIA Summit this week. We are a Bronze sponsor with a MySQL booth in the exhibition area. We will also give several talks in the Database track on Saturday, March 16; please find some of them below, together with the details about the show:

  • Name: FOSSASIA Summit 2019
  • Place: Singapore
  • Date: March 14-17, 2019
  • MySQL talks in Database Track on Mar 16 in Training Room 2-1:
    • "Replication: What's New in MySQL 8.0" by Narendra S Chauhan, the Principal Technical Staff @Replication Development team. The talk is scheduled for 10:55-11:20.
    • "Upgrading to MySQL 8.0+, a more automated experience" by Nisha Gopalakrishnan, the Principal Technical Staff @MySQL Server Development team. The talk is scheduled for 11:25-11:50.
    • "Performance Schema - A great insight of running MySQL Server" by Mayank Prasad, the Principal Technical Staff @MySQL Development team. The talk is scheduled for 14:00-14:25
    • "Docker Compose Setup for MySQL InnoDB Cluster" by Balasubramanian Kandasamy, the Senior Software Development Manager in MySQL Release Engineering team. Talk is scheduled for 14:30-14:55.
    • "Utilize and take advantage of Optimizer Features in MySQL 8.0" by Amit Bhattacharya, the Senior SW Development Manager @MySQL Optimizer, InnoDB, Runtime & Server Testing team. Talk is scheduled for 15:00-15:25.
    • "MySQL NDB Cluster: Set up a shared nothing high availability cluster in 15 minutes" by Saroj Tripathy, the QA Architect for MySQL Cluster team. Talk is scheduled for 15:45-16:10.
    • "Using JSON in MySQL to get the best of both worlds (JSON + SQL)" by Chaithra Gopalareddy, the Senior Principal Member Technical Staff @MySQL Development. Talk is scheduled for 16:15-16:40.
    • "Improving Database Security with MySQL 8.0" by Harin Nalin Vadodaria, the Principal MTS with MySQL engineering team. Talk is scheduled for 16:45-17:10.
    • and more in the Database track...

We are looking forward to meeting & talking to you @FOSSASIA Summit 2019!!!
