Planet MySQL

What May Cause MySQL ERROR 1213

Probably all of us, MySQL users, DBAs and developers, have seen error 1213 more than once, in one context or another:
mysql> select * from t1;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

The first thing that comes to mind in this case is: "OK, we have an InnoDB deadlock, let's check the details", followed by the SHOW ENGINE INNODB STATUS check, like this:
mysql> show engine innodb status\G
*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
2018-12-08 17:41:11 0x7f2f8b8db700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 12 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 59 srv_active, 0 srv_shutdown, 14824 srv_idle
srv_master_thread log flush and writes: 14882
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 326
OS WAIT ARRAY INFO: signal count 200
RW-shared spins 0, rounds 396, OS waits 195
RW-excl spins 0, rounds 120, OS waits 4
RW-sx spins 0, rounds 0, OS waits 0
Spin rounds per wait: 396.00 RW-shared, 120.00 RW-excl, 0.00 RW-sx
------------
TRANSACTIONS
------------
Trx id counter 14960
Purge done for trx's n:o < 14954 undo n:o < 0 state: running but idle
History list length 28
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 421316960193880, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 421316960192752, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
--------
FILE I/O
--------
...

Now, what if you get output like the one above, without any LATEST DETECTED DEADLOCK section? I've seen people wondering how this is even possible and trying to find some suspicious bug somewhere...

Do not be in a hurry - time to recall that there are actually at least 4 quite common reasons to get error 1213 in modern (5.5+) versions of MySQL, MariaDB and friends:
  1. InnoDB deadlock happened
  2. Metadata deadlock happened
  3. If you are lucky enough to use Galera cluster, Galera conflict happened
  4. Deadlock happened in some other storage engine (for example, MyRocks)
I am not lucky enough to use MySQL's group replication yet, but I know that conflicts there are also possible. I am just not sure if error 1213 is also reported in that case. Feel free to check with a test case similar to the one I've used for Galera below.

I also suspect deadlocks are possible with other engines. As a bonus, I'll demonstrate a deadlock with MyRocks as well.

Let's reproduce these cases one by one and check how to get more information on each of them. In all cases it's enough to have at most two InnoDB tables with just two rows each:
mysql> show create table t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `id` int(11) NOT NULL,
  `c1` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> show create table t2\G
*************************** 1. row ***************************
       Table: t2
Create Table: CREATE TABLE `t2` (
  `id` int(11) NOT NULL,
  `c1` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> select * from t1;
+----+------+
| id | c1   |
+----+------+
|  1 |    1 |
|  2 |    2 |
+----+------+
2 rows in set (0.00 sec)

mysql> select * from t2;
+----+------+
| id | c1   |
+----+------+
|  1 |    1 |
|  2 |    2 |
+----+------+
2 rows in set (0.00 sec)

We'll need two sessions, surely.

InnoDB Deadlock

With InnoDB and the tables above it's really easy to end up with a deadlock. In the first session execute the following:
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from t1 where id = 1 for update;
+----+------+
| id | c1   |
+----+------+
|  1 |    1 |
+----+------+
1 row in set (0.00 sec)

In the second session execute:
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from t1 where id = 2 for update;
+----+------+
| id | c1   |
+----+------+
|  2 |    2 |
+----+------+
1 row in set (0.02 sec)

Now in the first session try to access the row with id=2, asking for an incompatible lock:
mysql> select * from t1 where id = 2 for update;

This statement hangs waiting for a lock (for up to innodb_lock_wait_timeout seconds). Try to access the row with id=1 with an incompatible lock in the second session, and you'll get the deadlock error:
mysql> select * from t1 where id = 1 for update;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

At this moment the SELECT in the first transaction returns data:
+----+------+
| id | c1   |
+----+------+
|  2 |    2 |
+----+------+
1 row in set (5.84 sec)

It's that simple: one table and two rows are enough. We can get the details in the output of SHOW ENGINE INNODB STATUS:
...
------------------------
LATEST DETECTED DEADLOCK
------------------------
2018-12-08 18:32:59 0x7f2f8b8db700
*** (1) TRANSACTION:
TRANSACTION 15002, ACTIVE 202 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 8, OS thread handle 139842181244672, query id 8545 localhost root statistics
select * from t1 where id = 2 for update
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 94 page no 3 n bits 72 index PRIMARY of table `test`.`t1` trx id 15002 lock_mode X locks rec but not gap waiting
*** (2) TRANSACTION:
TRANSACTION 15003, ACTIVE 143 sec starting index read
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 9, OS thread handle 139842181510912, query id 8546 localhost root statistics
select * from t1 where id = 1 for update
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 94 page no 3 n bits 72 index PRIMARY of table `test`.`t1` trx id 15003 lock_mode X locks rec but not gap
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 94 page no 3 n bits 72 index PRIMARY of table `test`.`t1` trx id 15003 lock_mode X locks rec but not gap waiting
*** WE ROLL BACK TRANSACTION (2)
------------
TRANSACTIONS
------------
...

In the case above I've used Percona Server 5.7.24-26 (why not). Details of the output may vary depending on the version (and the bugs it has :). If you use MariaDB 5.5+, in the case of an InnoDB deadlock the special innodb_deadlocks status variable is also incremented.
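For example, on MariaDB you can check that counter like this (a quick illustration; the variable is MariaDB-specific, so don't look for it in Oracle MySQL or Percona Server):

MariaDB [test]> show global status like 'innodb_deadlocks';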

Metadata Deadlock

Unlike with InnoDB deadlocks, chances that you've seen deadlocks with metadata locks involved are low. One may spend notable time trying to reproduce such a deadlock, but (as usual) a quick check of the MySQL bugs database helps to find an easy-to-reproduce case. I mean Bug #65890 - "Deadlock that is not a deadlock with transaction and lock tables".

So, let's try the following scenario with two sessions and our InnoDB tables, t1 and t2. In one session:
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from t2 for update;
+----+------+
| id | c1   |
+----+------+
|  1 |    1 |
|  2 |    2 |
+----+------+
2 rows in set (0.00 sec)

In another session:
mysql> lock tables t1 write, t2 write;

It hangs, waiting for up to lock_wait_timeout seconds. We can check what happens with metadata locks using the performance_schema.metadata_locks table (as we use MySQL or Percona Server 5.7+; more on the setup and the alternatives for MariaDB etc. here and there). In the first session:
mysql> select * from performance_schema.metadata_locks;
+-------------+--------------------+----------------+-----------------------+----------------------+---------------+-------------+--------+-----------------+----------------+
| OBJECT_TYPE | OBJECT_SCHEMA      | OBJECT_NAME    | OBJECT_INSTANCE_BEGIN | LOCK_TYPE            | LOCK_DURATION | LOCK_STATUS | SOURCE | OWNER_THREAD_ID | OWNER_EVENT_ID |
+-------------+--------------------+----------------+-----------------------+----------------------+---------------+-------------+--------+-----------------+----------------+
| TABLE       | test               | t2             |       139841686765904 | SHARED_WRITE         | TRANSACTION   | GRANTED     |        |              45 |           2850 |
| GLOBAL      | NULL               | NULL           |       139841688088672 | INTENTION_EXCLUSIVE  | STATEMENT     | GRANTED     |        |              46 |            205 |
| SCHEMA      | test               | NULL           |       139841688088912 | INTENTION_EXCLUSIVE  | TRANSACTION   | GRANTED     |        |              46 |            205 |
| TABLE       | test               | t1             |       139841688088992 | SHARED_NO_READ_WRITE | TRANSACTION   | GRANTED     |        |              46 |            207 |
| TABLE       | test               | t2             |       139841688089072 | SHARED_NO_READ_WRITE | TRANSACTION   | PENDING     |        |              46 |            208 |
| TABLE       | performance_schema | metadata_locks |       139841686219040 | SHARED_READ          | TRANSACTION   | GRANTED     |        |              45 |           3003 |
+-------------+--------------------+----------------+-----------------------+----------------------+---------------+-------------+--------+-----------------+----------------+
6 rows in set (0.00 sec)

As soon as we try this in the first session:
mysql> select * from t1;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

we get the same deadlock error 1213 and the LOCK TABLES in the second session completes. We can find nothing about this deadlock in the output of SHOW ENGINE INNODB STATUS (as shared at the beginning of this post). I am also not aware of any status variable to count metadata deadlocks.

You can find some useful information about metadata deadlocks in the manual.
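If the sys schema is installed (it is by default in MySQL and Percona Server 5.7+), the sys.schema_table_lock_waits view gives a more readable picture of who is waiting for and who is holding a metadata lock. A minimal check, assuming the default sys schema is present:

mysql> select * from sys.schema_table_lock_waits\G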

Galera Conflict

For simplicity I'll use MariaDB 10.1.x and a simple two-node setup on the same box, as I described here. I'll start the first node as a new cluster and create the tables for this test:
openxs@ao756:~/dbs/maria10.1$ bin/mysqld_safe --defaults-file=/home/openxs/galera/mynode1.cnf --wsrep-new-cluster &
[1] 13022
openxs@ao756:~/dbs/maria10.1$ 181208 20:40:52 mysqld_safe Logging to '/tmp/mysql-node1.err'.
181208 20:40:52 mysqld_safe Starting mysqld daemon with databases from /home/openxs/galera/node1

openxs@ao756:~/dbs/maria10.1$ bin/mysql  --socket=/tmp/mysql-node1.sock test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.1.34-MariaDB Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [test]> drop table t1, t2;
ERROR 1051 (42S02): Unknown table 'test.t2'
MariaDB [test]> create table t1(id int, c1 int, primary key(id));
Query OK, 0 rows affected (0.29 sec)

MariaDB [test]> create table t2(id int, c1 int, primary key(id));
Query OK, 0 rows affected (0.22 sec)

MariaDB [test]> insert into t1 values (1,1), (2,2);
Query OK, 2 rows affected (0.07 sec)
Records: 2  Duplicates: 0  Warnings: 0

MariaDB [test]> insert into t2 values (1,1), (2,2);
Query OK, 2 rows affected (0.18 sec)
Records: 2  Duplicates: 0  Warnings: 0

Then I'll start the second node and make sure it has joined the cluster and has the same data:
openxs@ao756:~/dbs/maria10.1$ bin/mysqld_safe --defaults-file=/home/openxs/galera/mynode2.cnf &
[2] 15110
openxs@ao756:~/dbs/maria10.1$ 181208 20:46:11 mysqld_safe Logging to '/tmp/mysql-node2.err'.
181208 20:46:11 mysqld_safe Starting mysqld daemon with databases from /home/openxs/galera/node2

openxs@ao756:~/dbs/maria10.1$ bin/mysql --socket=/tmp/mysql-node2.sock test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.1.34-MariaDB Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [test]> show status like 'wsrep_cluster%';
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id    | 4                                    |
| wsrep_cluster_size       | 2                                    |
| wsrep_cluster_state_uuid | b1d227b1-0211-11e6-8ce0-3644ad2b03dc |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+
4 rows in set (0.04 sec)

MariaDB [test]> select * from t2;
+----+------+
| id | c1   |
+----+------+
|  1 |    1 |
|  2 |    2 |
+----+------+
2 rows in set (0.02 sec)

Now we are ready to try to provoke a Galera conflict. For this we have to try to update the same data in transactions on two different nodes. In one session connected to node1:
MariaDB [test]> select @@wsrep_node_name;
+-------------------+
| @@wsrep_node_name |
+-------------------+
| node1             |
+-------------------+
1 row in set (0.00 sec)

MariaDB [test]> start transaction;
Query OK, 0 rows affected (0.00 sec)

MariaDB [test]> update test.t1 set c1 = 5 where id=1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

In another session connected to the other node:

MariaDB [test]> select @@wsrep_node_name;
+-------------------+
| @@wsrep_node_name |
+-------------------+
| node2             |
+-------------------+
1 row in set (0.00 sec)

MariaDB [test]> start transaction;
Query OK, 0 rows affected (0.00 sec)

MariaDB [test]> update test.t1 set c1 = 6 where id=1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Now in the first session we can COMMIT successfully:
MariaDB [test]> commit;
Query OK, 0 rows affected (0.12 sec)

But if we try to COMMIT in the second:
MariaDB [test]> commit;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

We get that same error 1213 about the deadlock. Surely you'll see nothing about this deadlock in the INNODB STATUS output, as it was NOT an InnoDB deadlock but a Galera conflict. Check these status variables on node2:
MariaDB [test]> show status like 'wsrep_local%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_bf_aborts      | 1                                    |
| wsrep_local_cached_downto  | 75                                   |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_commits        | 0                                    |
| wsrep_local_index          | 0                                    |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.000000                             |
| wsrep_local_recv_queue_max | 1                                    |
| wsrep_local_recv_queue_min | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.000000                             |
| wsrep_local_send_queue_max | 1                                    |
| wsrep_local_send_queue_min | 0                                    |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| wsrep_local_state_uuid     | b1d227b1-0211-11e6-8ce0-3644ad2b03dc |
+----------------------------+--------------------------------------+
17 rows in set (0.01 sec)

If wsrep_local_bf_aborts > 0, you had conflicts and a local transaction was rolled back to resolve them. We can see that the remote one wins; on node2:
MariaDB [test]> select * from t1;
+----+------+
| id | c1   |
+----+------+
|  1 |    5 |
|  2 |    2 |
+----+------+
2 rows in set (0.00 sec)

To summarize, in Galera the "first commit wins" and the local transaction involved in a conflict is always the loser. You can get a lot of information about conflicts in the error log if you enable the conflict logging features through wsrep_log_conflicts and cert.log_conflicts. See this fine manual for details.
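As a quick sketch of how to turn that logging on (assuming your version allows setting these dynamically; otherwise put them into the configuration file and restart):

MariaDB [test]> set global wsrep_log_conflicts=ON;
MariaDB [test]> set global wsrep_provider_options='cert.log_conflicts=ON';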

MyRocks Deadlock

We can easily check how deadlocks are processed by MyRocks by just loading the plugin for the engine, converting the tables to MyRocks and trying the same InnoDB scenario with the same Percona Server we used initially. But first, if you use Percona binaries, you have to install a separate package:
openxs@ao756:~$ dpkg -l | grep rocksdb
openxs@ao756:~$ sudo apt-get install percona-server-rocksdb-5.7
[sudo] password for openxs:
Reading package lists... Done
Building dependency tree
...
Unpacking percona-server-rocksdb-5.7 (5.7.24-26-1.trusty) ...
Setting up percona-server-rocksdb-5.7 (5.7.24-26-1.trusty) ...


 * This release of Percona Server is distributed with RocksDB storage engine.
 * Run the following script to enable the RocksDB storage engine in Percona Server:

        ps-admin --enable-rocksdb -u <mysql_admin_user> -p[mysql_admin_pass] [-S <socket>] [-h <host> -P <port>]

Percona's manual has a lot more details and relies on the separate ps-admin script, but basically you have to INSTALL the PLUGINs like this (check the script's code):
mysql> INSTALL PLUGIN ROCKSDB SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.86 sec)

mysql> INSTALL PLUGIN ROCKSDB_CFSTATS SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.06 sec)

mysql> INSTALL PLUGIN ROCKSDB_DBSTATS SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.08 sec)

mysql> INSTALL PLUGIN ROCKSDB_PERF_CONTEXT SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.05 sec)

mysql> INSTALL PLUGIN ROCKSDB_PERF_CONTEXT_GLOBAL SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.06 sec)

mysql> INSTALL PLUGIN ROCKSDB_CF_OPTIONS SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.05 sec)

mysql> INSTALL PLUGIN ROCKSDB_GLOBAL_INFO SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.05 sec)

mysql> INSTALL PLUGIN ROCKSDB_COMPACTION_STATS SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.05 sec)

mysql> INSTALL PLUGIN ROCKSDB_DDL SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.06 sec)

mysql> INSTALL PLUGIN ROCKSDB_INDEX_FILE_MAP SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.05 sec)

mysql> INSTALL PLUGIN ROCKSDB_LOCKS SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.05 sec)

mysql> INSTALL PLUGIN ROCKSDB_TRX SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.05 sec)

mysql> INSTALL PLUGIN ROCKSDB_DEADLOCK SONAME 'ha_rocksdb.so';
Query OK, 0 rows affected (0.06 sec)

Then check that the engine is there and convert the tables:
mysql> show engines;
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| Engine             | Support | Comment                                                                    | Transactions | XA   | Savepoints |
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| ROCKSDB            | YES     | RocksDB storage engine                                                     | YES          | YES  | YES        |
...

mysql> alter table t1 engine=rocksdb;
Query OK, 2 rows affected (0.64 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> alter table t2 engine=rocksdb;
Query OK, 2 rows affected (0.58 sec)
Records: 2  Duplicates: 0  Warnings: 0

Now we are ready to try the same InnoDB scenario. Just note that the lock wait timeout for MyRocks is defined by rocksdb_lock_wait_timeout, which is small by default (1 second), so you may have to increase it first. You also have to set rocksdb_deadlock_detect to ON (as it's OFF by default):
mysql> set global rocksdb_lock_wait_timeout=50;
Query OK, 0 rows affected (0.00 sec)

mysql> set global rocksdb_deadlock_detect=ON;
Query OK, 0 rows affected (0.00 sec)

mysql> \r
Connection id:    14
Current database: test

mysql> start transaction;
Query OK, 0 rows affected (0.02 sec)

mysql> select * from t1 where id = 1 for update;
+----+------+
| id | c1   |
+----+------+
|  1 |    1 |
+----+------+
1 row in set (0.00 sec)

Then in the second session:
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from t1 where id = 2 for update;
+----+------+
| id | c1   |
+----+------+
|  2 |    2 |
+----+------+
1 row in set (0.00 sec)

In the first:
mysql> select * from t1 where id = 2 for update;

and in the second we get the deadlock error:
mysql> select * from t1 where id = 1 for update;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
mysql> show global status like '%deadlock%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| rocksdb_row_lock_deadlocks | 1     |
+----------------------------+-------+
1 row in set (0.00 sec)

Note that MyRocks has a status variable to count deadlocks. Also note that Percona Server still does NOT seem to support the SHOW ENGINE ROCKSDB TRANSACTION STATUS statement that is available upstream:
mysql> show engine rocksdb transaction status\G
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'transaction status' at line 1

I was not able to find a bug about this (sorry if I missed it), so I just reported a new task in Percona's JIRA: PS-5114 - "Add support for SHOW ENGINE ROCKSDB TRANSACTION STATUS".
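In the meantime, since we installed the ROCKSDB_DEADLOCK plugin above, one alternative worth trying is to query the corresponding information_schema table right after the deadlock happens (the column layout varies between versions and it may be empty unless deadlock tracking is kept, so treat this as a sketch):

mysql> select * from information_schema.rocksdb_deadlock;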

That's probably more than enough for a single blog post (that is mostly NOT about bugs). One day I'll refresh my knowledge of MyRocks and maybe write more about deadlock troubleshooting there.

To summarize: do not be surprised if you can not find anything in the INNODB STATUS output when you get error 1213 - there are at least three more reasons for this error to be reported, as explained above. You should know your configuration and use several other commands and sources of information to pinpoint what exactly happened and why.

Understanding THL, Events and Storage: Part 1

When Tungsten Replicator extracts data, the information that has been extracted is written down into the Tungsten History Log, or THL. These files are in a specific format and they are used to store all of the extracted information in a format that can easily be used to recreate and generate data in a target.

Each transaction from the source is written into the THL as an event, so within a single THL file there will be one or more events stored. For each event, we record information about the overall transaction, as well as information about the transaction itself. That event can contain one or more statements, or rows, or both. Because we don't want an ever increasing single file, the replicator will also divide the THL into multiple files to make the data easier to manage.

We’ll get into the details soon; until then, let’s start by looking at the basics of the THL: files, sequence numbers, and how to select them.

The simplest way to look at the files is, first of all, to start with the thl command. This provides a simple interface for looking at the THL data and then understanding its contents.

Let’s start by getting a list of the THL files and the events which we can do with the index command:

$ thl index
LogIndexEntry thl.data.0000000001(0:295)
LogIndexEntry thl.data.0000000002(296:591)

This shows us that there are two files, holding events 0-295 and 296-591 respectively.

We can look inside using the list command of thl. This supports a number of different selection mechanisms; first, you can select a single item:

$ thl list -seqno 1
SEQ# = 1 / FRAG# = 0 (last frag)
- TIME = 2018-12-06 12:44:10.0
- EPOCH# = 0
- EVENTID = mysql-bin.000108:0000000000001574;-1
- SOURCEID = ubuntu
- METADATA = [mysql_server_id=1;dbms_type=mysql;tz_aware=true;strings=utf8;service=alpha;shard=msg;tungsten_filter_columnname=true;tungsten_filter_primarykey=true;tungsten_filter_enumtostring=true]
- TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
- OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00']
- SQL(0) =
 - ACTION = INSERT
 - SCHEMA = msg
 - TABLE = msg

We can also specify a range using -high (or -to) and -low (or -from). If you specify only one of these, then it assumes you mean from the start or to the end. For example:

$ thl list -to 100

Will list all events from the start up to and including event 100. While:

$ thl list -low 45

Lists all events from 45 until the end.

You can also be explicit:

$ thl list -from 45 -to 60

Will list events from 45 to 60 inclusive.

Finally, you can use two special options, -first and -last which will show you the first (Surprise!) and last events. You can also supply an optional number. So:

$ thl list -first

Shows the first event, while:

$ thl list -first 10

Shows the first 10 events.

This last option can be really useful when diagnosing an issue because it means we can look at the last events captured by the replicator without having to find the event IDs and work them out.
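For example, when something has just gone wrong, this one-liner shows the five most recent events without you needing to know any sequence numbers (output omitted here):

$ thl list -last 5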

Up to now I’ve focused on the events, but there’s a critical element to THL that also needs to be considered when thinking about replication of data and this is the data and the files themselves.

This is important to consider because of the file and THL load generated – if you have a busy system then you will generate a lot of events and that, in turn, will generate a lot of THL that needs to be stored. On a very busy system, it’s possible to fill up a disk very quickly with THL.

You can see the files by looking into the directory where they are stored within your installation, by default the thl/SERVICE directory. For example:

$ ll /opt/continuent/thl/alpha/
total 896
drwxrwxr-x 2 mc mc   4096 Dec  6 12:44 ./
drwxrwxr-x 3 mc mc   4096 Dec  6 12:43 ../
-rw-rw-r-- 1 mc mc      0 Dec  6 12:43 disklog.lck
-rw-rw-r-- 1 mc mc 669063 Dec  6 12:44 thl.data.0000000001
-rw-rw-r-- 1 mc mc 236390 Dec  6 12:45 thl.data.0000000002

You can see that these files correspond to the output of thl index (fortunately), and you can also see that the two files are different in size. THL is automatically managed by the replicator – it creates new files on a configurable boundary, but also automatically clears those files away.

We’ll cover that in a future session, but for the moment I’m going to admit that I skipped something important earlier. These two THL files actually contain two copies of the exact same set of data. The difference is that the second file uses a new feature that will come in a forthcoming release which is compression of the THL.

In this case, we are talking about 295 row-based events, but the second THL file is almost a third of the size of the first. That’s quite a saving for something that normally uses up a lot of space.

If we look at the true output of thl index, we can see that the file is indeed compressed:

$ thl index
LogIndexEntry thl.data.0000000001(0:295)
LogIndexEntry thl.data.0000000002(296:591) - COMPRESSED

For installations where you are paying for your disk storage in a cloud environment, reducing your THL footprint to half or less is fairly significant. We’ve done a lot of testing and consistently got compression ratios of 2:1 or higher.

We’ve also implemented encryption at this level too, which means you get on-disk encryption without the need for an encrypted filesystem, and with compression as well you get secure and efficient storage of the THL, which should lower your I/O overhead as well as your storage costs.

MySQL 8 and The FRM Drop… How To Recover Table DDL

… or what I should keep in mind in case of disaster

Retrieving and maintaining the definition of all tables in a database in SQL format is a best practice that we all should adopt. Keeping that under version control is another best practice to keep in mind.

While doing that may seem redundant, it can become a life saver in several situations. From the need to review what has historically changed in a table, to knowing who changed what and why… to the day you need to recover your data and your beloved MySQL instance will not start…

But let’s be honest, only a few do the right thing, and even fewer keep that information up to date. Given that’s the case, what can we do when we have the need to discover/recover the table structure?

From the beginning, MySQL has used some external files to describe its internal structure.

For instance, if I have a schema named windmills and a table named wmillAUTOINC1, on the file system I will see this:

-rw-r-----. 1 mysql mysql     8838 Mar 14 2018 wmillAUTOINC1.frm
-rw-r-----. 1 mysql mysql   131072 Mar 14 2018 wmillAUTOINC1.ibd

The ibd file contains the data, while the frm file contains the structure information.

Putting aside ANY discussion about whether this is safe, transactional and more… when we’ve experienced some major crash and data corruption, this approach has been helpful. Being able to read from the frm file was the easiest way to get the information we needed.
Simple tools like DBSake made the task quite trivial, and allowed us to script the table definitions when we needed to run long, complex, tedious data recoveries:

[root@master1 windmills]# /opt/tools/dbsake frmdump wmillAUTOINC1.frm
--
-- Table structure for table `wmillAUTOINC1`
-- Created with MySQL Version 5.7.20
--
CREATE TABLE `wmillAUTOINC1` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT,
  `uuid` char(36) COLLATE utf8_bin NOT NULL,
  `millid` smallint(6) NOT NULL,
  `kwatts_s` int(11) NOT NULL,
  `date` date NOT NULL,
  `location` varchar(50) COLLATE utf8_bin NOT NULL,
  `active` tinyint(2) NOT NULL DEFAULT '1',
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `strrecordtype` char(3) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX_millid` (`millid`,`active`),
  KEY `IDX_active` (`id`,`active`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=DYNAMIC;

Of course, if the frm file was also corrupt, then we could try to get the information from the ibdata dictionary. If that was corrupted too (trust me, I’ve seen all of these situations)… well, a last resort was hoping the customer had a recent table definition stored somewhere, but as mentioned before, we are not so diligent, are we?

Now, though, in MySQL 8 we do not have FRM files; they were dropped. Even more interesting is that we do not have the same dictionary: most of the things that we knew have changed, including the dictionary location. So what can be done?

Well, Oracle has moved the FRM information—and more—to what is called Serialized Dictionary Information (SDI). The SDI is written INSIDE the ibd file and represents a redundant copy of the information contained in the data dictionary.

The SDI is updated/modified by DDL operations on tables that reside in that tablespace. This is it: if you normally have one file per table, then that file will contain ONLY the SDI for that table; but if you have multiple tables in a tablespace, the SDI information will refer to ALL of them.

To extract this information from the IBD files, Oracle provides a utility called ibd2sdi. This application parses the SDI information and produces JSON output that can be easily manipulated to extract and build the table definition.

One exception is represented by Partitioned tables. The SDI information is contained ONLY in the first partition, and if you drop it, it is moved to the next one. I will show that later.

But let’s see how it works. In the next examples I will look for the table’s name, attributes, and datatype starting from the dictionary tables.

To obtain the info I will do this:

/opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/mysql.ibd |jq '.[]?|.[]?|.dd_object?|("------------------------------------"?,"TABLE NAME = ",.name?,"****",(.columns?|.[]?|(.name?,.column_type_utf8?)))'

The result will be something like:

"------------------------------------" "TABLE NAME = " "tables" "****" "id" "bigint(20) unsigned" "schema_id" "bigint(20) unsigned" "name" "varchar(64)" "type" "enum('BASE TABLE','VIEW','SYSTEM VIEW')" "engine" "varchar(64)" "mysql_version_id" "int(10) unsigned" "row_format" "enum('Fixed','Dynamic','Compressed','Redundant','Compact','Paged')" "collation_id" "bigint(20) unsigned" "comment" "varchar(2048)" <snip> "------------------------------------" "TABLE NAME = " "tablespaces" "****" "id" "bigint(20) unsigned" "name" "varchar(259)" "options" "mediumtext" "se_private_data" "mediumtext" "comment" "varchar(2048)" "engine" "varchar(64)" "DB_TRX_ID" "" "DB_ROLL_PTR" ""

I cut the output for brevity, but if you run the above command yourself you’ll be able to see that this retrieves the information for ALL the tables residing in the IBD.

The other thing I hope you noticed is that I am NOT parsing ibdata, but mysql.ibd. Why? Because the dictionary was moved out from ibdata and is now in mysql.ibd.

Look what happens if I try to parse ibdata:

[root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/ibdata1 |jq '.'
[INFO] ibd2sdi: SDI is empty.

Be very careful here to not mess up your mysql.ibd file.

Now what can I do to get information about my wmillAUTOINC1 table in MySQL8?

That is quite simple:

/opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINC.ibd |jq '.' [ "ibd2sdi", { "type": 1, "id": 1068, "object": { "mysqld_version_id": 80013, "dd_version": 80013, "sdi_version": 1, "dd_object_type": "Table", "dd_object": { "name": "wmillAUTOINC", "mysql_version_id": 80011, "created": 20180925095853, "last_altered": 20180925095853, "hidden": 1, "options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=1;row_type=2;stats_auto_recalc=0;stats_sample_pages=0;", "columns": [ { "name": "id", "type": 9, "is_nullable": false, "is_zerofill": false, "is_unsigned": false, "is_auto_increment": true, "is_virtual": false, "hidden": 1, "ordinal_position": 1, "char_length": 11, "numeric_precision": 19, "numeric_scale": 0, "numeric_scale_null": false, "datetime_precision": 0, "datetime_precision_null": 1, "has_no_default": false, "default_value_null": false, "srs_id_null": true, "srs_id": 0, "default_value": "AAAAAAAAAAA=", "default_value_utf8_null": true, "default_value_utf8": "", "default_option": "", "update_option": "", "comment": "", "generation_expression": "", "generation_expression_utf8": "", "options": "interval_count=0;", "se_private_data": "table_id=1838;", "column_key": 2, "column_type_utf8": "bigint(11)", "elements": [], "collation_id": 83, "is_explicit_collation": false }, <SNIP> "indexes": [ { "name": "PRIMARY", "hidden": false, "is_generated": false, "ordinal_position": 1, "comment": "", "options": "flags=0;", "se_private_data": "id=2261;root=4;space_id=775;table_id=1838;trx_id=6585972;", "type": 1, "algorithm": 2, "is_algorithm_explicit": false, "is_visible": true, "engine": "InnoDB", <Snip> ], "foreign_keys": [], "partitions": [], "collation_id": 83 } } }, { "type": 2, "id": 780, "object": { "mysqld_version_id": 80011, "dd_version": 80011, "sdi_version": 1, "dd_object_type": "Tablespace", "dd_object": { "name": "windmills/wmillAUTOINC", "comment": "", "options": "", "se_private_data": "flags=16417;id=775;server_version=80011;space_version=1;", "engine": "InnoDB", "files": [ { "ordinal_position": 1, "filename": "./windmills/wmillAUTOINC.ibd", "se_private_data": "id=775;" } ] } } } ]

The JSON will contain:

  • A section describing the DB object at high level
  • Array of columns and related information
  • Array of indexes
  • Partition information (not here but in the next example)
  • Table space information

That is a lot more detail compared to what we had in the FRM, and it is quite relevant and interesting information as well.

Once you have extracted the SDI, any JSON parsing tool or script can generate the information needed for the SQL DDL.
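As a minimal sketch (reusing the jq filter style from the dictionary example above, and the same file path used in this article), this lists every column with its type, ready to be pasted into a CREATE TABLE skeleton:

/opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINC.ibd | jq -r '.[]?|.object?|.dd_object?|select(.columns != null)|.columns[]|"\(.name) \(.column_type_utf8)"'

The output is one name/type pair per line, e.g. id bigint(11).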

I mention partitions, so let’s look at this a bit more, given they can be tricky.

As mentioned, the SDI information is present ONLY in the first partition. All other partitions hold ONLY the tablespace information. Given that, the first thing to do is to identify which partition is the first one… OR simply try to access all partitions and, when you are able to get the details, extract them.
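If you prefer not to guess, a small shell loop (a sketch that assumes the partition file naming used in this article) can probe each partition file and report which one carries the Table SDI:

for f in /opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#*.ibd; do
  # partitions that hold only the tablespace SDI yield no "Table" entry
  if /opt/mysql_templates/mysql-8P/bin/./ibd2sdi "$f" | jq -r '.[]?|.object?|.dd_object_type?' | grep -qx Table; then
    echo "Table SDI found in: $f"
  fi
done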

The process is the same:

[root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170301.ibd |jq '.' [ "ibd2sdi", { "type": 1, "id": 1460, "object": { "mysqld_version_id": 80013, "dd_version": 80013, "sdi_version": 1, "dd_object_type": "Table", "dd_object": { "name": "wmillAUTOINCPART", "mysql_version_id": 80013, "created": 20181125110300, "last_altered": 20181125110300, "hidden": 1, "options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=1;row_type=2;stats_auto_recalc=0;stats_sample_pages=0;", "columns": [<snip> "schema_ref": "windmills", "se_private_id": 18446744073709552000, "engine": "InnoDB", "last_checked_for_upgrade_version_id": 80013, "comment": "", "se_private_data": "autoinc=31080;version=2;", "row_format": 2, "partition_type": 7, "partition_expression": "to_days(`date`)", "partition_expression_utf8": "to_days(`date`)", "default_partitioning": 1, "subpartition_type": 0, "subpartition_expression": "", "subpartition_expression_utf8": "", "default_subpartitioning": 0, ], <snip> "foreign_keys": [], "partitions": [ { "name": "PT20170301", "parent_partition_id": 18446744073709552000, "number": 0, "se_private_id": 1847, "description_utf8": "736754", "engine": "InnoDB", "comment": "", "options": "", "se_private_data": "autoinc=0;version=0;", "values": [ { "max_value": false, "null_value": false, "list_num": 0, "column_num": 0, "value_utf8": "736754" } ],

The difference, as you can see, is that the section related to partitions and sub partitions will be filled with all the details you might need to recreate the partitions.

We will have:

  • Partition type
  • Partition expression
  • Partition values
  • …more

Same for sub partitions.

Now again see what happens if I parse the second partition:

[root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170401.ibd |jq '.'
[
  "ibd2sdi",
  {
    "type": 2,
    "id": 790,
    "object": {
      "mysqld_version_id": 80011,
      "dd_version": 80011,
      "sdi_version": 1,
      "dd_object_type": "Tablespace",
      "dd_object": {
        "name": "windmills/wmillAUTOINCPART#P#PT20170401",
        "comment": "",
        "options": "",
        "se_private_data": "flags=16417;id=785;server_version=80011;space_version=1;",
        "engine": "InnoDB",
        "files": [
          {
            "ordinal_position": 1,
            "filename": "./windmills/wmillAUTOINCPART#P#PT20170401.ibd",
            "se_private_data": "id=785;"
          }
        ]
      }
    }
  }
]

I will get only the information about the tablespace, not the table.

As promised let me show you now what happens if I delete the first partition, and the second partition becomes the first:

(root@localhost) [windmills]> alter table wmillAUTOINCPART drop partition PT20170301;
Query OK, 0 rows affected (1.84 sec)
Records: 0  Duplicates: 0  Warnings: 0

[root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170401.ibd |jq '.'|more
[
  "ibd2sdi",
  {
    "type": 1,
    "id": 1461,
    "object": {
      "mysqld_version_id": 80013,
      "dd_version": 80013,
      "sdi_version": 1,
      "dd_object_type": "Table",
      "dd_object": {
        "name": "wmillAUTOINCPART",
        "mysql_version_id": 80013,
        "created": 20181129130834,
        "last_altered": 20181129130834,
        "hidden": 1,
        "options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=1;row_type=2;stats_auto_recalc=0;stats_sample_pages=0;",
        "columns": [
          {
            "name": "id",
            "type": 9,
            "is_nullable": false,
            "is_zerofill": false,
            "is_unsigned": false,
            "is_auto_increment": true,
            "is_virtual": false,
            "hidden": 1,
            "ordinal_position": 1,

As I mentioned before, each DDL updates the SDI, and here we go: I will have all the information on what’s NOW the FIRST partition. Please note the value of the attribute “created” between the first time I queried the other partition, and the one that I have now:

/opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170301.ibd        "created": 20181125110300,
/opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170401.ibd        "created": 20181129130834,

To be clear, the second "created" timestamp (for PT20170401) is from NOW, the moment when I dropped the other partition (PT20170301).

Conclusions

In the end, this solution is definitely more powerful than the FRM files. It will allow us to parse the file and identify the table definition more easily, providing us with much more detail and information.

The problems will arise if and when the IBD file becomes corrupt.

As per the manual: for InnoDB, an SDI record requires a single index page, which is 16KB in size by default. However, SDI data is compressed to reduce the storage footprint.

Which means that for each table I have one page, if I associate record = table. And that means that in case of IBD corruption I should (likely) still be able to read those pages. Unless I have bad (very bad) luck.

I still wonder how the size of an IBD file affects SDI retrieval, but given that I have not tried it yet, I will have to let you know later.

As an aside, I am working on a script to facilitate the generation of the SQL; it's not yet ready, but you can find it here.

One last note, but keep this in mind! It is stated in the manual, albeit in a hidden place and in small letters:
DDL operations take longer due to writing to storage, undo logs, and redo logs instead of .frm files.

References

https://stedolan.github.io/jq/

https://dev.mysql.com/doc/refman/8.0/en/ibd2sdi.html

https://dev.mysql.com/doc/refman/8.0/en/serialized-dictionary-information.html

https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-limitations.html


Photo by chuttersnap on Unsplash

Migrate MySQL data to RDS/Aurora Using Xtrabackup

In this blog, I will provide a step-by-step procedure to migrate from on-premises MySQL to Amazon RDS/Aurora using Percona XtraBackup.

Both RDS and Aurora are DBaaS offerings provided by Amazon. To know more about DBaaS, you can view our presentation here.

When your database is only a few GB in size, it is very convenient to take a logical backup using a tool such as mysqldump or mydumper and restore it into Amazon RDS/Aurora. That is not the case when your data size is a few hundred GB or a few TB, where a logical backup and restore is very painful and time-consuming. To overcome this we can use XtraBackup (an open source hot backup tool), along with S3, to build the required instance in RDS/Aurora.

AWS has supported XtraBackup restores for Aurora MySQL for a long time, and has supported them for RDS for the last few months. Both follow the same procedure, hence I am using RDS for this restore, since Aurora is on the costlier side.

Production Use Case:

Recently I came across a situation where I had to restore an XtraBackup of about 300 GB to Aurora. It worked like a charm: the data was absolutely fine and the restore completed in about two hours.

Before going into the demo, let us first see the limitations of this method.

Limitations:

  • Only 5.6 is supported; both source and destination should be on 5.6
  • Only the whole instance is supported (partial backups at the database or table level are not supported)
  • With RDS the size is limited to 6TB
  • Database objects and user accounts are not migrated properly (we will see the workaround here)

The command below streams the backup directly to an S3 bucket. If you are low on local backup space, streaming directly avoids the use of local storage; just make sure you have a strong network pipe.

# xtrabackup --user=root --password='S3cret!123' --backup --stream=tar | gzip -c | s3cmd put - s3://kabileshs3/xtrabackup56.tar.gz

RDS/Aurora also supports creating an instance from a tar-compressed backup, and compressing the backup reduces the S3 footprint as well. Ensure the compressed backup does not exceed 5 TB, which is the maximum object size supported by S3.

The backup is now uploaded to S3.
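You can double-check that the object actually landed in the bucket with s3cmd (the same bucket as in the command above):

# s3cmd ls s3://kabileshs3/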

Creating RDS instance:

Go to the RDS console, as shown below.

Click “Restore a database instance from Amazon S3”

Select the engine to be used with the restored instance. I am using MySQL; you can select Aurora based on your use case.

Next, specify the source backup details, i.e., S3 and its details.

Create a new IAM role that lets RDS access the S3 bucket and restore the backup.

Now, as the final step, proceed to specify the DB details such as the instance size and engine version, as shown in the screenshot below.

Next choose the backup retention, VPC and security group for accessing the DB instance.

Now click Next and your instance will start getting ready.

It took around 20 minutes for the instance to become ready.

In the above restore we have only used data generated by sysbench, so consistency shouldn't be an issue; still, let's verify it by comparing the row counts between the source instance and the restored RDS instance, as shown below.

Data restored on RDS

Data on the on-premises instance

I hope you have enjoyed this post and learned something. Please also share your much-valued feedback.

Image Courtesy: Photo by Gareth Davies on Unsplash

Slow MySQL Start Time in GTID mode? Binary Log File Size May Be The Issue

Have you been experiencing slow MySQL startup times in GTID mode? We recently ran into this issue on one of our MySQL hosting deployments and set out to solve the problem. In this blog, we break down the issue that could be slowing down your MySQL restart times, how to debug for your deployment, and what you can do to decrease your start time and improve your understanding of GTID-based replication.

How We Found The Problem

We were investigating slow MySQL startup times on a low-end, disk-based MySQL 5.7.21 deployment which had GTID mode enabled. The system was part of a master-slave pair and was under a moderate write load. When restarting during a scheduled maintenance, we noticed that the database server took 5-10 minutes to start up and begin accepting connections. That kind of delay didn’t make sense, so we set out to investigate.

Debugging Your Slow MySQL Start Time

We used the popular Percona tool pt-ioprofile to see what the database was doing. pt-ioprofile is a very important utility in Percona’s popular toolkit that’s used to debug MySQL issues, and you can see the complete list of features in their documentation. The pt-ioprofile tool uses strace and lsof to watch a process’s I/O and print out a table of files and I/O activity.

So, we started MySQL, waited for the mysqld process to get spawned, and started pt-ioprofile to see what the issue might be:

# pt-ioprofile --profile-process mysqld --run-time 200
Tue Oct 9 15:42:24 UTC 2018
Tracing process ID 18677
     total      pread       read     pwrite      write      fsync  fdatasync       open      close   getdents      lseek      fcntl filename
...
216.550641   0.000000  216.550565   0.000000   0.000000   0.000000   0.000000   0.000015   0.000040   0.000000   0.000021   0.000000 /mysql_data/binlogs/mysql-bin.000014
...

What’s Slowing Your MySQL Restart?

On running this multiple times, we observed the following:

  • The mysqld process was spending most of its time reading the latest binary log file. This was the case even when the server had been stopped gracefully and there was no need for a crash recovery, etc.
  • The server also spent a considerable amount of time loading the InnoDB data files, but that time was much smaller compared to the time spent reading the latest binary log file.
  • If the server was restarted again immediately, this subsequent restart would be much faster.
  • Since a database shutdown flushes the binary log and creates a new one at startup, we did an additional experiment – before shutting down the server, we flushed the binary logs. The subsequent server start was fast again.

These observations clearly pointed to the fact that MySQL was spending a lot of time reading the latest binary log file. If the file was small, like it would be when the log file was flushed before a shutdown, the startup was fast.

Understanding Binlog GTID Recovery

As it turns out, in order to populate the values of gtid_executed and gtid_purged, the MySQL server must parse the binary log files.

Here’s a summary of the method the MySQL 5.7 documentation describes, based on whether binlog_gtid_simple_recovery is FALSE or TRUE:

When binlog_gtid_simple_recovery = FALSE:

To compute gtid_executed:

  • Iterate binary log files from the newest, stopping at the first file that has a Previous_gtids_log_event entry.
  • Consume all GTIDs from Previous_gtids_log_event and Gtid_log_events from this binary log file, and store this GTID set internally. It is referred to as gtids_in_binlog.
  • Value of gtid_executed is computed as the union of gtids_in_binlog and the GTIDs in the mysql.gtid_executed table.

This process can be very time consuming if there are a large number of binary log files without GTIDs, for example, created when gtid_mode=OFF.

Similarly, to compute gtid_purged:

  • Iterate binary log files from the oldest to the newest, stopping at the first binary log that contains either a nonempty Previous_gtids_log_event (has at least one GTID), or that has at least one Gtid_log_event.
  • Read Previous_gtids_log_event from this file. Compute the internal variable gtids_in_binlog_not_purged as this GTID set subtracted from gtids_in_binlog.
  • Value of gtid_purged is set to gtid_executed, minus gtids_in_binlog_not_purged.

So, this forms the basis of our understanding on how things used to work in older versions. However, certain optimizations can be made when binlog_gtid_simple_recovery is TRUE. This is the case we are interested in:

When binlog_gtid_simple_recovery = TRUE:

(Note, this is the default in MySQL 5.7.7 and later)

  • Read just the oldest and the newest binary log files.
  • Compute gtid_purged from the Previous_gtids_log_event or Gtid_log_event found in the oldest binary log file.
  • Compute gtid_executed from the Previous_gtids_log_event or Gtid_log_event found in newest binary log file.
  • Thus, only two binary log files are read during the server restart or when purging binary logs.

So, for MySQL versions 5.7.7 and above, the latest and the oldest binary log files are always read during system startup to correctly initialize the GTID system variables. Reading the oldest binary log file isn’t as expensive since the event MySQL is looking for, Previous_gtids_log_event, is always the first event in a binary log file.

However, in order to correctly compute gtid_executed, the server must read through the entire latest binary log file and collect all the events in that file. So, the system startup time becomes directly proportional to the size of the latest binary log file.

Note that the situation is even worse when binlog_gtid_simple_recovery is FALSE. Since it is no longer the default option in recent releases, it is not much of a concern.
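To see what your own server is working with, check the recovery setting, the current GTID state and the list of binary log files with their sizes (nothing here is specific to any vendor build; it is plain MySQL 5.7):

mysql> SHOW GLOBAL VARIABLES LIKE 'binlog_gtid_simple_recovery';
mysql> SELECT @@GLOBAL.gtid_executed, @@GLOBAL.gtid_purged\G
mysql> SHOW BINARY LOGS;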

How To Resolve Your Slow Start Time

Having understood the cause of the issue we were running into, the solution we decided on was fairly obvious – reduce the size of the binary log files. The default size of binary log files is 1GB. It takes time to parse through a file of this size during startup, so it makes sense to decrease the value of max_binlog_size to a lower value.

If decreasing the size of the binary log file is not an option, then flushing the binary log files just before a maintenance shutdown of the mysqld process can help to decrease the binlog GTID recovery times.
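A minimal sketch of both options (the 256 MB value is only an illustration; pick a size that matches your write rate, and persist the setting in your configuration file as well):

mysql> SET GLOBAL max_binlog_size = 268435456; -- rotate binary logs at ~256 MB instead of the default 1 GB
mysql> FLUSH BINARY LOGS; -- run right before a planned shutdown to start from a fresh, small binlog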

Floats and money

A very common and oft-repeated programmer’s wisdom is “Don’t use floats for currency”. This always made a lot of sense to me. The idea is that floats are imprecise, due to the way they are stored.

Effectively floats are stored not by just their digits, but by a formula. This formula can’t accurately represent every fraction. The most popular way to demonstrate this is by firing up your Javascript developer console and run:

> 0.3 - 0.2
0.09999999999999998

The answer you’ll get is not 0.1, it’s 0.09999999999999998. According to many, this is easy proof why floats are bad for money.

It’s also not really something you want to put on an invoice. Some suggestions to avoid floats include:

  • Generally when dealing with money, use specialized data stores. MySQL for example has the DECIMAL field type.
  • When serializing currency values, in for example JSON, pass them around as a string, not a number.

Another very common piece of advice is, when doing math based on currency values, to just forget about the floating point and count cents instead of Euros, Yen or Dollarydoos.

So instead of $ 5.00, you would simply multiply by 100 and use 500, which fits in an integer.

This is a pretty good way to deal with simple invoicing, where for the most part you’re just adding numbers.

Fractions

With these wisdoms in hand I stepped into a problem that is a bit more complex. I needed to do calculations with fractions for financial analysis.

I’ll keep my example simple, but let’s say that I needed to find out what 31% of $498.09 is.

The result of this can’t be represented exactly as a floating point number. On your JS console it might look something like this:

154.40789999999998

This is not just a display quirk: the exact decimal result, 154.4079, simply can’t be represented in binary floating point, so what gets stored is the nearest approximation, hence the trailing 9’s.

But this led me to think: why do I really care about this very precise number? The part I’m really interested in is $154.41, so I can just round the result and get a string:

> (input * 0.31).toFixed(2);
'154.41'

For my purposes, this was the right answer. Money isn’t expressed as fractions (beyond cents), and it’s rounded in a lot of situations.

This led me to my next realization. The 0.3 - 0.2 example was really a strawman. I can just round it to get the number I really wanted:

> (0.3 - 0.2).toFixed(2);
'0.10'

This was the correct answer all along. The fact that we needed to first get 0.09999999999999998 as an intermediate answer is kind of irrelevant, because the difference is so little, and we would have to round anyway before display.

A better example

This last exercise just made me realize a few things:

  • 0.3 - 0.2 is not a good example of why using floating points is bad. $0.00000000000000002 is not a significant difference.
  • Unless you are comfortable writing your financial reports with formulas instead of numbers, rounding is unavoidable.
  • Everybody rounds, regardless of the number system. If you use floating point math or fixed point math, you actually suffer from the exact same problem.

So I wanted to find out if there are examples where using floating-point math does become an issue. Where are its limits?

One way I thought of approaching this is by trying to come up with a formula where this $0.00000000000000002 difference compounds into a dollar. Just multiplying it by $10,000,000,000,000,000 doesn’t seem reasonable, because that number is (currently) too high to show up in a financial calculation.

I had trouble finding good examples to demonstrate this issue, and I’m not really smart enough to come up with an idea of my own. But even if I knew of a great example one thought kept lingering: The inaccuracy problem with floats also exists in other number systems that don’t use floats, such as base10 (fixed point) math. With fixed point math you basically pick a precision (for example, 15 digits) and rounding errors beyond that precision can also show up in certain complex calculations.

Edit

I got an excellent example when asking around from @scurker, the author of the currency.js project.

His example was this:

(17.954 + .001).toFixed(2); // => "17.96"

Even though this calculation should result in 17.955, which should round to 17.95, because .001 cannot be exactly represented by floating point numbers, the total ends up being 17.955000000000002, which rounds up.

So this is a super clear example of a very small difference that results in a significant change. I couldn’t think of this myself, but this makes it super clear to me why fixed-point math makes sense.

A few deductions

I don’t know if this is correct, but it feels correct:

  1. Floating point math and base10 (fixed point) math have similar problems with inaccuracies and rounding.
  2. These problems show up for different calculations. The result of some calculations cannot be precisely expressed with floating point math, and the result of other calculations cannot be precisely expressed with fixed-point math.
  3. Insignificant differences can show up as significant ones when rounding.

What I don’t know is around what scales floating point math can yield significantly inaccurate results. It seems simpler to predict this with fixed-point math.

I think this predictability makes people feel safer doing financial calculations.

Using fixed point math in Javascript

At the start of this article I mentioned the solution to just multiply currencies by 100 and count cents. Realistically this is pretty useless for anything moderately complex, because you limit your precision to 2 decimals.

If you feel that you have a use-case where multiplying by 100 is enough precision, you are probably better off using floating points and simple rounding.

To make fixed-point math work for more complex calculations, you’ll need more significant digits, and thus multiply by larger numbers than 100. When researching this I read a great article about math in COBOL, which mentions that IBM’s fixed point decimal type can take a maximum of 18 digits.

If we take 99 billion dollar as the maximum we want to be able to represent, this means that of those 18 digits, we need 11 digits before and 7 digits after the period.

This means that every dollar value has to be multiplied by 10^7 (10,000,000).

It also means that the highest number we ever need is 10^18.

This number is unfortunately higher than Javascript’s MAX_SAFE_INTEGER, which is 2^53 - 1.

When working with integers in Javascript above this MAX_SAFE_INTEGER, Javascript will automatically convert to floating point numbers, which is what we were trying to avoid.
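A quick, illustrative check (my own example, assuming the 10^7 scale described above) makes the limit visible:

// illustrative only
Number.MAX_SAFE_INTEGER;                       // 9007199254740991, i.e. 2^53 - 1 (about 9.0e15)
Number.isSafeInteger(99000000000 * 10 ** 7);   // false: 9.9e17 is past the range where integers stay exact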

How would you solve this?

Right now the established way is to use a library that handles this for you. Some examples are:

These types of libraries handle these limitations by (for example) saving every digit of a large number separately in an array, and doing more ‘manual’ work for arithmetic operations. For example, 100 might be stored as [1, 0, 0].

What’s nice about these libraries is that they convert to and from strings for your view layer and they remember where the (fixed) point in your number is, so you don’t have to divide by 10^7 before displaying them.

These libraries might be a bit slow though. Other programming languages might have built-in features or native modules for this. PHP for example has a GMP module. Java has java.math.BigDecimal. Python is the clear winner here and just has transparent native support for arbitrary-size integers.

Ecmascript’s bigint

If you are running a new version of Firefox, Chrome or Node.js, you might already have support for a new Javascript type: bigint.

“bigint” is what it says on the tin, a type for big integers. These new numbers don’t automatically change to floats (or regular numbers) unless explicitly asked to and don’t have a maximum size, other than the limits of your computer.

Fire up your console and try it out:

> 2n ** 4018n 345559293868361148033634086220357006310511518965634896 781063867660175128163210875399825054057683275825569877 366688103085859060257763900293951810763555135397260025 513754935661856790168017882157679551094532977446472616 042388243502396227453171398675239956933209718890900049 505574416105487362960435374577513568425052733645175171 906768737683088460692573004685433967970094147567776587 968895982288887330931441539049188139650108989228586553 935232570575822085182349014973765774199297568789676530 263475320588045820967408959142772241203662510003840153 216689755054872558169600469901596210120146006100009125 686963654367743426461088529641862232649573994058654349 928433398965809337177780993779838202644442347395592695 424707614261233861675702825586749608541245212697821731 388883399113457099839192961355454810013138358212990578 801843850594265754972263518207657158345091261082252767 041012083237074622662749814648746881195156284049648490 077841431766658997246737814625191378320649854621233155 380171485533227675173253997097680556286020862942055252 836773511266514852999631351440845847321663378684304777 829482829776191764290318406962537489950747560352629539 406073440415550052435127500261152332947273952037770952 8239494112463913222144n

Another interesting thing about these numbers is that division always truncates (rounds towards zero):

> 5n / 2n
2n

Using the bigint type is going to be much faster than the npm libraries. The only drawback is that bigint doesn’t have a decimal point, so I’m forced to multiply every number by the precision I want. This will potentially make my code harder to read.
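For completeness, here is a rough sketch (my own, with an arbitrary scale, not code from the article) of what fixed-point arithmetic on top of bigint could look like:

// fixed-point on top of bigint, using 7 digits after the implied decimal point
const SCALE = 10n ** 7n;
const price = 17955n * SCALE / 1000n;  // $17.955 stored as 179550000n
const tax   = price * 31n / 100n;      // 31% of it; bigint division truncates
console.log(price, tax);               // 179550000n 55660500n  ($5.56605)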

But with all these nice new features, I still haven’t found a satisfying answer to my original question: under what conditions do floating point numbers break down for financial calculations?

Disclaimer

I was nervous publishing this, because some of the underlying technology and math goes over my head. I can’t really vouch for any of this being 100% accurate.

Comments

@psihius says:

The thing about this is that you do not run into this issue often when dealing with a single invoice or calculation. Where the errors start to creep in is when running calculations like calculating the amount of tax on all transactions. Or when you are dealing in wholesale businesses where the price of a unit can be a fraction of a cent. Or when dealing with currency exchange rates - in my personal experience - those are 5 digits, and you have to deal with the fact that a thousand transactions can be off by a thousand cents if handled wrong.

@DaveCTurner says:

I needed to find out what 31% of $498.09 is… The result of this can’t be expressed as a decimal number.

31% of $498.09 is $154.4079 exactly.

If you see errors or want to leave comments, feel free to edit this article on github. Just add your comment right above this paragraph, or you can make edits anywhere in this article if you feel comfortable doing so.

MySQL & MariaDB Query Caching with ProxySQL & ClusterControl

Queries have to be cached in every heavily loaded database; there is simply no way for a database to handle all of the traffic with reasonable performance otherwise. There are various mechanisms in which a query cache can be implemented, starting from the MySQL query cache, which used to work just fine for mostly read-only, low-concurrency workloads but has no place in highly concurrent workloads (to the extent that Oracle removed it in MySQL 8.0), to external key-value stores like Redis, memcached or Couchbase.

The main problem with using an external dedicated data store (we would not recommend the MySQL query cache to anyone) is that it is yet another datastore to manage. It is yet another environment to maintain, with its own scaling issues to handle, bugs to debug and so on.


So why not kill two birds with one stone by leveraging your proxy? The assumption here is that you are using a proxy in your production environment, as it helps load balance queries across instances and masks the underlying database topology by providing a single endpoint to applications. ProxySQL is a great tool for the job, as it can additionally function as a caching layer. In this blog post, we’ll show you how to cache queries in ProxySQL using ClusterControl.

How Does the Query Cache Work in ProxySQL?

First of all, a bit of background. ProxySQL manages traffic through query rules, and it can accomplish query caching using the same mechanism. ProxySQL stores cached queries in a memory structure. Cached data is evicted using a time-to-live (TTL) setting. The TTL can be defined for each query rule individually, so it is up to the user to decide whether query rules should be defined for each individual query with a distinct TTL, or whether a couple of rules matching the majority of the traffic are enough.

There are two configuration settings that define how the query cache should be used. First, mysql-query_cache_size_MB, which defines a soft limit on the query cache size. It is not a hard limit, so ProxySQL may use slightly more memory than that, but it is enough to keep memory utilization under control. The second setting you can tweak is mysql-query_cache_stores_empty_result, which defines whether an empty result set is cached or not.
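If you are not using ClusterControl for this, the same settings can be inspected and changed by hand on the ProxySQL admin interface; a sketch with illustrative values:

SELECT variable_name, variable_value
  FROM global_variables
 WHERE variable_name LIKE 'mysql-query_cache%';

UPDATE global_variables
   SET variable_value = '256'
 WHERE variable_name = 'mysql-query_cache_size_MB';

LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;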

The ProxySQL query cache is designed as a key-value store. The value is the result set of a query, and the key is composed of concatenated values like user, schema and query text. A hash is then created from that string, and that hash is used as the key.

Setting up ProxySQL as a Query Cache Using ClusterControl

As the initial setup, we have a replication cluster of one master and one slave. We also have a single ProxySQL.

This is by no means a production-grade setup as we would have to implement some sort of high availability for the proxy layer (for example by deploying more than one ProxySQL instance, and then keepalived on top of them for floating Virtual IP), but it will be more than enough for our tests.

First, we are going to verify the ProxySQL configuration to make sure query cache settings are what we want them to be.

256 MB of query cache should be about right, and we also want to cache empty result sets - sometimes a query which returns no data still has to do a lot of work to verify there’s nothing to return.

Next step is to create query rules which will match the queries you want to cache. There are two ways to do that in ClusterControl.

Manually Adding Query Rules

The first way requires a bit more manual action. Using ClusterControl you can easily create any query rule you want, including query rules that do the caching. First, let’s take a look at the list of the rules:

At this point, we have a set of query rules to perform the read/write split. The first rule has an ID of 100. Our new query rule has to be processed before that one, so we will use a lower rule ID. Let’s create a query rule which will do the caching of queries similar to this one:

SELECT DISTINCT c FROM sbtest8 WHERE id BETWEEN 5041 AND 5140 ORDER BY c

There are three ways of matching the query: Digest, Match Digest and Match Pattern. Let’s talk a bit about them here. First, Match Digest. We can set a regular expression here that will match a generalized query string representing some query type. For example, for our query:

SELECT DISTINCT c FROM sbtest8 WHERE id BETWEEN 5041 AND 5140 ORDER BY c

The generic representation will be:

SELECT DISTINCT c FROM sbtest8 WHERE id BETWEEN ? AND ? ORDER BY c

As you can see, it stripped the arguments to the WHERE clause, therefore all queries of this type are represented as a single string. This option is quite nice to use because it matches the whole query type and, what’s even more important, it is stripped of any whitespace. This makes it so much easier to write a regular expression, as you don’t have to account for weird line breaks, whitespace at the beginning or end of the string and so on.

Digest is basically a hash that ProxySQL calculates over the Match Digest form.

Finally, Match Pattern matches against full query text, as it was sent by the client. In our case, the query will have a form of:

SELECT DISTINCT c FROM sbtest8 WHERE id BETWEEN 5041 AND 5140 ORDER BY c

We are going to use Match Digest as we want all of those queries to be covered by the query rule. If we wanted to cache just that particular query, a good option would be to use Match Pattern.

The regular expression that we use is:

SELECT DISTINCT c FROM sbtest[0-9]+ WHERE id BETWEEN \? AND \? ORDER BY c

We are matching literally the exact generalized query string, with one exception - we know that this query hits multiple tables, therefore we adjusted the regular expression to match all of them. A hand-written equivalent of such a rule is sketched below.
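For reference, creating such a caching rule directly on the ProxySQL admin interface might look roughly like this (rule_id and TTL are arbitrary, illustrative values; ClusterControl does this for you):

INSERT INTO mysql_query_rules (rule_id, active, match_digest, cache_ttl, apply)
VALUES (50, 1,
        'SELECT DISTINCT c FROM sbtest[0-9]+ WHERE id BETWEEN \? AND \? ORDER BY c',
        10000, 1);   -- cache_ttl is expressed in milliseconds

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;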

Once this is done, we can see if the query rule is in effect or not.
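Outside of the ClusterControl UI, one way to check this by hand is to look at ProxySQL’s own per-rule statistics (an illustrative query against the admin interface):

SELECT rule_id, hits
  FROM stats_mysql_query_rules
 ORDER BY rule_id;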

We can see that ‘Hits’ are increasing, which means that our query rule is being used. Next, we’ll look at another way to create a query rule.

Using ClusterControl to Create Query Rules

ProxySQL has a useful functionality of collecting statistics of the queries it routed. You can track data like execution time, how many times a given query was executed and so on. This data is also present in ClusterControl:

What is even better, if you point at a given query type, you can create a query rule related to it. You can also easily cache this particular query type.

As you can see, some of the data like Rule ID, Cache TTL or Schema Name are already filled in. ClusterControl will also fill in data based on which matching mechanism you decided to use. We can easily use either the hash for a given query type, or we can use Match Digest or Match Pattern if we would like to fine-tune the regular expression (for example, doing the same as we did earlier and extending the regular expression to match all the tables in the sbtest schema).

This is all you need to easily create query cache rules in ProxySQL. Download ClusterControl to try it today.

Tags:  MySQL MariaDB query cache proxysql query caching load balancer load balancing

Shinguz: UNDO logs in InnoDB system tablespace ibdata1

We sometimes see at customers that they have very big InnoDB system tablespace files (ibdata1) although they have set innodb_file_per_table = 1.

So we want to know what else is stored in the InnoDB system tablespace file ibdata1 to see what we can do against this unexpected growth.

First let us check the size of the ibdata1 file:

# ll ibdata1
-rw-rw---- 1 mysql mysql 109064486912 Dez 5 19:10 ibdata1

The InnoDB system tablespace is about 101.6 Gibyte in size. This is exactly 6'656'768 InnoDB blocks of 16 kibyte block size.
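Just to double-check the arithmetic (an illustrative query; any MySQL client will do):

SELECT 109064486912 / 16384        AS innodb_pages_16k,  -- 6656768 pages
       109064486912 / POW(1024, 3) AS size_gibyte;       -- about 101.6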

So next we want to analyse the InnoDB system tablespace ibdata1 file. For this we can use the tool innochecksum:

# innochecksum --page-type-summary ibdata1
Error: Unable to lock file:: ibdata1
fcntl: Resource temporarily unavailable

But... the tool innochecksum throws an error. It seems it is not allowed to analyse the InnoDB system tablespace while the database is running. So let us stop the database first and try again. Now we get a useful output:

# innochecksum --page-type-summary ibdata1
File::ibdata1
================PAGE TYPE SUMMARY==============
#PAGE_COUNT PAGE_TYPE
===============================================
     349391 Index page                      5.25%
    6076813 Undo log page                  91.29%
      18349 Inode page                      0.28%
     174659 Insert buffer free list page    2.62%
      36639 Freshly allocated page          0.55%
        405 Insert buffer bitmap            0.01%
         98 System page
          1 Transaction system page
          1 File Space Header
        404 Extent descriptor page          0.01%
          0 BLOB page
          8 Compressed BLOB page
          0 Other type of page
-------------------------------------------------------
    6656768 Pages total                   100.00%
===============================================
Additional information:
Undo page type: 3428 insert, 6073385 update, 0 other
Undo page state: 1 active, 67 cached, 249 to_free, 1581634 to_purge, 0 prepared, 4494862 other

So we can see that about 91% (about 92 Gibyte) of the InnoDB system tablespace ibdata1 blocks are used by InnoDB UNDO log pages. To avoid this growth of ibdata1 you have to create a database instance with separate InnoDB UNDO tablespaces (see the sketch below): Undo Tablespaces.
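A configuration sketch with illustrative values (note: in MySQL 5.7 innodb_undo_tablespaces must be set before the instance is initialized, it cannot be changed on an existing instance):

# my.cnf
[mysqld]
innodb_undo_tablespaces  = 2
innodb_undo_log_truncate = ON
innodb_max_undo_log_size = 1G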

Taxonomy upgrade extras:  undo innodb tablespace ibdata1

Nondeterministic Functions in MySQL (i.e. rand) Can Surprise You

Working on a test case with sysbench, I encountered this:

mysql> select * from sbtest1 where id = round(rand()*10000, 0); +------+--------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ | id | k | c | pad | +------+--------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ | 179 | 499871 | 09833083632-34593445843-98203182724-77632394229-31240034691-22855093589-98577647071-95962909368-34814236148-76937610370 | 62233363025-41327474153-95482195752-11204169522-13131828192 | | 1606 | 502031 | 81212399253-12831141664-41940957498-63947990218-16408477860-15124776228-42269003436-07293216458-45216889819-75452278174 | 25423822623-32136209218-60113604068-17409951653-00581045257 | +------+--------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ 2 rows in set (0.30 sec)

I was really surprised. First, and most important, id is a primary key and the rand() function should produce just one value. How come it returns two rows? Second, why is the response time 0.30 sec? That seems really high for a primary key lookup.

Looking further:

CREATE TABLE `sbtest1` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `k` int(11) NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `k_1` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=1000001 DEFAULT CHARSET=latin1

mysql> explain select * from sbtest1 where id = round(rand()*10000, 0);
+----+-------------+---------+------------+------+---------------+------+---------+------+--------+----------+-------------+
| id | select_type | table   | partitions | type | possible_keys | key  | key_len | ref  | rows   | filtered | Extra       |
+----+-------------+---------+------------+------+---------------+------+---------+------+--------+----------+-------------+
|  1 | SIMPLE      | sbtest1 | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 986400 |    10.00 | Using where |
+----+-------------+---------+------------+------+---------------+------+---------+------+--------+----------+-------------+

So it is a primary key, but MySQL does not use an index, and it returns two rows. Is this a bug?

Deterministic vs nondeterministic functions

It turned out it is not a bug at all. It is pretty logical behavior from MySQL, but it is not what we would expect. First, why a full table scan? Well, rand() is a nondeterministic function. That means we do not know what it will return ahead of time, and actually that is exactly the purpose of rand() – to return a random value. In this case, it is only logical to evaluate the function for each row, each time, and compare the results, i.e. in our case:

  1. Read row 1, get the value of id, evaluate the value of RAND(), compare
  2. Proceed using the same algorithm with the remaining rows.

In other words, as the value of rand() is not known (not evaluated) beforehand, we can’t use an index.

And in this case – rand() function – we have another interesting consequence. For larger tables with an auto_increment primary key, the probability of matching the rand() value and the auto_increment value is higher, so we can get multiple rows back. In fact, if we read the whole table from the beginning and keep comparing the auto_inc sequence with “the roll of the dice”, we can get many rows back.

That behavior is totally counter-intuitive. Nevertheless, to me, it’s also the only correct behavior.

We expect to have the rand() function evaluated before running the query.  This can actually be achieved by assigning rand() to a variable:

mysql> set @id=round(rand()*10000, 0); select @id; select * from sbtest1 where id = @id; Query OK, 0 rows affected (0.00 sec) +------+ | @id | +------+ | 6068 | +------+ 1 row in set (0.00 sec) +------+--------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ | id | k | c | pad | +------+--------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ | 6068 | 502782 | 84971025350-12845068791-61736600622-38249796467-85706778555-74134284808-24438972515-17848828748-86869270666-01547789681 | 17507194006-70651503059-23792945260-94159543806-65683812344 | +------+--------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ 1 row in set (0.00 sec) mysql> explain select * from sbtest1 where id = @id; +----+-------------+---------+------------+-------+---------------+---------+---------+-------+------+----------+-------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+---------+------------+-------+---------------+---------+---------+-------+------+----------+-------+ | 1 | SIMPLE | sbtest1 | NULL | const | PRIMARY | PRIMARY | 4 | const | 1 | 100.00 | NULL | +----+-------------+---------+------------+-------+---------------+---------+---------+-------+------+----------+-------+ 1 row in set, 1 warning (0.01 sec)

This would meet our expectations.

There are (at least) two bug reports filed, with very interesting discussion:

  1. rand() used in scalar functions returns multiple rows
  2. SELECT on PK with ROUND(RAND()) give wrong errors
Other databases

I wanted to see how it works in other SQL databases. In PostgreSQL, the behavior is exactly the same as MySQL:

postgres=# select * from t2 where id = cast(random()*10000 as int); id | c ------+--------- 4093 | asdasda 9378 | asdasda (2 rows) postgres=# select * from t2 where id = cast(random()*10000 as int); id | c ------+--------- 5988 | asdasda 6674 | asdasda (2 rows) postgres=# explain select * from t2 where id = cast(random()*10000 as int); QUERY PLAN -------------------------------------------------------------------- Seq Scan on t2 (cost=0.00..159837.60 rows=1 width=12) Filter: (id = ((random() * '10000'::double precision))::integer) (2 rows)

And SQLite seems different, evaluating the random() function beforehand:

sqlite> select * from t2 where id = cast(abs(CAST(random() AS REAL))/92233720368547 as int); 16239|asdsadasdsa sqlite> select * from t2 where id = cast(abs(CAST(random() AS REAL))/92233720368547 as int); 32910|asdsadasdsa sqlite> select * from t2 where id = cast(abs(CAST(random() AS REAL))/92233720368547 as int); 58658|asdsadasdsa sqlite> explain select * from t2 where id = cast(abs(CAST(random() AS REAL))/92233720368547 as int); addr opcode p1 p2 p3 p4 p5 comment ---- ------------- ---- ---- ---- ------------- -- ------------- 0 Init 0 12 0 00 Start at 12 1 OpenRead 0 30182 0 2 00 root=30182 iDb=0; t2 2 Function0 0 0 3 random(0) 00 r[3]=func(r[0]) 3 Cast 3 69 0 00 affinity(r[3]) 4 Function0 0 3 2 abs(1) 01 r[2]=func(r[3]) 5 Divide 4 2 1 00 r[1]=r[2]/r[4] 6 Cast 1 68 0 00 affinity(r[1]) 7 SeekRowid 0 11 1 00 intkey=r[1]; pk 8 Copy 1 5 0 00 r[5]=r[1] 9 Column 0 1 6 00 r[6]=t2.c 10 ResultRow 5 2 0 00 output=r[5..6] 11 Halt 0 0 0 00 12 Transaction 0 0 2 0 01 usesStmtJournal=0 13 Int64 0 4 0 92233720368547 00 r[4]=92233720368547 14 Goto 0 1 0 00

Conclusion

Be careful when using MySQL nondeterministic functions in  a “where” condition – rand() is the most interesting example – as their behavior may surprise you. Many people believe this to be a bug that should be fixed. Let me know in the comments: do you think it is a bug or not (and why)? I would also be interested to know how it works in other, non-opensource databases (Microsoft SQL Server, Oracle, etc)

PS: Finally, I’ve got a “clever” idea – what if I “trick” MySQL by using the deterministic keyword…

MySQL stored functions: deterministic vs not deterministic

So, I wanted to see how it works with MySQL stored functions if they are assigned the “deterministic” and “not deterministic” keywords. First, I wanted to “trick” MySQL and declare the stored function DETERMINISTIC while using rand() inside. OK, this is not what you really want to do!

DELIMITER $$
CREATE FUNCTION myrand() RETURNS INT DETERMINISTIC
BEGIN
  RETURN round(rand()*10000, 0);
END$$
DELIMITER ;

From MySQL manual about MySQL stored routines we can read:

Assessment of the nature of a routine is based on the “honesty” of the creator: MySQL does not check that a routine declared DETERMINISTIC is free of statements that produce nondeterministic results. However, misdeclaring a routine might affect results or affect performance. Declaring a nondeterministic routine as DETERMINISTIC might lead to unexpected results by causing the optimizer to make incorrect execution plan choices. Declaring a deterministic routine as NONDETERMINISTIC might diminish performance by causing available optimizations not to be used.

The result is interesting:

mysql> select myrand(); +----------+ | myrand() | +----------+ | 4202 | +----------+ 1 row in set (0.00 sec) mysql> select myrand(); +----------+ | myrand() | +----------+ | 7548 | +----------+ 1 row in set (0.00 sec) mysql> explain select * from t2 where id = myrand()\G *************************** 1. row *************************** id: 1 select_type: SIMPLE table: NULL partitions: NULL type: NULL possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: NULL filtered: NULL Extra: Impossible WHERE noticed after reading const tables 1 row in set, 1 warning (0.00 sec) mysql> show warnings; +-------+------+--------------------------------------------------------------------------------+ | Level | Code | Message | +-------+------+--------------------------------------------------------------------------------+ | Note | 1003 | /* select#1 */ select '2745' AS `id`,'asasdas' AS `c` from `test`.`t2` where 0 | +-------+------+--------------------------------------------------------------------------------+ 1 row in set (0.00 sec) mysql> select * from t2 where id = 4202; +------+---------+ | id | c | +------+---------+ | 4202 | asasdas | +------+---------+ 1 row in set (0.00 sec) mysql> select * from t2 where id = 2745; +------+---------+ | id | c | +------+---------+ | 2745 | asasdas | +------+---------+ 1 row in set (0.00 sec)

So MySQL optimizer detected the problem (somehow).

If I use the NOT DETERMINISTIC keyword, then MySQL works the same as when using the rand() function:

DELIMITER $$
CREATE FUNCTION myrand2() RETURNS INT NOT DETERMINISTIC
BEGIN
  RETURN round(rand()*10000, 0);
END$$
DELIMITER ;

mysql> explain select * from t2 where id = myrand2()\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: t2
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 262208
     filtered: 10.00
        Extra: Using where
1 row in set, 1 warning (0.00 sec)

 



Create Animated React Apps With React Spring

One thing that is pivotal to creating great interactive web applications is animations. Animations add life to your applications and improve the overall user experience.

In this tutorial, we'll be looking at how to create simple yet lovely animations in your React apps using an npm package called react-spring, specifically, the Spring component of the package.

React Spring is a great animation package that has been endorsed by some of the React core team, including Dan Abramov, and is generally considered one of the best animation packages for React apps out there. It utilises spring-like physics in its core animations, making it easy to configure. In this tutorial we'll focus on the Spring component, which is one of react-spring's easier-to-use and more flexible components.

With Spring we are able to:

  • Manipulate values (numbers) of any sort, from measurement units to actual data.

  • Manipulate HTML attributes

  • Manipulate SVG paths

  • Adjust CSS

And much more!

Springs are cumulative, meaning they'll remember all values passed to them. Let's look at how we can get started with Springs, complete with an example making use of the newly announced React Hooks.

Examples

What We'll Build

We'll be building a simple sliding and fading animation to show you how easily you can achieve animations.

https://codesandbox.io/s/github/austinroy/react-spring-demo

Setting Up

We'll be setting up our React environment with create-react-app which will also generate some boilerplate code that will allow us to get started. To install it run

npm install -g create-react-app

Now you can use it to create your app. Run:

create-react-app react-spring-demo

A folder named react-spring-demo will be created containing our project. Cd into that directory and install our primary dependency, the react-spring package by running:

yarn add react-spring

You will notice we're using yarn as the package manager for this project as it is the default package manager used by create-react-app. Make sure to have yarn installed by running

npm install -g yarn

We are now all set up; let's create our first animated page.

Animating Styles

Spring can be used to animate styles. To show this, we'll use it to animate the transition into a newly loaded page by wrapping the JSX content of App.js in a Spring component.

The Spring component will take two props, from and to, which represent the values to be interpolated by our animation.

In our case we want to create the effect of a page dropping down from above and fading in. To do this, we'll set the initial top margin of the page elements to be a negative value and bring it to 0 during the animation creating a dropping motion. To create the fade in effect, we'll set the initial value of the opacity to 0 and bring that value to 1 at the end of the animation. Luckily for us, the boilerplate generated by create-react-app has the perfect background to show this effect at work so we won't need to change it for now.

This is what it will look like in our App.js file:

// App.js
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
import { Spring } from 'react-spring';

class App extends Component {
  render() {
    return (
      <Spring
        from={{ opacity: 0, marginTop: -1000 }}
        to={{ opacity: 1, marginTop: 0 }}>
        {props => (
          <div className="App" style={props}>
            <div>
              <header className="App-header">
                <img src={logo} className="App-logo" alt="logo" />
                <p>
                  Edit <code>src/App.js</code> and save to reload.
                </p>
                <a
                  className="App-link"
                  href="https://reactjs.org"
                  target="_blank"
                  rel="noopener noreferrer"
                >
                  Learn React
                </a>
              </header>
            </div>
          </div>
        )}
      </Spring>
    );
  }
}

export default App;

Now fire up your application by running this command.

yarn start

Your browser will open up and you should see the page load with the contents having the desired drop and fade in animations.

Nice, isn't it? You can use Spring to create even more style animations by adjusting a variety of styles. It is, however, advisable to stick to animating opacity and translations to keep your app light; a variant of the example above along those lines is sketched below.
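As an illustration (my own variant, not from the original tutorial), the same drop-and-fade effect expressed with a transform instead of margin-top could look something like this:

<Spring
  from={{ opacity: 0, transform: 'translate3d(0, -100%, 0)' }}
  to={{ opacity: 1, transform: 'translate3d(0, 0%, 0)' }}>
  {props => (
    <div className="App" style={props}>
      {/* page content goes here */}
    </div>
  )}
</Spring>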

Animating innerText

Animating styles is great but we can also use Spring to animate the value of contents shown on the screen. To show this, we'll be creating a counter that starts at 0 and ends at 10 using Spring. As expected, from will hold our initial value and to will hold the final value to be displayed.

Under the src directory, create a folder called components and in it a file called Counter.jsx. Add the following code to Counter.jsx:

// src/components/Counter.jsx
import React from 'react';
import { Spring } from 'react-spring';

const counter = () => (
  <Spring from={{ number: 0 }} to={{ number: 10 }}>
    {props => <div>{props.number.toFixed()}</div>}
  </Spring>
)

export default counter;

Now import our counter into App.js and add it under the header element to render it in our app.

// App.js
// ...
import Counter from './components/Counter';

class App extends Component {
  render() {
    return (
      <Spring
        from={{ opacity: 0, marginTop: -1000 }}
        to={{ opacity: 1, marginTop: 0 }}>
        {props => (
          <div className="App" style={props}>
            <div>
              <header className="App-header">
                <img src={logo} className="App-logo" alt="logo" />
                <p>
                  Edit <code>src/App.js</code> and save to reload.
                </p>
                <a
                  className="App-link"
                  href="https://reactjs.org"
                  target="_blank"
                  rel="noopener noreferrer"
                >
                  Learn React
                </a>
                <Counter />
              </header>
            </div>
          </div>
        )}
      </Spring>
    );
  }
}

export default App;

Opening up your browser, you will notice the counter just under the Learn React text. Just one catch: our animation happens so soon that we miss most of it, as it occurs while our initial page is still animating into visibility. Luckily, we can delay our animation by adding a delay prop, set to a value in milliseconds; this is the amount of time our animation will wait before starting. Adding a 1 second delay, the counter function will now look like this:

const counter = () => (
  <Spring from={{ number: 0 }} to={{ number: 10 }} delay='1000'>
    {props => <div>{props.number.toFixed()}</div>}
  </Spring>
)

Checking the browser, the counter now starts after the page animations are finished. Another method we can use to add this delay is through the config prop which we'll come to when discussing the Spring configurations shortly.

Spring config

As mentioned before, Springs are physics based. This means we don't have to manually deal with durations and curves. This is great, as it takes away some of the heavy math we might otherwise have to cover. However, we can still adjust the behaviour of our Spring by tweaking its tension, friction, delays, mass and other behaviour through the config prop.

Don't wish to deal with this but still want to adjust your animations? Don't worry, react-spring comes with some inbuilt presets that we can use to tweak our Springs. All we have to do is import config from the react-spring package and feed them to the config prop of the Spring. Before we get confused about which config is which, let's take a look at an example.

import React from 'react';
import { Spring, config } from 'react-spring';

const counter = () => (
  <Spring from={{ number: 0 }} to={{ number: 10 }} delay='1000' config={config.molasses}>
    {props => <div>{props.number.toFixed()}</div>}
  </Spring>
)

export default counter;

In the example above, we've used the molasses preset, which is a high-tension, high-friction preset provided by react-spring. The presets typically define the tension and friction properties of our Spring. These presets include molasses, default, slow, stiff and wobbly.

While the presets only define the tension and friction, you can manually configure other properties of the Spring animation which include but are not limited to delay, mass, velocity and duration. For a full list of properties you can configure, along with other options that can be passed as props, check out this page.

Usage With Hooks

The React team recently introduced React Hooks, which allow us to create functional components that can store data and cause effects, basically adding state to functional components. Hooks are currently only available in the React 16.7 alpha as we await a stable release. To use hooks you will need to upgrade to the 16.7 alpha versions of react and react-dom.

To do this, run the following commands:

yarn remove react-dom && yarn add react-dom@16.7.0-alpha.0
yarn remove react && yarn add react@16.7.0-alpha.0

We can use hooks out of the box with react-spring, which exports a hook called useSpring. This hook allows us to define and update data; it generally takes the same values you would pass as props, and useSpring turns them into animated data. To showcase this, let's look at how we can have more text rendered after our previous animations are done.

Here's how we can do that, let's create a new component file called Hooks.jsx and add the following code.

// Hooks.jsx
import React from 'react';
import { useSpring, animated } from 'react-spring';

const HookedComponent = () => {
  const [props] = useSpring({
    opacity: 1,
    color: 'white',
    from: { opacity: 0 },
    delay: '2000'
  })

  return <animated.div style={props}>This text Faded in Using hooks</animated.div>
}

export default HookedComponent;

We pass the spring settings as an object of arguments to useSpring which will then pass these values to the animated element that then creates our animated spring. We've set our delay to 2000ms to ensure the text from our hooked component fades in after the counter is finished.

Now let's import this into App.js and use the HookedComponent in our app. After cleaning up some of the initial boilerplate code from `create-react-app`, it should end up looking like this. Fire up your final application and see the magic.

Conclusion

You now have the tools to get started using react-spring. Spring is one of the easiest react-spring components to use, and it provides a simple yet effective means of animating React applications while taking a huge amount of the workload away from the developer.

Here's the CodeSandbox:

https://codesandbox.io/s/github/austinroy/react-spring-demo

You can build on Spring by making use of react-spring's other components such as Transition which animates component lifecycles and Trail which animates the first element of an array and has the rest follow it in a natural trail. Overall, react-spring is a great package with a variety of options depending on your animation needs.

Upcoming Events with MySQL!

We are happy to confirm our attendance at another two shows which are going to happen in a couple of days. They are OpenSource Conference in Fukuoka & IT.Tage in Frankfurt, Germany. Please find details below:

  • Name: OpenSource Conference Fukuoka
  • Place: Fukuoka, Japan
  • Date: December 8, 2018
  • Where to find us:
    • at the MySQL booth in the expo area
    • at the MySQL talk on: "State of Dolphin" given by Yoshiaki Yamasaki, the MySQL Senior Sales Consultant Asia Pacific and Japan. See the official schedule on conference website for the timing and place.
  • Name: IT.Tage 2018
  • Place: Frankfurt, Germany
  • Date: December 10-13, 2018
  • Where to find us:
    • at the shared Oracle booth in the expo area
    • at the MySQL talk on: "MySQL 8-MySQL as a Document Store Database" given by Carsten Thalheimer, the MySQL Principal Sales Consultant. Talk is scheduled for Dec 12 @2:30-3:15pm in the Database track

We are looking forward to meeting you there!


Laravel Stripe Payment Gateway Integration Tutorial With Example

Laravel Stripe Payment Gateway Integration Tutorial With Example is today's leading topic. We will use the Cashier package to integrate the Stripe Payment Gateway in Laravel. The demand for SaaS-based platforms is increasing day by day, and nowadays building a subscription-based system is universal. So on a backend like Laravel, we need packages that can help us build scalable, secure, and reliable web applications. Laravel Cashier makes it very simple to integrate a payment gateway like Stripe. So let us see how we can integrate Stripe in Laravel for a subscription-based system.

Laravel Stripe Payment Gateway Integration

Let us first install and configure Laravel 5.7

Step 1: Install and configure Laravel 5.7

Type the following command.

laravel new stripesub

Go to the project folder and open the project in an editor. I am using VSCode.

cd stripesub && code .

Install the js dependencies using the following command.

npm install

Install the Cashier package for Stripe dependencies.

composer require laravel/cashier

Also, create an authentication scaffold.

php artisan make:auth

Step 2: Create the essential database migrations

Before using Cashier, we'll also need to prepare the database. We need to add several columns to the users table and create new subscriptions and plans tables to hold all of our customers' subscriptions.

So, first go to the create_users_table.php and add the following schema.

$table->string('stripe_id')->nullable()->collation('utf8mb4_bin');
$table->string('card_brand')->nullable();
$table->string('card_last_four', 4)->nullable();
$table->timestamp('trial_ends_at')->nullable();

Your schema now looks like this.

public function up()
{
    Schema::create('users', function (Blueprint $table) {
        $table->increments('id');
        $table->string('name');
        $table->string('email')->unique();
        $table->timestamp('email_verified_at')->nullable();
        $table->string('password');
        $table->string('stripe_id')->nullable()->collation('utf8mb4_bin');
        $table->string('card_brand')->nullable();
        $table->string('card_last_four', 4)->nullable();
        $table->timestamp('trial_ends_at')->nullable();
        $table->rememberToken();
        $table->timestamps();
    });
}

Now, create two more migrations files.

php artisan make:migration create_subscriptions_table
php artisan make:migration create_plans_table

Now, write the schema on both of the files.

First, add the following schema inside the create_subscriptions_table.php file.

public function up()
{
    Schema::create('subscriptions', function (Blueprint $table) {
        $table->increments('id');
        $table->unsignedInteger('user_id');
        $table->string('name');
        $table->string('stripe_id')->collation('utf8mb4_bin');
        $table->string('stripe_plan');
        $table->integer('quantity');
        $table->timestamp('trial_ends_at')->nullable();
        $table->timestamp('ends_at')->nullable();
        $table->timestamps();
    });
}

Write the following schema inside the create_plans_table.php file.

public function up()
{
    Schema::create('plans', function (Blueprint $table) {
        $table->increments('id');
        $table->string('name');
        $table->string('slug')->unique();
        $table->string('stripe_plan');
        $table->float('cost');
        $table->text('description')->nullable();
        $table->timestamps();
    });
}

Save the file and go to the terminal and migrate tables.

php artisan migrate

Now, add two plans manually to the plans table, like this (one quick way to do it is sketched below).
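For example, from php artisan tinker (the values here are placeholders; the stripe_plan values must match the plan IDs you will create on the Stripe dashboard in Step 4):

DB::table('plans')->insert([
    ['name' => 'Basic', 'slug' => 'basic', 'stripe_plan' => 'basic', 'cost' => 10.00, 'description' => 'Basic plan', 'created_at' => now(), 'updated_at' => now()],
    ['name' => 'Professional', 'slug' => 'professional', 'stripe_plan' => 'professional', 'cost' => 20.00, 'description' => 'Professional plan', 'created_at' => now(), 'updated_at' => now()],
]);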

 

Step 3: Get and Set Stripe API Keys

First, you need to create an account on Stripe. You can do that here. Then, after logging in to your account, you will find a Test dashboard.

Now, click on the Developers link on the left sidebar. In the submenu, you can find the API keys item. Click on that item, and you will be redirected to this page.

 

Here, you can find the Publishable Key or Stripe Key and Secret Key.

You need to add these keys inside the .env file in our project.

// .env
STRIPE_KEY=your key here
STRIPE_SECRET=your secret here

Finally, you should configure your Stripe key in your services.php configuration file. You can retrieve your Stripe API keys from the Stripe control panel.

// services.php
'stripe' => [
    'model' => App\User::class,
    'key' => env('STRIPE_KEY'),
    'secret' => env('STRIPE_SECRET'),
],

Also, you need to add the Billable Trait inside the User.php model.

// User.php
use Laravel\Cashier\Billable;

class User extends Authenticatable
{
    use Billable;
}

Step 4: Create Plans on Stripe Dashboard

You can create a plan through Laravel, but that would take some time, and our goal here is to accept payments through Stripe. So we will create the plans manually on the Stripe Dashboard.

First, go to the Billing >> Products link and right now, there are no products available. So we need to create two products. Here products mean plans. So we will create two plans.

  1. Basic
  2. Professional

Create and assign the values same as we have assigned the values on the Plans table in our Laravel application. After creating two plans, your products look like this.

 

Step 5: Display Plans on Frontend

First, define the routes for our application inside the routes >> web.php file.

// web.php
Route::group(['middleware' => 'auth'], function() {
    Route::get('/home', 'HomeController@index')->name('home');
    Route::get('/plans', 'PlanController@index')->name('plans.index');
});

We have taken the auth middleware to protect the routes related to payment and home.

Now, create the Plan.php model and PlanController.php file.

php artisan make:model Plan
php artisan make:controller PlanController

Define the index method inside the PlanController.

// PlanController.php
use App\Plan;

public function index()
{
    $plans = Plan::all();

    return view('plans.index', compact('plans'));
}

Now, inside the resources >> views folder, create one folder called plans and inside that folder, create one file called index.blade.php file. Write the following code.

@extends('layouts.app') @section('content') <div class="container"> <div class="row justify-content-center"> <div class="col-md-12"> <div class="card"> <div class="card-header">Plans</div> <div class="card-body"> <ul class="list-group"> @foreach($plans as $plan) <li class="list-group-item clearfix"> <div class="pull-left"> <h5>{{ $plan->name }}</h5> <h5>${{ number_format($plan->cost, 2) }} monthly</h5> <h5>{{ $plan->description }}</h5> <a href="" class="btn btn-outline-dark pull-right">Choose</a> </div> </li> @endforeach </ul> </div> </div> </div> </div> </div> @endsection

Now, register a user in the Laravel application and go to this URL: http://stripesub.test/plans

You will find the plans like this.

 

Step 6: Show the plan

So, when the user chooses the plan, we need to redirect the user to a particular plan page.

Define one more route inside the routes >> web.php file.

// web.php
Route::group(['middleware' => 'auth'], function() {
    Route::get('/home', 'HomeController@index')->name('home');
    Route::get('/plans', 'PlanController@index')->name('plans.index');
    Route::get('/plan/{plan}', 'PlanController@show')->name('plans.show');
});

Now, by default, route model binding works with the ID of the model. But we will not pass the ID to show a particular plan; instead we will pass the slug. So we need to define that inside the Plan.php model.

<?php

// Plan.php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Plan extends Model
{
    protected $fillable = [
        'name', 'slug', 'stripe_plan', 'cost', 'description'
    ];

    public function getRouteKeyName()
    {
        return 'slug';
    }
}

Here, we have defined the function called getRouteKeyName. Based on this function, we can now fetch the record by its slug instead of its ID. That is why we made the slug field unique in the database.

Now, define the show() function inside the PlanController.php file.

// PlanController.php
public function show(Plan $plan, Request $request)
{
    return view('plans.show', compact('plan'));
}

Next step is to create a view file called show.blade.php inside the resources >> views >> plans folder. Add the following code inside the show.blade.php file.

@extends('layouts.app') @section('content') <div class="container"> <div class="row justify-content-center"> <div class="col-md-12"> <div class="card"> <div class="card-header">{{ $plan->name }}</div> <div class="card-body"> </div> </div> </div> </div> </div> @endsection

Now, we will display the Payment Form on this page.

Step 7: Display the Payment Form

For this example, I am using the Card Element. You can find it in the official Stripe documentation. It securely collects sensitive card details using Elements, Stripe's pre-built UI components.

 

Now, we need to include the External JS files in our project. For that, we need to modify the resources >> views >> layouts >> app.blade.php file.

<!DOCTYPE html> <html lang="{{ str_replace('_', '-', app()->getLocale()) }}"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <!-- CSRF Token --> <meta name="csrf-token" content="{{ csrf_token() }}"> <title>{{ config('app.name', 'Laravel') }}</title> <!-- Fonts --> <link rel="dns-prefetch" href="//fonts.gstatic.com"> <link href="https://fonts.googleapis.com/css?family=Nunito" rel="stylesheet" type="text/css"> <!-- Styles --> <link href="{{ asset('css/app.css') }}" rel="stylesheet"> </head> <body> <div id="app"> <nav class="navbar navbar-expand-md navbar-light navbar-laravel"> <div class="container"> <a class="navbar-brand" href="{{ url('/') }}"> {{ config('app.name', 'Laravel') }} </a> <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="{{ __('Toggle navigation') }}"> <span class="navbar-toggler-icon"></span> </button> <div class="collapse navbar-collapse" id="navbarSupportedContent"> <!-- Left Side Of Navbar --> <ul class="navbar-nav mr-auto"> </ul> <!-- Right Side Of Navbar --> <ul class="navbar-nav ml-auto"> <!-- Authentication Links --> @guest <li class="nav-item"> <a class="nav-link" href="{{ route('login') }}">{{ __('Login') }}</a> </li> <li class="nav-item"> @if (Route::has('register')) <a class="nav-link" href="{{ route('register') }}">{{ __('Register') }}</a> @endif </li> @else <li class="nav-item"> <a class="nav-link" href="{{ route('plans.index') }}">{{ __('Plans') }}</a> </li> <li class="nav-item dropdown"> <a id="navbarDropdown" class="nav-link dropdown-toggle" href="#" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false" v-pre> {{ Auth::user()->name }} <span class="caret"></span> </a> <div class="dropdown-menu dropdown-menu-right" aria-labelledby="navbarDropdown"> <a class="dropdown-item" href="{{ route('logout') }}" onclick="event.preventDefault(); document.getElementById('logout-form').submit();"> {{ __('Logout') }} </a> <form id="logout-form" action="{{ route('logout') }}" method="POST" style="display: none;"> @csrf </form> </div> </li> @endguest </ul> </div> </div> </nav> <main class="py-4"> @yield('content') </main> </div> <!-- Scripts --> <script src="{{ asset('js/app.js') }}"></script> @yield('scripts') </body> </html>

Here, we have defined one navigation link called plans and also add one section for scripts. So, now we can add the Javascript per pagewise using @yield directive.

Next step is to write the Card Element inside the show.blade.php file.

@extends('layouts.app') @section('content') <div class="container"> <div class="row justify-content-center"> <div class="col-md-12"> <div class=""> <p>You will be charged ${{ number_format($plan->cost, 2) }} for {{ $plan->name }} Plan</p> </div> <div class="card"> <form action="#" method="post" id="payment-form"> @csrf <div class="form-group"> <div class="card-header"> <label for="card-element"> Enter your credit card information </label> </div> <div class="card-body"> <div id="card-element"> <!-- A Stripe Element will be inserted here. --> </div> <!-- Used to display form errors. --> <div id="card-errors" role="alert"></div> <input type="hidden" name="plan" value="{{ $plan->id }}" /> </div> </div> <div class="card-footer"> <button class="btn btn-dark" type="submit">Pay</button> </div> </form> </div> </div> </div> </div> @endsection @section('scripts') <script src="https://js.stripe.com/v3/"></script> <script> // Create a Stripe client. var stripe = Stripe('{{ env("STRIPE_KEY") }}'); // Create an instance of Elements. var elements = stripe.elements(); // Custom styling can be passed to options when creating an Element. // (Note that this demo uses a wider set of styles than the guide below.) var style = { base: { color: '#32325d', lineHeight: '18px', fontFamily: '"Helvetica Neue", Helvetica, sans-serif', fontSmoothing: 'antialiased', fontSize: '16px', '::placeholder': { color: '#aab7c4' } }, invalid: { color: '#fa755a', iconColor: '#fa755a' } }; // Create an instance of the card Element. var card = elements.create('card', {style: style}); // Add an instance of the card Element into the `card-element` <div>. card.mount('#card-element'); // Handle real-time validation errors from the card Element. card.addEventListener('change', function(event) { var displayError = document.getElementById('card-errors'); if (event.error) { displayError.textContent = event.error.message; } else { displayError.textContent = ''; } }); // Handle form submission. var form = document.getElementById('payment-form'); form.addEventListener('submit', function(event) { event.preventDefault(); stripe.createToken(card).then(function(result) { if (result.error) { // Inform the user if there was an error. var errorElement = document.getElementById('card-errors'); errorElement.textContent = result.error.message; } else { // Send the token to your server. stripeTokenHandler(result.token); } }); }); // Submit the form with the token ID. function stripeTokenHandler(token) { // Insert the token ID into the form so it gets submitted to the server var form = document.getElementById('payment-form'); var hiddenInput = document.createElement('input'); hiddenInput.setAttribute('type', 'hidden'); hiddenInput.setAttribute('name', 'stripeToken'); hiddenInput.setAttribute('value', token.id); form.appendChild(hiddenInput); // Submit the form form.submit(); } </script> @endsection

Here, we have used the Card element and Javascript to display the form.

Now, add the link to this page inside the index.blade.php file.

// index.blade.php <a href="{{ route('plans.show', $plan->slug) }}" class="btn btn-outline-dark pull-right">Choose</a>

Clicking the Choose button redirects to the plan's payment page.


On this page, we need to enter the card details. For testing, you can use the following inputs:

  1. Card Number: 4242 4242 4242 4242
  2. Expiration Date: any date in the future, for example 10/21
  3. CVC: whatever you like
  4. Zipcode: whatever you like

These are Stripe test card details; they are generally used with a sandbox (test mode) account to verify the application.

Right now, submitting the form does nothing because we have not yet defined the form action that stores the data in the database tables. So let us define the POST route.

Step 8: Save the subscriptions and accept payment

Define the final route inside the web.php file.

// web.php
Route::group(['middleware' => 'auth'], function() {
    Route::get('/home', 'HomeController@index')->name('home');
    Route::get('/plans', 'PlanController@index')->name('plans.index');
    Route::get('/plan/{plan}', 'PlanController@show')->name('plans.show');
    Route::post('/subscription', 'SubscriptionController@create')->name('subscription.create');
});

We have defined the POST route for subscriptions. Now, create SubscriptionController using the following command.

php artisan make:controller SubscriptionController

Now, point the form inside the show.blade.php file to this POST route so that the payment data is sent to the server.

// show.blade.php <form action="{{ route('subscription.create') }}" method="post" id="payment-form">

Inside that controller, we need to define one function called create().

<?php
// SubscriptionController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Plan;

class SubscriptionController extends Controller
{
    public function create(Request $request, Plan $plan)
    {
        $plan = Plan::findOrFail($request->get('plan'));

        $request->user()
            ->newSubscription('main', $plan->stripe_plan)
            ->create($request->stripeToken);

        return redirect()->route('home')->with('success', 'Your plan subscribed successfully');
    }
}

Here, we create the subscription for the authenticated user and charge their card.

First, we fetch the plan by its id. Then we pass that plan to the newSubscription() method to create the subscription.

Note that newSubscription() comes from the Billable trait: the first argument is the subscription name ('main') and the second is the plan's Stripe plan identifier. The create() call then charges the card using the Stripe token posted by the form.

If the payment is made successfully, the user is redirected to the home page with a success message.

Write the following code inside the home.blade.php file.

@extends('layouts.app')

@section('content')
<div class="container">
    <div class="row justify-content-center">
        <div class="col-md-8">
            @if(session()->get('success'))
                <div class="alert alert-success">
                    {{ session()->get('success') }}
                </div>
            @endif
            <div class="card">
                <div class="card-header">Dashboard</div>
                <div class="card-body">
                    @if (session('status'))
                        <div class="alert alert-success" role="alert">
                            {{ session('status') }}
                        </div>
                    @endif
                    Welcome to the Dashboard
                </div>
            </div>
        </div>
    </div>
</div>
@endsection

If all of your configuration is right, go to any of the plans and try to subscribe. If you are redirected to the home page with the success message, you are almost done.

You should now see one new entry in the subscriptions table.
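If you want to double-check it from the MySQL side, a quick query against the subscriptions table is enough. This is only a sketch: the column names assume the default Laravel Cashier migration of that era, so adjust them if your schema differs.

mysql> SELECT id, user_id, name, stripe_plan, quantity, created_at FROM subscriptions ORDER BY id DESC LIMIT 1;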


The users table has also been updated: Cashier fills in its Stripe-related columns for the subscribed user.

You can check the Stripe Dashboard as well, since we have subscribed to the $50 Professional Plan.


Step 9: Security Tweaks

Keep in mind that if the user is already subscribed to a plan, we should prevent them from choosing that same plan again. So, we need to add a condition around the Choose button.

So, add the condition inside the index.blade.php file.

@if(!auth()->user()->subscribedToPlan($plan->stripe_plan, 'main'))
    <a href="{{ route('plans.show', $plan->slug) }}" class="btn btn-outline-dark pull-right">Choose</a>
@endif

Also, we need to add the condition inside the PlanController.php file’s show() function.

// PlanController.php
public function show(Plan $plan, Request $request)
{
    if($request->user()->subscribedToPlan($plan->stripe_plan, 'main')) {
        return redirect()->route('home')->with('success', 'You have already subscribed the plan');
    }

    return view('plans.show', compact('plan'));
}

Also, we need to do the same thing inside the SubscriptionController’s create() method.

// SubscriptionController.php
public function create(Request $request, Plan $plan)
{
    if($request->user()->subscribedToPlan($plan->stripe_plan, 'main')) {
        return redirect()->route('home')->with('success', 'You have already subscribed the plan');
    }

    $plan = Plan::findOrFail($request->get('plan'));

    $request->user()
        ->newSubscription('main', $plan->stripe_plan)
        ->create($request->stripeToken);

    return redirect()->route('home')->with('success', 'Your plan subscribed successfully');
}

Save the file, and now you are good to go. I am putting this Laravel Stripe Payment Gateway Integration Tutorial’s whole code on Github.

That wraps up the Laravel Stripe Payment Gateway Integration Tutorial. There are still many things we could do with the project, but this is enough for a basic understanding. Thanks.

Github Code

The post Laravel Stripe Payment Gateway Integration Tutorial With Example appeared first on AppDividend.

MySQL on Docker: Multiple Delayed Replication Slaves for Disaster Recovery with Low RTO

Delayed replication allows a replication slave to deliberately lag behind the master by at least a specified amount of time. Before executing an event, the slave will first wait, if necessary, until the given time has passed since the event was created on the master. The result is that the slave will reflect the state of the master some time back in the past. This feature is supported since MySQL 5.6 and MariaDB 10.2.3. It can come in handy in case of accidental data deletion, and should be part of your disaster recovery plan.

The problem when setting up a delayed replication slave is deciding how much delay to use. Too short, and you risk the bad query reaching your delayed slave before you can react, which defeats the purpose of having it. Too long, and it can take hours for the delayed slave to catch up to where the master was at the time of the error.

Luckily, process isolation is one of Docker's strengths, and running multiple MySQL instances on one host is pretty convenient. It allows us to have multiple delayed slaves within a single physical host to improve our recovery time and save hardware resources. If you think a 15-minute delay is too short, we can have another instance with a 1-hour or 6-hour delay for an even older snapshot of our database.

In this blog post, we are going to deploy multiple MySQL delayed slaves on one single physical host with Docker, and show some recovery scenarios. The following diagram illustrates our final architecture that we want to build:

Our architecture consists of an already deployed 2-node MySQL replication setup running on physical servers (blue), and we would like to set up three additional MySQL slaves (green) with the following delays:

  • 15 minutes delay
  • 1 hour delay
  • 6 hours delay

Take note that we are going to keep 3 copies of the exact same data on the same physical server, so make sure the Docker host has the required storage and allocate sufficient disk space beforehand.

MySQL Master Preparation

Firstly, login to the master server and create the replication user:

mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%' IDENTIFIED BY 'YlgSH6bLLy';

Then, create a PITR-compatible backup on the master:

$ mysqldump -uroot -p --flush-privileges --hex-blob --opt --master-data=1 --single-transaction --skip-lock-tables --triggers --routines --events --all-databases | gzip -6 -c > mysqldump_complete.sql.gz
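Before transferring it, it may be worth a quick sanity check that the dump really carries the replication coordinates. Something along these lines should show the GTID_PURGED and CHANGE MASTER lines near the top of the file (a sketch; the exact position of those lines depends on your server):

$ zcat mysqldump_complete.sql.gz | head -n 60 | grep -iE 'GTID_PURGED|CHANGE MASTER'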

If you are using ClusterControl, you can make a PITR-compatible backup easily. Go to Backups -> Create Backup and pick "Complete PITR-compatible" under the "Dump Type" dropdown:

Finally, transfer this backup to the Docker host:

$ scp mysqldump_complete.sql.gz root@192.168.55.200:~

This backup file will be used by the MySQL slave containers during the slave bootstrapping process, as shown in the next section.

Delayed Slave Deployment

Prepare our Docker container directories. Create 3 directories (mysql.conf.d, datadir and sql) for every MySQL container that we are going to launch (you can use a loop to simplify this, as sketched after the listing):

$ mkdir -p /storage/mysql-slave-15m/mysql.conf.d
$ mkdir -p /storage/mysql-slave-15m/datadir
$ mkdir -p /storage/mysql-slave-15m/sql
$ mkdir -p /storage/mysql-slave-1h/mysql.conf.d
$ mkdir -p /storage/mysql-slave-1h/datadir
$ mkdir -p /storage/mysql-slave-1h/sql
$ mkdir -p /storage/mysql-slave-6h/mysql.conf.d
$ mkdir -p /storage/mysql-slave-6h/datadir
$ mkdir -p /storage/mysql-slave-6h/sql
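The loop mentioned above could look something like this (a minimal bash sketch, assuming the same /storage layout):

$ for d in 15m 1h 6h; do
    mkdir -p /storage/mysql-slave-${d}/{mysql.conf.d,datadir,sql}
  done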

The "mysql.conf.d" directory stores our custom MySQL configuration file and is mapped into the container under /etc/mysql/mysql.conf.d. The "datadir" directory is where we want Docker to store the MySQL data directory, mapped to /var/lib/mysql of the container. The "sql" directory stores our SQL files: backup files in .sql or .sql.gz format to stage the slave before replicating, and .sql files to automate the replication configuration and startup.

15-minute Delayed Slave

Prepare the MySQL configuration file for our 15-minute delayed slave:

$ vim /storage/mysql-slave-15m/mysql.conf.d/my.cnf

And add the following lines:

[mysqld]
server_id=10015
binlog_format=ROW
log_bin=binlog
log_slave_updates=1
gtid_mode=ON
enforce_gtid_consistency=1
relay_log=relay-bin
expire_logs_days=7
read_only=ON

** The server-id value we used for this slave is 10015.

Next, under /storage/mysql-slave-15m/sql directory, create two SQL files, one to RESET MASTER (1reset_master.sql) and another one to establish the replication link using CHANGE MASTER statement (3setup_slave.sql).

Create a text file 1reset_master.sql and add the following line:

RESET MASTER;

Create a text file 3setup_slave.sql and add the following lines:

CHANGE MASTER TO
  MASTER_HOST = '192.168.55.171',
  MASTER_USER = 'rpl_user',
  MASTER_PASSWORD = 'YlgSH6bLLy',
  MASTER_AUTO_POSITION = 1,
  MASTER_DELAY=900;
START SLAVE;

MASTER_DELAY=900 is equal to 15 minutes (in seconds). Then copy the backup file taken from our master (and already transferred to our Docker host) into the "sql" directory and rename it as 2mysqldump_complete.sql.gz:

$ cp ~/mysqldump_complete.sql.gz /storage/mysql-slave-15m/sql/2mysqldump_complete.sql.gz

The final look of our "sql" directory should be something like this:

$ pwd
/storage/mysql-slave-15m/sql

$ ls -1
1reset_master.sql
2mysqldump_complete.sql.gz
3setup_slave.sql

Take note that we prefix the SQL filename with an integer to determine the execution order when Docker initializes the MySQL container.

Once everything is in place, run the MySQL container for our 15-minute delayed slave:

$ docker run -d \
  --name mysql-slave-15m \
  -e MYSQL_ROOT_PASSWORD=password \
  --mount type=bind,source=/storage/mysql-slave-15m/datadir,target=/var/lib/mysql \
  --mount type=bind,source=/storage/mysql-slave-15m/mysql.conf.d,target=/etc/mysql/mysql.conf.d \
  --mount type=bind,source=/storage/mysql-slave-15m/sql,target=/docker-entrypoint-initdb.d \
  mysql:5.7

** The MYSQL_ROOT_PASSWORD value must be the same as the MySQL root password on the master.

The following lines are what we are looking for to verify if MySQL is running correctly and connected as a slave to our master (192.168.55.171):

$ docker logs -f mysql-slave-15m
...
2018-12-04T04:05:24.890244Z 0 [Note] mysqld: ready for connections.
Version: '5.7.24-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
2018-12-04T04:05:25.010032Z 2 [Note] Slave I/O thread for channel '': connected to master 'rpl_user@192.168.55.171:3306',replication started in log 'FIRST' at position 4

You can then verify the replication status with the following statement:

$ docker exec -it mysql-slave-15m mysql -uroot -p -e 'show slave status\G'
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                    SQL_Delay: 900
                Auto_Position: 1
...
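Besides Slave_IO_Running and Slave_SQL_Running, the SQL_Remaining_Delay field of SHOW SLAVE STATUS tells you how many seconds the applier is still intentionally holding back the next event. A quick way to watch it (a sketch, reusing the container name and the root password set above):

$ docker exec -it mysql-slave-15m mysql -uroot -ppassword -e 'SHOW SLAVE STATUS\G' | grep -E 'SQL_Delay|SQL_Remaining_Delay|Seconds_Behind_Master'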

At this point, our 15-minute delayed slave container is replicating correctly and our architecture is looking something like this:

1-hour Delayed Slave

Prepare the MySQL configuration file for our 1-hour delayed slave:

$ vim /storage/mysql-slave-1h/mysql.conf.d/my.cnf

And add the following lines:

[mysqld]
server_id=10060
binlog_format=ROW
log_bin=binlog
log_slave_updates=1
gtid_mode=ON
enforce_gtid_consistency=1
relay_log=relay-bin
expire_logs_days=7
read_only=ON

** The server-id value we used for this slave is 10060.

Next, under /storage/mysql-slave-1h/sql directory, create two SQL files, one to RESET MASTER (1reset_master.sql) and another one to establish the replication link using CHANGE MASTER statement (3setup_slave.sql).

Create a text file 1reset_master.sql and add the following line:

RESET MASTER;

Create a text file 3setup_slave.sql and add the following lines:

CHANGE MASTER TO
  MASTER_HOST = '192.168.55.171',
  MASTER_USER = 'rpl_user',
  MASTER_PASSWORD = 'YlgSH6bLLy',
  MASTER_AUTO_POSITION = 1,
  MASTER_DELAY=3600;
START SLAVE;

MASTER_DELAY=3600 is equal to 1 hour (in seconds). Then copy the backup file taken from our master (and already transferred to our Docker host) into the "sql" directory and rename it as 2mysqldump_complete.sql.gz:

$ cp ~/mysqldump_complete.sql.gz /storage/mysql-slave-1h/sql/2mysqldump_complete.sql.gz

The final look of our "sql" directory should be something like this:

$ pwd
/storage/mysql-slave-1h/sql

$ ls -1
1reset_master.sql
2mysqldump_complete.sql.gz
3setup_slave.sql

Take note that we prefix the SQL filename with an integer to determine the execution order when Docker initializes the MySQL container.

Once everything is in place, run the MySQL container for our 1-hour delayed slave:

$ docker run -d \
  --name mysql-slave-1h \
  -e MYSQL_ROOT_PASSWORD=password \
  --mount type=bind,source=/storage/mysql-slave-1h/datadir,target=/var/lib/mysql \
  --mount type=bind,source=/storage/mysql-slave-1h/mysql.conf.d,target=/etc/mysql/mysql.conf.d \
  --mount type=bind,source=/storage/mysql-slave-1h/sql,target=/docker-entrypoint-initdb.d \
  mysql:5.7

** The MYSQL_ROOT_PASSWORD value must be the same as the MySQL root password on the master.

The following lines are what we are looking for to verify if MySQL is running correctly and connected as a slave to our master (192.168.55.171):

$ docker logs -f mysql-slave-1h
...
2018-12-04T04:05:24.890244Z 0 [Note] mysqld: ready for connections.
Version: '5.7.24-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
2018-12-04T04:05:25.010032Z 2 [Note] Slave I/O thread for channel '': connected to master 'rpl_user@192.168.55.171:3306',replication started in log 'FIRST' at position 4

You can then verify the replication status with the following statement:

$ docker exec -it mysql-slave-1h mysql -uroot -p -e 'show slave status\G'
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                    SQL_Delay: 3600
                Auto_Position: 1
...

At this point, our 15-minute and 1-hour MySQL delayed slave containers are replicating from the master and our architecture is looking something like this:

6-hour Delayed Slave

Prepare the MySQL configuration file for our 6-hour delayed slave:

$ vim /storage/mysql-slave-6h/mysql.conf.d/my.cnf

And add the following lines:

[mysqld]
server_id=10006
binlog_format=ROW
log_bin=binlog
log_slave_updates=1
gtid_mode=ON
enforce_gtid_consistency=1
relay_log=relay-bin
expire_logs_days=7
read_only=ON

** The server-id value we used for this slave is 10006.

Next, under /storage/mysql-slave-6h/sql directory, create two SQL files, one to RESET MASTER (1reset_master.sql) and another one to establish the replication link using CHANGE MASTER statement (3setup_slave.sql).

Create a text file 1reset_master.sql and add the following line:

RESET MASTER;

Create a text file 3setup_slave.sql and add the following lines:

CHANGE MASTER TO
  MASTER_HOST = '192.168.55.171',
  MASTER_USER = 'rpl_user',
  MASTER_PASSWORD = 'YlgSH6bLLy',
  MASTER_AUTO_POSITION = 1,
  MASTER_DELAY=21600;
START SLAVE;

MASTER_DELAY=21600 is equal to 6 hours (in seconds). Then copy the backup file taken from our master (and already transferred to our Docker host) into the "sql" directory and rename it as 2mysqldump_complete.sql.gz:

$ cp ~/mysqldump_complete.sql.gz /storage/mysql-slave-6h/sql/2mysqldump_complete.sql.gz

The final look of our "sql" directory should be something like this:

$ pwd
/storage/mysql-slave-6h/sql

$ ls -1
1reset_master.sql
2mysqldump_complete.sql.gz
3setup_slave.sql

Take note that we prefix the SQL filename with an integer to determine the execution order when Docker initializes the MySQL container.

Once everything is in place, run the MySQL container for our 6-hour delayed slave:

$ docker run -d \
  --name mysql-slave-6h \
  -e MYSQL_ROOT_PASSWORD=password \
  --mount type=bind,source=/storage/mysql-slave-6h/datadir,target=/var/lib/mysql \
  --mount type=bind,source=/storage/mysql-slave-6h/mysql.conf.d,target=/etc/mysql/mysql.conf.d \
  --mount type=bind,source=/storage/mysql-slave-6h/sql,target=/docker-entrypoint-initdb.d \
  mysql:5.7

** The MYSQL_ROOT_PASSWORD value must be the same as the MySQL root password on the master.

The following lines are what we are looking for to verify if MySQL is running correctly and connected as a slave to our master (192.168.55.171):

$ docker logs -f mysql-slave-6h
...
2018-12-04T04:05:24.890244Z 0 [Note] mysqld: ready for connections.
Version: '5.7.24-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
2018-12-04T04:05:25.010032Z 2 [Note] Slave I/O thread for channel '': connected to master 'rpl_user@192.168.55.171:3306',replication started in log 'FIRST' at position 4

You can then verify the replication status with the following statement:

$ docker exec -it mysql-slave-6h mysql -uroot -p -e 'show slave status\G'
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                    SQL_Delay: 21600
                Auto_Position: 1
...

At this point, our 15-minute, 1-hour and 6-hour delayed slave containers are replicating correctly and our architecture is looking something like this:

Disaster Recovery Scenario

Let's say a user has accidentally dropped the wrong column from a big table. Consider the following statement executed on the master:

mysql> USE shop;
mysql> ALTER TABLE settings DROP COLUMN status;

If you are lucky enough to realize it immediately, you could use the 15-minute delayed slave to catch up to the moment just before the disaster happened and either promote it to become the new master, or export the missing data and restore it on the master.

Firstly, we have to find the binary log position before the disaster happened. Grab the time now() on the master:

mysql> SELECT now();
+---------------------+
| now()               |
+---------------------+
| 2018-12-04 14:55:41 |
+---------------------+

Then, get the active binary log file on the master:

mysql> SHOW MASTER STATUS;
+---------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                                                                                                                                                               |
+---------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| binlog.000004 | 20260658 |              |                  | 1560665e-ed2b-11e8-93fa-000c29b7f985:1-12031, 1b235f7a-d37b-11e8-9c3e-000c29bafe8f:1-62519, 1d8dc60a-e817-11e8-82ff-000c29bafe8f:1-326575, 791748b3-d37a-11e8-b03a-000c29b7f985:1-374 |
+---------------+----------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Using the same date format, extract the information that we want from the binary log, binlog.000004. We estimate the start time to read from the binlog around 20 minutes ago (2018-12-04 14:35:00) and filter the output to show 25 lines before the "drop column" statement:

$ mysqlbinlog --start-datetime="2018-12-04 14:35:00" --stop-datetime="2018-12-04 14:55:41" /var/lib/mysql/binlog.000004 | grep -i -B 25 "drop column"
'/*!*/;
# at 19379172
#181204 14:54:45 server id 1  end_log_pos 19379232 CRC32 0x0716e7a2  Table_map: `shop`.`settings` mapped to number 766
# at 19379232
#181204 14:54:45 server id 1  end_log_pos 19379460 CRC32 0xa6187edd  Write_rows: table id 766 flags: STMT_END_F

BINLOG '
tSQGXBMBAAAAPAAAACC0JwEAAP4CAAAAAAEABnNidGVzdAAHc2J0ZXN0MgAFAwP+/gME/nj+PBCi
5xYH
tSQGXB4BAAAA5AAAAAS1JwEAAP4CAAAAAAEAAgAF/+AYwwAAysYAAHc0ODYyMjI0NjI5OC0zNDE2
OTY3MjY5OS02MDQ1NTQwOTY1Ny01MjY2MDQ0MDcwOC05NDA0NzQzOTUwMS00OTA2MTAxNzgwNC05
OTIyMzM3NzEwOS05NzIwMzc5NTA4OC0yODAzOTU2NjQ2MC0zNzY0ODg3MTYzOTswMTM0MjAwNTcw
Ni02Mjk1ODMzMzExNi00NzQ1MjMxODA1OS0zODk4MDQwMjk5MS03OTc4MTA3OTkwNQEAAADdfhim
'/*!*/;
# at 19379460
#181204 14:54:45 server id 1  end_log_pos 19379491 CRC32 0x71f00e63  Xid = 622405
COMMIT/*!*/;
# at 19379491
#181204 14:54:46 server id 1  end_log_pos 19379556 CRC32 0x62b78c9e  GTID  last_committed=11507  sequence_number=11508  rbr_only=no
SET @@SESSION.GTID_NEXT= '1560665e-ed2b-11e8-93fa-000c29b7f985:11508'/*!*/;
# at 19379556
#181204 14:54:46 server id 1  end_log_pos 19379672 CRC32 0xc222542a  Query  thread_id=3162  exec_time=1  error_code=0
SET TIMESTAMP=1543906486/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
ALTER TABLE settings DROP COLUMN status

In the bottom few lines of the mysqlbinlog output, you can see the erroneous command, executed at position 19379556. The position we should restore to is one event before that, position 19379491. This is the binlog position up to which we want our delayed slave to apply events.

Then, on the chosen delayed slave, stop replication and start it again up to the fixed end position we figured out above:

$ docker exec -it mysql-slave-15m mysql -uroot -p

mysql> STOP SLAVE;
mysql> START SLAVE UNTIL MASTER_LOG_FILE = 'binlog.000004', MASTER_LOG_POS = 19379491;

Monitor the replication status and wait until Exec_Master_Log_Pos is equal to the Until_Log_Pos value; this could take some time. Once caught up, you should see the following (a small polling sketch follows right after the output):

$ docker exec -it mysql-slave-15m mysql -uroot -p -e 'SHOW SLAVE STATUS\G'
...
          Exec_Master_Log_Pos: 19379491
              Relay_Log_Space: 50552186
              Until_Condition: Master
               Until_Log_File: binlog.000004
                Until_Log_Pos: 19379491
...
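Rather than re-running the statement by hand, a small polling loop can wait until the slave reaches the target position. This is only a sketch, reusing the container name and root password from above:

$ until docker exec mysql-slave-15m mysql -uroot -ppassword -e 'SHOW SLAVE STATUS\G' \
    | awk '/Exec_Master_Log_Pos/{p=$2} /Until_Log_Pos/{u=$2} END{exit !(p==u && u>0)}'; do
    sleep 5
  done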

Finally verify if the missing data that we were looking for is there (column "status" still exists):

mysql> DESCRIBE shop.settings;
+--------+------------------+------+-----+---------+----------------+
| Field  | Type             | Null | Key | Default | Extra          |
+--------+------------------+------+-----+---------+----------------+
| id     | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| sid    | int(10) unsigned | NO   | MUL | 0       |                |
| param  | varchar(100)     | NO   |     |         |                |
| value  | varchar(255)     | NO   |     |         |                |
| status | int(11)          | YES  |     | 1       |                |
+--------+------------------+------+-----+---------+----------------+

Then export the table from our slave container and transfer it to the master server:

$ docker exec -it mysql-slave-15m mysqldump -uroot -ppassword --single-transaction shop settings > shop_settings.sql

Drop the problematic table and restore it back on the master:

$ mysql -uroot -p -e 'DROP TABLE shop.settings'
$ mysql -uroot -p shop < shop_settings.sql

We have now recovered our table back to its original state before the disastrous event. To summarize, delayed replication can be used for several purposes:

  • To protect against user mistakes on the master. A DBA can roll back a delayed slave to the time just before the disaster.
  • To test how the system behaves when there is a lag. For example, in an application, a lag might be caused by a heavy load on the slave. However, it can be difficult to generate this load level. Delayed replication can simulate the lag without having to simulate the load. It can also be used to debug conditions related to a lagging slave.
  • To inspect what the database looked like in the past, without having to reload a backup. For example, if the delay is one week and the DBA needs to see what the database looked like before the last few days' worth of development, the delayed slave can be inspected.
Final Thoughts

With Docker, running multiple MySQL instances on the same physical host can be done efficiently. You may use orchestration tools like Docker Compose and Swarm to simplify the multi-container deployment compared to the manual steps shown in this blog post; a scripted variant is also sketched below.
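If you prefer to stay with plain docker run, the three deployments above can be collapsed into a single loop, since each /storage/mysql-slave-* directory already carries its own my.cnf and setup SQL with the right MASTER_DELAY. A rough sketch, using the same paths, image and password as in this post:

$ for name in 15m 1h 6h; do
    docker run -d --name mysql-slave-${name} \
      -e MYSQL_ROOT_PASSWORD=password \
      --mount type=bind,source=/storage/mysql-slave-${name}/datadir,target=/var/lib/mysql \
      --mount type=bind,source=/storage/mysql-slave-${name}/mysql.conf.d,target=/etc/mysql/mysql.conf.d \
      --mount type=bind,source=/storage/mysql-slave-${name}/sql,target=/docker-entrypoint-initdb.d \
      mysql:5.7
  done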

Tags:  MySQL MariaDB docker replication delayed slave

MySQL 8 and the FRM drop.

(What I should keep in mind in case of disaster)

Retrieving and maintaining, in SQL format, the definition of all tables in a database is one of the best practices we should all adopt.

Keeping that under version control is another best practice to keep in mind.
Doing so may seem redundant, but it becomes a life saver in several situations:
from the need to review what has historically changed in a table, and to know who changed what and why, to the moment you need to recover your data and your beloved MySQL instance is not able to start.

But let us be honest: only a few of us do the right thing, and even fewer keep that information up to date. Given that, what can we do when we need to discover or recover a table structure? From the beginning, MySQL has used external files to describe the internal structure. For instance, if I have a schema named windmills and a table named wmillAUTOINC1, on the filesystem I will see:

-rw-r-----. 1 mysql mysql   8838 Mar 14 2018 wmillAUTOINC1.frm
-rw-r-----. 1 mysql mysql 131072 Mar 14 2018 wmillAUTOINC1.ibd


The .ibd file contains the data, while the .frm file contains the structure information.
Leaving aside ANY discussion about whether this is safe, transactional and so on… when we had a major crash with data corruption, this approach has been helpful.
Being able to read from this file was the easiest way to get the information we needed.
Simple tools like DBSake made the task quite trivial, and also allowed us to script it when we needed to run long, complex, tedious data recoveries:

[root@master1 windmills]# /opt/tools/dbsake frmdump wmillAUTOINC1.frm
--
-- Table structure for table `wmillAUTOINC1`
-- Created with MySQL Version 5.7.20
--

CREATE TABLE `wmillAUTOINC1` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT,
  `uuid` char(36) COLLATE utf8_bin NOT NULL,
  `millid` smallint(6) NOT NULL,
  `kwatts_s` int(11) NOT NULL,
  `date` date NOT NULL,
  `location` varchar(50) COLLATE utf8_bin NOT NULL,
  `active` tinyint(2) NOT NULL DEFAULT '1',
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `strrecordtype` char(3) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX_millid` (`millid`,`active`),
  KEY `IDX_active` (`id`,`active`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=DYNAMIC

Of course, if the frm file was also corrupted, we could try to get the information from the ibdata dictionary.
If that was corrupted as well (trust me, I have seen all of these)… well, the last resort was hoping the customer had a recent table definition stored somewhere; but as mentioned before, we are not that diligent, are we?
Now, in MySQL 8 we no longer have FRM files; they were dropped.
Even more interesting is the fact that we do not have the same dictionary either: most of what we knew has changed, including the dictionary location. So what can be done?

Well, Oracle has moved the FRM information, and more, to what is called Serialized Dictionary Information (SDI). The SDI is written INSIDE the ibd file and represents a redundant copy of the information contained in the data dictionary.

The SDI is updated/modified by DDL operations on tables that reside in that tablespace.

That is it: if you use file-per-table, that file will normally contain ONLY the SDI for that table, but if you have multiple tables in one tablespace, the SDI information will refer to ALL of them.

To extract this information from IBD files, Oracle provides a utility called ibd2sdi.

The utility parses the SDI information and reports it as JSON, which can be easily manipulated to extract or rebuild the table definition.
One exception is partitioned tables: the SDI information is contained ONLY in the first partition, and if you drop that partition, it is moved to the next one. I will show that later.

 

Let us now see how it works.
In the next examples I will look for table names, attributes and data types, starting from the dictionary tables.
To get the info, I will do this:

/opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/mysql.ibd |\
jq '.[]?|.[]?|.dd_object?|("------------------------------------"?,"TABLE NAME = ",.name?,"****",(.columns?|.[]?|(.name?,.column_type_utf8?)))'

The result will be something like:

"------------------------------------"
"TABLE NAME = "
"tables"
"****"
"id"
"bigint(20) unsigned"
"schema_id"
"bigint(20) unsigned"
"name"
"varchar(64)"
"type"
"enum('BASE TABLE','VIEW','SYSTEM VIEW')"
"engine"
"varchar(64)"
"mysql_version_id"
"int(10) unsigned"
"row_format"
"enum('Fixed','Dynamic','Compressed','Redundant','Compact','Paged')"
"collation_id"
"bigint(20) unsigned"
"comment"
"varchar(2048)"
<snip>
"------------------------------------"
"TABLE NAME = "
"tablespaces"
"****"
"id"
"bigint(20) unsigned"
"name"
"varchar(259)"
"options"
"mediumtext"
"se_private_data"
"mediumtext"
"comment"
"varchar(2048)"
"engine"
"varchar(64)"
"DB_TRX_ID"
""
"DB_ROLL_PTR"
""

 

You cannot see it here because I cut the output for brevity, but if you run the command yourself you will see that it retrieves the information for ALL the tables residing in that IBD file. The other thing I hope you noticed is that I am NOT parsing ibdata, but mysql.ibd. Why? Because the dictionary was moved out of ibdata and now lives in mysql.ibd. Look what happens if I try to parse ibdata:

[root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/ibdata1 |jq '.'
[INFO] ibd2sdi: SDI is empty.

Be very careful not to mess up your mysql.ibd file. Now, what can I do to get information about my wmillAUTOINC1 table in MySQL 8? That is quite simple:

/opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINC.ibd |jq '.' [ "ibd2sdi", { "type": 1, "id": 1068, "object": { "mysqld_version_id": 80013, "dd_version": 80013, "sdi_version": 1, "dd_object_type": "Table", "dd_object": { "name": "wmillAUTOINC", "mysql_version_id": 80011, "created": 20180925095853, "last_altered": 20180925095853, "hidden": 1, "options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=1;row_type=2;stats_auto_recalc=0;stats_sample_pages=0;", "columns": [ { "name": "id", "type": 9, "is_nullable": false, "is_zerofill": false, "is_unsigned": false, "is_auto_increment": true, "is_virtual": false, "hidden": 1, "ordinal_position": 1, "char_length": 11, "numeric_precision": 19, "numeric_scale": 0, "numeric_scale_null": false, "datetime_precision": 0, "datetime_precision_null": 1, "has_no_default": false, "default_value_null": false, "srs_id_null": true, "srs_id": 0, "default_value": "AAAAAAAAAAA=", "default_value_utf8_null": true, "default_value_utf8": "", "default_option": "", "update_option": "", "comment": "", "generation_expression": "", "generation_expression_utf8": "", "options": "interval_count=0;", "se_private_data": "table_id=1838;", "column_key": 2, "column_type_utf8": "bigint(11)", "elements": [], "collation_id": 83, "is_explicit_collation": false }, <SNIP> "indexes": [ { "name": "PRIMARY", "hidden": false, "is_generated": false, "ordinal_position": 1, "comment": "", "options": "flags=0;", "se_private_data": "id=2261;root=4;space_id=775;table_id=1838;trx_id=6585972;", "type": 1, "algorithm": 2, "is_algorithm_explicit": false, "is_visible": true, "engine": "InnoDB", <Snip> ], "foreign_keys": [], "partitions": [], "collation_id": 83 } } }, { "type": 2, "id": 780, "object": { "mysqld_version_id": 80011, "dd_version": 80011, "sdi_version": 1, "dd_object_type": "Tablespace", "dd_object": { "name": "windmills/wmillAUTOINC", "comment": "", "options": "", "se_private_data": "flags=16417;id=775;server_version=80011;space_version=1;", "engine": "InnoDB", "files": [ { "ordinal_position": 1, "filename": "./windmills/wmillAUTOINC.ibd", "se_private_data": "id=775;" } ] } } } ]

The JSON will contain:

  • A section describing the DB object at high level
  • Array of columns and related information
  • Array of indexes
  • Partition information (not here but in the next example)
  • Table space information

That is a lot more detail than we had in the FRM, and it is quite relevant and interesting information as well. Once the SDI is extracted, any JSON parsing tool or script can generate the information needed for the SQL DDL (a small sketch is shown just before the Conclusions). I mentioned partitions; let us cover them a bit more, given they can be tricky. As mentioned, the SDI information is present ONLY in the first partition; all the others contain ONLY the tablespace information. Given that, the first thing to do is to identify which partition is the first one… or simply try to access them all, and when you are able to get the details, just extract them. The process is the same:

[root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170301.ibd |jq '.' [ "ibd2sdi", { "type": 1, "id": 1460, "object": { "mysqld_version_id": 80013, "dd_version": 80013, "sdi_version": 1, "dd_object_type": "Table", "dd_object": { "name": "wmillAUTOINCPART", "mysql_version_id": 80013, "created": 20181125110300, "last_altered": 20181125110300, "hidden": 1, "options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=1;row_type=2;stats_auto_recalc=0;stats_sample_pages=0;", "columns": [<snip> "schema_ref": "windmills", "se_private_id": 18446744073709552000, "engine": "InnoDB", "last_checked_for_upgrade_version_id": 80013, "comment": "", "se_private_data": "autoinc=31080;version=2;", "row_format": 2, "partition_type": 7, "partition_expression": "to_days(`date`)", "partition_expression_utf8": "to_days(`date`)", "default_partitioning": 1, "subpartition_type": 0, "subpartition_expression": "", "subpartition_expression_utf8": "", "default_subpartitioning": 0, ], <snip> "foreign_keys": [], "partitions": [ { "name": "PT20170301", "parent_partition_id": 18446744073709552000, "number": 0, "se_private_id": 1847, "description_utf8": "736754", "engine": "InnoDB", "comment": "", "options": "", "se_private_data": "autoinc=0;version=0;", "values": [ { "max_value": false, "null_value": false, "list_num": 0, "column_num": 0, "value_utf8": "736754" } ],

The difference, as you can see, is the section related to partitions and subpartitions, which is filled with all the details you may need to recreate them. We will have:

  • Partition type
  • Partition expression
  • Partition values
  • …more

The same applies to subpartitions. Now see what happens if I parse the second partition:

[root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170401.ibd |jq '.' [ "ibd2sdi", { "type": 2, "id": 790, "object": { "mysqld_version_id": 80011, "dd_version": 80011, "sdi_version": 1, "dd_object_type": "Tablespace", "dd_object": { "name": "windmills/wmillAUTOINCPART#P#PT20170401", "comment": "", "options": "", "se_private_data": "flags=16417;id=785;server_version=80011;space_version=1;", "engine": "InnoDB", "files": [ { "ordinal_position": 1, "filename": "./windmills/wmillAUTOINCPART#P#PT20170401.ibd", "se_private_data": "id=785;" } ] } } } ]

I get only the information about the tablespace, not the table. As promised, let me now show you what happens if I drop the first partition, so that the second one becomes the first:

(root@localhost) [windmills]>alter table wmillAUTOINCPART drop partition PT20170301; Query OK, 0 rows affected (1.84 sec) Records: 0 Duplicates: 0 Warnings: 0 [root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170401.ibd |jq '.'|more [ "ibd2sdi", { "type": 1, "id": 1461, "object": { "mysqld_version_id": 80013, "dd_version": 80013, "sdi_version": 1, "dd_object_type": "Table", "dd_object": { "name": "wmillAUTOINCPART", "mysql_version_id": 80013, "created": 20181129130834, "last_altered": 20181129130834, "hidden": 1, "options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=1;row_type=2;stats_auto_recalc=0;stats_sample_pages=0;", "columns": [ { "name": "id", "type": 9, "is_nullable": false, "is_zerofill": false, "is_unsigned": false, "is_auto_increment": true, "is_virtual": false, "hidden": 1, "ordinal_position": 1,

As also mentioned before, the SDI is updated on each DDL, and here we go: I now have all the information in the NOW FIRST partition. Please note the created attribute, comparing the first time I queried the other partition with what I get now:

/opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170301.ibd
    "created": 20181125110300,

/opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170401.ibd
    "created": 20181129130834,

  To be clear, the second created timestamp is from NOW, i.e. the moment I dropped the other partition (PT20170301), and it belongs to PT20170401.
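As mentioned earlier, once the SDI is in JSON form, a small script can turn it back into a rough DDL skeleton. The following is only a sketch, not a full generator: it pulls the table name plus the column names and types out of the Table object, leaves a trailing comma, and ignores indexes and partitions. The jq paths match the output shown above, and the hidden==1 filter is an assumption meant to skip InnoDB's internal columns (such as DB_TRX_ID); check both against your own ibd2sdi output.

$ /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINC.ibd \
  | jq -r '.[]? | select(type=="object") | select(.object.dd_object_type=="Table") | .object.dd_object
           | "CREATE TABLE `" + .name + "` (",
             (.columns[] | select(.hidden==1) | "  `" + .name + "` " + .column_type_utf8 + ","),
             ");"'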

  Conclusions

In the end, this solution is definitely more powerful than the FRM files.
It allows us to parse the file and identify the table definition more easily, and to get much more detail and information.

Problems will arise when, and IF, the IBD file becomes corrupt.

As per the manual: "For InnoDB, an SDI record requires a single index page, which is 16KB in size by default. However, SDI data is compressed to reduce the storage footprint."

Which means that for each table I have one page, if I equate one SDI record to one table.

Which also means that, in case of IBD corruption, I should (likely) still be able to read those pages, barring really bad luck.

I still wonder how the size of an IBD file affects SDI retrieval; I have not tried it yet, but I will let you know.

On another note, I am working on a script to facilitate the SQL generation; it is still not ready, but you can find it here.

One last note, but keep this in mind! It is stated in the manual, in a somewhat hidden place and in small letters: DDL operations take longer due to writing to storage, undo logs, and redo logs instead of .frm files.

  References

https://stedolan.github.io/jq/

https://dev.mysql.com/doc/refman/8.0/en/ibd2sdi.html

https://dev.mysql.com/doc/refman/8.0/en/serialized-dictionary-information.html

https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-limitations.html

Percona Server for MySQL 5.7.24-26 Is Now Available

Percona announces the release of Percona Server for MySQL 5.7.24-26 on December 4, 2018 (downloads are available here and from the Percona Software Repositories). This release merges changes of MySQL 5.7.24, including all the bug fixes in it. Percona Server for MySQL 5.7.24-26 is now the current GA release in the 5.7 series. All of Percona’s software is open-source and free.

This release includes fixes to the following upstream CVEs (Common Vulnerabilities and Exposures): CVE-2016-9843, CVE-2018-3155, CVE-2018-3143, CVE-2018-3156, CVE-2018-3251, CVE-2018-3133, CVE-2018-3144, CVE-2018-3185, CVE-2018-3247, CVE-2018-3187, CVE-2018-3174, CVE-2018-3171. For more information, see Oracle Critical Patch Update Advisory – October 2018.

Improvements
  • PS-4790: Improve user statistics accuracy
Bugs Fixed
  • Slave replication could break if upstream bug #74145 (FLUSH LOGS improperly disables the logging if the log file cannot be accessed) occurred in master. Bug fixed PS-1017 (Upstream #83232).
  • Setting the tokudb_last_lock_timeout variable via the command line could cause the server to stop working when the actual timeout took place. Bug fixed PS-4943.
  • Dropping a TokuDB table with non-alphanumeric characters could lead to a crash. Bug fixed PS-4979.
  • When using the MyRocks storage engine, the server could crash after running ALTER TABLE DROP INDEX on a slave. Bug fixed PS-4744.
  • The audit log could be corrupted when the audit_log_rotations variable was changed at runtime. Bug fixed PS-4950.

Other Bugs Fixed

  • PS-4781: sql_yacc.yy uses SQLCOM_SELECT instead of SQLCOM_SHOW_XXXX_STATS
  • PS-4881: Add LLVM/clang 7 to Travis-CI
  • PS-4825: Backport MTR fixes from 8.0
  • PS-4998: Valgrind: compilation fails with: writing to ‘struct buf_buddy_free_t’ with no trivial copy-assignment
  • PS-4980: Valgrind: Syscall param write(buf) points to uninitialised byte(s): Event_encrypter::encrypt_and_write()
  • PS-4982: Valgrind: Syscall param io_submit(PWRITE) points to uninitialised byte(s): buf_dblwr_write_block_to_datafile()
  • PS-4983: Valgrind: Syscall param io_submit(PWRITE) points to uninitialised byte(s): buf_flush_write_block_low()
  • PS-4951: Many libc-related Valgrind errors on CentOS7
  • PS-5012: Valgrind: misused UNIV_MEM_ALLOC after ut_zalloc_nokey
  • PS-4908: UBSan and valgrind errors with encrypted temporary files
  • PS-4532: Replace obsolete HAVE_purify with HAVE_VALGRIND in ha_rocksdb.cc
  • PS-4955: Backport mysqld fixes for valgrind warnings from 8.0
  • PS-4529: MTR: index_merge_rocksdb2 inadvertently tests InnoDB instead of MyRocks
  • PS-5056: handle_fatal_signal (sig=11) in ha_tokudb::write_row
  • PS-5084: innodb_buffer_pool_size is an uninitialized variable
  • PS-4836: Missing PFS signed variable aggregation
  • PS-5033: rocksdb.show_engine: Result content mismatch
  • PS-5034: rocksdb.rocksdb: Result content mismatch
  • PS-5035: rocksdb.show_table_status: 1051: Unknown table ‘db_new’

Find the release notes for Percona Server for MySQL 5.7.24-26 in our online documentation. Report bugs in the Jira bug tracker.

 

pre-FOSDEM MySQL Day 2019

For the third year in a row, we will take advantage of the mass presence of our MySQL Engineers during FOSDEM to organize the pre-FOSDEM MySQL Day.

The program of this 3rd edition is already on track; thank you to all the speakers who have already confirmed their participation.

Friday, 1st February (Start-End | Event | Speaker (Company) | Topic):

  • 09:30-10:00 | MySQL Community Team Welcome
  • 10:00-10:30 | MySQL Shell – A DevOps-engineer day with MySQL’s development and administration tool | Miguel Araújo (Oracle) | MySQL Shell
  • 10:35-11:05 | MySQL Shell : the best DBA tool ? – How to use the MySQL Shell as a framework for DBAs | Frédéric Descamps (Oracle) | MySQL Shell
  • 11:05-11:25 | Coffee Break
  • 11:25-11:55 | Tuning MySQL 8.0 InnoDB for High Load | Dimitri Kravtchuk (Oracle) | MySQL 8.0
  • 12:00-12:30 | MySQL 8.0: advance tuning with Resource Group | Marco Tusa (Percona) | MySQL 8.0
  • 12:35-13:30 | Lunch Break
  • 13:30-14:00 | New index features in MySQL 8.0 | Erik Frøseth (Oracle) | MySQL 8.0
  • 14:05-14:35 | Optimizer Histograms: When they Help and When Do Not? | Sveta Smirnova (Percona) | MySQL 8.0
  • 14:40-15:10 | Regular expressions with full Unicode support – The ins and outs of the new regular expression functions and the ICU library | Martin Hansson (Oracle) | MySQL 8.0
  • 15:15-15:40 | Coffee Break
  • 15:40-16:10 | Mirroring MySQL traffic with ProxySQL: use cases | René Cannaò (ProxySQL) | ProxySQL
  • 16:15-16:45 | Automating MySQL operations with containers | Giuseppe Maxia (VMware) | Containers
  • 16:50-17:20 | 8 Group Replication Features That Will Make You Smile | Tiago Vale (Oracle) | Replication
  • 17:25-17:55 | Document Store & PHP | David Stokes (Oracle) | Document Store

As you can see, discovering MySQL 8.0, the Shell and the Document Store will be the main focus of the day, and we will also have the chance to hear Community members sharing their experience.

Places are limited and therefore registration is mandatory (please register only if you can come). Registration is open until January 25th.

Just after, the famous and popular MySQL Community dinner will take place at the same location!

Looking forward to seeing you all during that awesome weekend!
