Planet MySQL

Hot Backup for MyRocks (RocksDB) Using Percona XtraBackup

XtraBackup now supports hot backup for MyRocks! Yes, you heard that right: this is one of the most awaited features for XtraBackup. It arrived with the latest release, Percona XtraBackup 8.0.6, and is supported only with Percona Server version 8.0.15-6 or higher; you can see the detailed release notes here.

MyRocks is getting a lot of attention now because of its much-improved write performance and compression. We also plan to publish a detailed blog on MyRocks features and limitations.

We shall proceed to test the backup and restore of MyRocks.

Environment:

  • OS: Debian GNU/Linux 9 (stretch)
  • Cores: 14
  • RAM: 36G
  • Disk type: HDD
  • MySQL version: 8.0.15-6 Percona Server (GPL), Release '6', Revision '63abd08'
  • XtraBackup version: 8.0.6

Below are the installed packages for both Percona Server and XtraBackup:

ii  libperconaserverclient21:amd64 8.0.15-6-1.stretch amd64  Percona Server database client library
ii  percona-server-client          8.0.15-6-1.stretch amd64  Percona Server database client binaries
ii  percona-server-common          8.0.15-6-1.stretch amd64  Percona Server database common files (e.g. /etc/mysql/my.cnf)
ii  percona-server-rocksdb         8.0.15-6-1.stretch amd64  MyRocks storage engine plugin for Percona Server
ii  percona-server-server          8.0.15-6-1.stretch amd64  Percona Server database server binaries
ii  percona-xtrabackup-80          8.0.6-1.stretch    amd64  Open source backup tool for InnoDB and XtraDB

With the above setup, I have used both RocksDB and InnoDB on a single database, as below.

+---------+-------------------+--------------+-----------------+
| ENGINE  | TABLE_NAME        | TABLE_SCHEMA | Total size(GB)  |
+---------+-------------------+--------------+-----------------+
| ROCKSDB | customer          | erp          |  1.193671055138 |
| InnoDB  | customerAttribute | erp          | 16.391601562500 |
+---------+-------------------+--------------+-----------------+

Note: To do bulk loading into MyRocks, I enabled the following session variables:

mysql> set rocksdb_bulk_load=ON;
mysql> set rocksdb_commit_in_the_middle=ON;
mysql> set sql_log_bin=0;

Row count of both tables (before backup):

mysql> select count(*) from customer; (MyRocks)
+----------+
| count(*) |
+----------+
| 37605935 |
+----------+

mysql> select count(*) from customerAttribute; (InnoDB)
+-----------+
| count(*)  |
+-----------+
| 126022626 |
+-----------+

MyRocks Disk Files:

The MyRocks datadir on disk, i.e. “.rocksdb”, contains two key file types:

  • WAL files (redo)
    WAL (write-ahead log) files act as redo logs; they are automatically removed once their data is fully synced to the data files, i.e. the SST files.
  • SST files
    SST files are the actual data files; they are where compaction happens across the different levels (L0…Ln).
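On a running server, these two file types can be inspected directly; a quick sketch, assuming the default datadir used later in this post:

```shell
# WAL (*.log) and SST (*.sst) files live under the hidden .rocksdb
# directory inside the MySQL datadir (path assumed from this setup)
ls -lh /var/lib/mysql/.rocksdb/*.log   # write-ahead logs (redo)
ls -lh /var/lib/mysql/.rocksdb/*.sst   # data files, compacted across levels L0...Ln
```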

To better visualise the on-disk files of MyRocks, I am sharing an image by Percona CTO Vadim Tkachenko.

Full Backup:

Below is the XtraBackup command used to take a full backup:

:~# xtrabackup --target-dir=/mysqlbackup/backup/ --backup
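If the connection credentials are not picked up from a defaults file, they can be passed explicitly; the user, password and thread count below are illustrative and not from the original run:

```shell
# Hypothetical credentials; --parallel copies multiple files concurrently
xtrabackup --backup \
  --target-dir=/mysqlbackup/backup/ \
  --user=backup_user --password='secret' \
  --parallel=4
```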

Now let's see some of the logs:

Backup Of InnoDB Tables:

As tables of different engines co-exist, XtraBackup completes the backup of the InnoDB engine tables first, as below:

190512 18:00:29 >> log scanned up to (22897434392)
xtrabackup: Generating a list of tablespaces
Directories to scan '/var/lib/mysql/;./;.'
Scanning '/var/lib/mysql/'
Completed space ID check of 2 files.
Allocated tablespace ID 2 for erp/customerAttribute, old maximum was 0
Using undo tablespace '/var/lib/mysql/undo_001'.
Using undo tablespace '/var/lib/mysql/undo_002'.
Opened 2 existing undo tablespaces.
190512 18:00:29 [01] Copying /var/lib/mysql/ibdata1 to /mysqlbackup/backup/ibdata1
190512 18:00:29 [01] ...done
190512 18:00:29 [01] Copying /var/lib/mysql/sys/sys_config.ibd to /mysqlbackup/backup/sys/sys_config.ibd
190512 18:00:29 [01] ...done
190512 18:00:29 [01] Copying /var/lib/mysql/erp/customerAttribute.ibd to /mysqlbackup/backup/erp/customerAttribute.ibd
190512 18:00:30 >> log scanned up to (22897434392)
190512 18:00:31 >> log scanned up to (22897434392)
:
190512 18:02:31 >> log scanned up to (22897434392)
190512 18:02:31 [01] ...done
190512 18:02:31 [01] Copying /var/lib/mysql/mysql.ibd to /mysqlbackup/backup/mysql.ibd
190512 18:02:31 [01] ...done
190512 18:02:31 [01] Copying ./undo_002 to /mysqlbackup/backup/undo_002
190512 18:02:31 [01] ...done
190512 18:02:31 [01] Copying ./undo_001 to /mysqlbackup/backup/undo_001
190512 18:02:31 [01] ...done

Backup of Non-InnoDB System Tables:

Once done with the InnoDB tables, it proceeds to copy the system tables of the various engine types, as below, along with the metadata file of the MyRocks table, “/mysqlbackup/backup/erp/customer_365.sdi”.

190512 18:02:32 Executing LOCK TABLES FOR BACKUP...
190512 18:02:32 Starting to backup non-InnoDB tables and files
190512 18:02:32 [01] Copying mysql/general_log_209.sdi to /mysqlbackup/backup/mysql/general_log_209.sdi
190512 18:02:32 [01] ...done
190512 18:02:32 [01] Copying mysql/general_log.CSM to /mysqlbackup/backup/mysql/general_log.CSM
190512 18:02:32 [01] ...done
190512 18:02:32 [01] Copying mysql/general_log.CSV to /mysqlbackup/backup/mysql/general_log.CSV
190512 18:02:32 [01] ...done
190512 18:02:32 [01] Copying mysql/slow_log_210.sdi to /mysqlbackup/backup/mysql/slow_log_210.sdi
190512 18:02:32 [01] ...done
:
:
190512 18:02:32 [01] Copying erp/customer_365.sdi to /mysqlbackup/backup/erp/customer_365.sdi
190512 18:02:32 [01] ...done
190512 18:02:32 Finished backing up non-InnoDB tables and files

Data Copy for MyRocks:

Now let's see how MyRocks engine tables are backed up:

  • XtraBackup first creates a “checkpoint” (LSN) for RocksDB to make a consistent copy of the data; here the LSN is “22897434402”.
  • It then proceeds to copy the WAL logs, with respect to the checkpoint LSN “22897434402”.
  • It then starts copying the SST files, with respect to the checkpoint.
  • Finally, it removes the RocksDB checkpoint and records the consistent binlog position along with it.
190512 18:02:32 xtrabackup: Creating RocksDB checkpoint
190512 18:02:33 >> log scanned up to (22897434402)
190512 18:02:34 >> log scanned up to (22897434402)
190512 18:02:35 >> log scanned up to (22897434402)
190512 18:02:36 >> log scanned up to (22897434402)
190512 18:02:36 Executing FLUSH NO_WRITE_TO_BINLOG BINARY LOGS
190512 18:02:36 Selecting LSN and binary log position from p_s.log_status
190512 18:02:36 [00] Copying /var/log/mysql/ekl-logis-archiva-325561-binary.000011 to /mysqlbackup/backup/ekl-logis-archiva-325561-binary.000011 up to position 155
190512 18:02:36 [00] ...done
190512 18:02:36 [00] Writing /mysqlbackup/backup/ekl-logis-archiva-325561-binary.index
190512 18:02:36 [00] ...done
190512 18:02:36 [00] Copying ./.rocksdb/000364.log to /mysqlbackup/backup/.rocksdb/000364.log up to position 5056023
190512 18:02:36 [00] ...done
190512 18:02:36 [00] Copying ./.rocksdb/000367.log to /mysqlbackup/backup/.rocksdb/000367.log up to position 435553
190512 18:02:36 [00] ...done
:
190512 18:03:11 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
xtrabackup: The latest check point (for incremental): '22897434402'
xtrabackup: Stopping log copying thread at LSN 22897434402.
190512 18:03:11 >> log scanned up to (22897434402)
190512 18:03:11 >> log scanned up to (22897434402)
190512 18:03:11 Executing UNLOCK TABLES
190512 18:03:11 All tables unlocked
190512 18:03:11 [00] Copying ib_buffer_pool to /mysqlbackup/backup/ib_buffer_pool
190512 18:03:11 [00] ...done
190512 18:03:11 [00] Copying .xtrabackup_rocksdb_checkpoint_1557664352626/000409.sst to /mysqlbackup/backup/.rocksdb/000409.sst
190512 18:03:12 [00] ...done
:
190512 18:03:22 [00] Copying .xtrabackup_rocksdb_checkpoint_1557664352626/CURRENT to /mysqlbackup/backup/.rocksdb/CURRENT
190512 18:03:22 [00] ...done
190512 18:03:22 xtrabackup: Removing RocksDB checkpoint
190512 18:03:22 Backup created in directory '/mysqlbackup/backup/'
MySQL binlog position: filename 'erp-mydbops-archiva-325561-binary.000011', position '155'
190512 18:03:22 [00] Writing /mysqlbackup/backup/backup-my.cnf
190512 18:03:22 [00] ...done
190512 18:03:22 [00] Writing /mysqlbackup/backup/xtrabackup_info
190512 18:03:22 [00] ...done
xtrabackup: Transaction log of lsn (22897434392) to (22897434402) was copied.
190512 18:03:22 completed OK!

Preparing Backup:

Now let's proceed to prepare the backup, which runs the crash-recovery process to make the backup consistent up to the scanned LSN (22897434402).

Below is the command I used for the prepare:

# xtrabackup --target-dir=/mysqlbackup/backup --use-memory=1G --prepare

I am not providing the entire log; I am just highlighting some of the major tasks:

xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 1073741824 bytes for buffer pool (set by --use-memory parameter)
:
:
The log sequence number 19338764 in the system tablespace does not match the log sequence number 22897434392 in the ib_logfiles!
Database was not shutdown normally! Starting crash recovery.
Starting to parse redo log at lsn = 22897434182, whereas checkpoint_lsn = 22897434392
Doing recovery: scanned up to log sequence number 22897434402
Log background threads are being started...
Applying a batch of 1 redo log records ... 100%
Apply batch completed!
xtrabackup: Last MySQL binlog file position 847444434, file name erp-mydbops-archiva-325561-binary.000010
Using undo tablespace './undo_001'.
Using undo tablespace './undo_002'.
Opened 2 existing undo tablespaces.
xtrabackup: Last MySQL binlog file position 847444434, file name erp-mydbops-archiva-325561-binary.000010
:
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
FTS optimize thread exiting.
Starting shutdown...
Log background threads are being closed...
Shutdown completed; log sequence number 22897434402
Number of pools: 1
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup: innodb_log_group_home_dir = .
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 1073741824
:
:
Completed space ID check of 5 files.
Initializing buffer pool, total size = 1.000000G, instances = 1, chunk size = 128.000000M
Completed initialization of buffer pool
page_cleaner coordinator priority: -20
page_cleaner worker priority: -20
page_cleaner worker priority: -20
page_cleaner worker priority: -20
Setting log file ./ib_logfile101 size to 16777216 MB
Progress in MB: 100 200 300 400 500 600 700 800 900 1000
Setting log file ./ib_logfile1 size to 16777216 MB
Progress in MB: 100 200 300 400 500 600 700 800 900 1000
Renaming log file ./ib_logfile101 to ./ib_logfile0
New log files created, LSN=22897434636
Log background threads are being started...
Applying a batch of 0 redo log records ...
Apply batch completed!
Using undo tablespace './undo_001'.
Using undo tablespace './undo_002'.
Opened 2 existing undo tablespaces.
Removed temporary tablespace data file: "ibtmp1"
:
:
Starting shutdown...
Log background threads are being closed...
Shutdown completed; log sequence number 22897434636
190512 18:09:28 completed OK!

The backup prepare has completed, and the backup is now ready to restore.

Backup directory and its contents

Now let’s have a look at the backup directory and its contents:

I am using the -a option of the ls command, since within the backup directory the MyRocks data files are stored hidden under “.rocksdb”.

drwxr-xr-x 4 root root  103 May 10 16:27 ..
drwxr-x--- 2 root root   28 May 12 18:00 sys
-rw-r----- 1 root root  24M May 12 18:02 mysql.ibd
-rw-r----- 1 root root  10M May 12 18:02 undo_002
-rw-r----- 1 root root  10M May 12 18:02 undo_001
drwxr-x--- 2 root root  143 May 12 18:02 mysql
drwxr-x--- 2 root root 8.0K May 12 18:02 performance_schema
drwxr-x--- 2 root root   59 May 12 18:02 erp
-rw-r----- 1 root root   54 May 12 18:02 erp-mydbops-archiva-325561-binary.index
-rw-r----- 1 root root  155 May 12 18:02 erp-mydbops-archiva-325561-binary.000011
-rw-r----- 1 root root   43 May 12 18:03 xtrabackup_binlog_info
-rw-r----- 1 root root  10K May 12 18:03 ib_buffer_pool
drwxr-x--- 2 root root 4.0K May 12 18:03 .rocksdb
-rw-r----- 1 root root  481 May 12 18:03 xtrabackup_info
-rw-r----- 1 root root  485 May 12 18:03 backup-my.cnf
-rw-r--r-- 1 root root    1 May 12 18:09 xtrabackup_master_key_id
-rw-r--r-- 1 root root   49 May 12 18:09 xtrabackup_binlog_pos_innodb
-rw-r----- 1 root root 8.0M May 12 18:09 xtrabackup_logfile
-rw-r----- 1 root root  101 May 12 18:09 xtrabackup_checkpoints
-rw-r----- 1 root root 1.0G May 12 18:09 ib_logfile1
-rw-r----- 1 root root 1.0G May 12 18:09 ib_logfile0
drwxr-xr-x 8 root root 4.0K May 12 18:09 .
-rw-r----- 1 root root  12M May 12 18:09 ibtmp1
-rw-r----- 1 root root  12M May 12 18:09 ibdata1
drwxr-x--- 2 root root    6 May 12 18:09 #innodb_temp
-rw-r----- 1 root root  248 May 12 18:09 xtrabackup_tablespaces

Backup Restore:

Now that the backup has been prepared and is ready to restore, I will proceed to stop the MySQL server, remove the contents of the datadir, and restore this backup.

Stopping server:

# service mysql stop
# service mysql status    (to check status)

Remove datadir:

While removing the contents, don't forget to remove the hidden “.rocksdb” folder as below, or it might cause conflicts while restoring.

# rm -rf /var/lib/mysql/*
# rm -rf /var/lib/mysql/.rocksdb


I am using the --copy-back option to copy the contents of the backup directory to the datadir. You can also use --move-back, or make a manual copy using the cp command.

# xtrabackup --target-dir=/mysqlbackup/backup --copy-back


xtrabackup: recognized client arguments: --target-dir=/mysqlbackup/backup --copy-back=1
xtrabackup version 8.0.6 based on MySQL server 8.0.14 Linux (x86_64) (revision id: c0a2d91)
190512 18:14:40 [01] Copying undo_001 to /var/lib/mysql/undo_001
190512 18:14:40 [01] ...done
190512 18:14:40 [01] Copying undo_002 to /var/lib/mysql/undo_002
190512 18:14:40 [01] ...done
190512 18:14:40 [01] Copying ib_logfile0 to /var/lib/mysql/ib_logfile0
190512 18:14:46 [01] ...done
:
:
190512 18:17:05 [00] Copying .rocksdb/000379.sst to /var/lib/mysql//.rocksdb/000379.sst
190512 18:17:05 [00] ...done
190512 18:17:05 [00] Copying .rocksdb/MANIFEST-000371 to /var/lib/mysql//.rocksdb/MANIFEST-000371
190512 18:17:05 [00] ...done
190512 18:17:05 [00] Copying .rocksdb/OPTIONS-000380 to /var/lib/mysql//.rocksdb/OPTIONS-000380
190512 18:17:05 [00] ...done
190512 18:17:05 [00] Copying .rocksdb/CURRENT to /var/lib/mysql//.rocksdb/CURRENT
190512 18:17:05 [00] ...done
190512 18:17:05 completed OK!
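One step the restore log does not show: since the copy-back runs as root, the restored files, including the hidden .rocksdb directory, must be handed back to the mysql user before the server is started. A minimal sketch, assuming the default datadir and the standard mysql user/group:

```shell
# Fix ownership of the restored datadir before starting MySQL
chown -R mysql:mysql /var/lib/mysql
```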

Now proceeding to start the server as below:

# service mysql start

After restoring, I check the row counts of both the RocksDB and InnoDB tables, to verify them against the counts taken before the backup.

Uptime: 12 sec  Threads: 2  Questions: 9  Slow queries: 0  Opens: 135  Flush tables: 2  Open tables: 111  Queries per second avg: 0.750
-----------------------
mysql> select count(*) from customerAttribute; (InnoDB)
+-----------+
| count(*)  |
+-----------+
| 126022626 |
+-----------+
1 row in set (55.14 sec)

mysql> select count(*) from customer; (RocksDB)
+----------+
| count(*) |
+----------+
| 37605935 |
+----------+
1 row in set (3 min 30.95 sec)

On a busy server with concurrent incoming writes to MyRocks, XtraBackup creates a checkpoint at the LSN and keeps the backup consistent up to that checkpoint.

In this blog we have seen a basic full backup of MyRocks on a hot, online server using XtraBackup, along with preparing and restoring it. What I have not covered here is incremental backup and restore.

Thanks to the Percona team for this much-awaited feature; it will definitely boost the adoption of MyRocks to the next level. We will continue to explore it further.

MySQL @OSD Poland on May 14

We would like to remind you that you can find MySQL at OpenSource Day Poland, in Warsaw, next Tuesday, May 14. Please find the details below:

  • Name: OpenSource Day
  • Date: Tue, May 14, 2019
  • Place: Warsaw, Poland
  • MySQL Session: "MySQL InnoDB Cluster: High Availability with no stress!" given by Vittorio Cioe, the MySQL Senior Sales Consultant

This year we are sharing our resources with our friends from the Oracle Linux team. You can find the Linux session right after the MySQL one in the online agenda. We are also going to share the booth in the expo area, so if you have any queries about either MySQL or Linux products, do not hesitate to come and talk to us at our booth! We are looking forward to talking to you in Warsaw!

Popular Docker Images for MySQL and MariaDB Server

A Docker image can be built by anyone who has the ability to write a script. That is why there are many similar images being built by the community, with minor differences but really serving a common purpose. A good (and popular) container image must have well-written documentation with clear explanations, an actively maintained repository and with regular updates. Check out this blog post if you want to learn how to build and publish your own Docker image for MySQL, or this blog post if you just want to learn the basics of running MySQL on Docker.

In this blog post, we are going to look at some of the most popular Docker images to run our MySQL or MariaDB server. The images we have chosen are general-purpose public images that can at least run a MySQL service. Some of them include non-essential MySQL-related applications, while others just serve as a plain mysqld instance. The listing here is based on the result of Docker Hub, the world's largest library and community for container images.


The following summarizes the different options:

  • mysql (Docker): Docker Hub mysql; project page mysql; base image Debian 9; supported database versions 5.5 to 8.0; platforms x86_64; image size (tag: latest) 129 MB; first commit May 18, 2014; contributors 18; GitHub stars 1267; forks 1291; downloads* 10M+.
  • mariadb (Docker): Docker Hub mariadb; project page mariadb; base images Ubuntu 18.04 (bionic) and Ubuntu 14.04 (trusty); supported database versions 5.5 to 10.4; platforms x86, ppc64le; image size (tag: latest) 120 MB; first commit Nov 16, 2014; contributors 9; GitHub stars 292; forks 245; downloads* 10M+.
  • percona (Docker): Docker Hub percona; project page percona-docker; base image CentOS 7; supported database versions 5.6 to 8.0; platforms x86, x86_64; image size (tag: latest) 193 MB; first commit Jan 3, 2016; contributors 15; GitHub stars 113; forks 121; downloads* 10M+.
  • mysql-server (Oracle): Docker Hub mysql/mysql-server; project page mysql-docker; base image Oracle Linux 7; supported database versions 5.5 to 8.0; platforms x86_64; image size (tag: latest) 99 MB; first commit May 18, 2014**; contributors 14; GitHub stars 320; forks 1291**; downloads* 10M+.
  • MySQL/MariaDB (CentOS): Docker Hub mysql-80-centos7 and mariadb-101-centos7; project page mysql-container; base images RHEL 7 and CentOS 7; supported database versions 5.5 to 10.2; platforms x86_64; image size (tag: latest) 178 MB; first commit Feb 15, 2015; contributors 30; GitHub stars 89; forks 146; downloads* 10M+.
  • MariaDB (Bitnami): Docker Hub bitnami/mariadb; project page bitnami-docker-mariadb; base images Debian 9 (minideb) and Oracle Linux 7; supported database versions 10.1 to 10.3; platforms x86_64; image size (tag: latest) 87 MB; first commit May 17, 2015; contributors 20; GitHub stars 152; forks 71; downloads* 10M+.

* Taken from Docker Hub page.
** Forked from MySQL docker project.

mysql (Docker)

The images are built and maintained by the Docker community with the help of the MySQL team. This can be considered the most popular publicly available MySQL server image hosted on Docker Hub, and one of the earliest on the market (the first commit was May 18, 2014). It has been forked ~1300 times and has 18 active contributors. It supports Docker versions down to 1.6 on a best-effort basis. At the time of writing, all the major MySQL versions are supported: 5.5, 5.6, 5.7 and 8.0, on the x86_64 architecture only.

Most of the MySQL images built by others are inspired by the way this image was built. The MariaDB, Percona and MySQL Server (Oracle) images follow a similar set of environment variables, configuration file structure and container initialization process flow.

The following environment variables are available on most of the MySQL container images on Docker Hub:

  • MYSQL_ROOT_PASSWORD
  • MYSQL_DATABASE
  • MYSQL_USER, MYSQL_PASSWORD
  • MYSQL_ALLOW_EMPTY_PASSWORD
  • MYSQL_RANDOM_ROOT_PASSWORD


The image size (tag: latest) is on the small side (129MB), and the image is easy to use, well maintained and updated regularly by the maintainer. If your application requires the latest MySQL database container, this is the most recommended public image you can use.
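As a quick illustration of the environment-variable convention these images share, here is a minimal sketch for launching the image; the container name, password and database name are placeholders, not values from the article:

```shell
# Placeholders: container name, password and database name are illustrative
docker run -d --name mysql-latest \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=appdb \
  -p 3306:3306 \
  mysql:latest
```

The -e flags map directly onto the environment variables listed above, which the image's init script reads on first start.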

mariadb (Docker)

The images are maintained by the Docker community with the help of the MariaDB team. They use the same build structure as the mysql (Docker) image, but come with support for multiple architectures:

  • Linux x86-64 (amd64)
  • ARMv8 64-bit (arm64v8)
  • x86/i686 (i386)
  • IBM POWER8 (ppc64le)

At the time of this writing, the images support MariaDB versions 5.5 up until 10.4, and the image with the "latest" tag is around 120MB in size. This image serves as a general-purpose image and follows the same instructions, environment variables and configuration file structure as mysql (Docker). Most applications that require MySQL as the database backend are commonly compatible with MariaDB, since both speak the same protocol.

MariaDB server started as a fork of MySQL but has since diverged from it. In terms of database architecture design, some MariaDB versions are not 100% compatible with, and are no longer drop-in replacements for, their respective MySQL versions. Check out this page for details. However, there are ways to migrate between the two by using logical backup. Simply put, once you are in the MariaDB ecosystem, you probably have to stick with it; mixing or switching between MariaDB and MySQL in a cluster is not recommended.

If you would like to set up a more advanced MariaDB setup (replication, Galera, sharding), there are other images built to achieve that objective much more easily, e.g, bitnami/mariadb as explained further down.

percona (Docker)

Percona Server is a fork of MySQL created by Percona. These are the only official Percona Server Docker images, created and maintained by the Percona team. It supports both x86 and x86_64 architecture and the image is based on CentOS 7. Percona only maintains the latest 3 major MySQL versions for container images - 5.6, 5.7 and 8.0.

The code repository shows that the first commit was Jan 3, 2016, with 15 active contributors, mostly from the Percona development team. Percona Server for MySQL comes with the XtraDB storage engine (a drop-in replacement for InnoDB) and follows the upstream Oracle MySQL releases very closely (including all their bug fixes), with some additional features like the MyRocks and TokuDB storage engines as well as Percona's own bug fixes. In a way, you can think of it as an improved version of Oracle's MySQL. You can easily switch between the MySQL and Percona Server images, provided you are running compatible versions.

The images recognize two additional environment variables for TokuDB and RocksDB for MySQL (available since v5.6):

  • INIT_TOKUDB - Set to 1 to allow the container to be started with the TokuDB storage engine enabled.
  • INIT_ROCKSDB - Set to 1 to allow the container to be started with the RocksDB storage engine enabled.
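Using the variables above, a sketch of starting a Percona Server container with RocksDB enabled; the container name, tag and password are placeholders:

```shell
# INIT_ROCKSDB=1 asks the image's init script to enable the
# RocksDB storage engine when the container starts
docker run -d --name percona-rocksdb \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e INIT_ROCKSDB=1 \
  percona:8.0
```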
mysql-server (Oracle)

The repository is forked from mysql by the Docker team. The images are created, maintained and supported by the MySQL team at Oracle, built on top of the Oracle Linux 7 base image. The MySQL 8.0 image comes with MySQL Community Server (minimal) and MySQL Shell, and the server is configured to expose the X protocol on port 33060. The minimal package was designed for use by the official Docker images for MySQL: it cuts out some non-essential pieces of MySQL like innochecksum, myisampack and mysql_plugin, but is otherwise the same product. It therefore has a very small image footprint of around 99 MB.

One important point to note is that the images have a built-in health check script, which is very handy for anyone who needs accurate availability logic. Otherwise, you have to write a custom Docker HEALTHCHECK command (or script) to check the container's health.

mysql-xx-centos7 & mariadb-xx-centos7 (CentOS)

The container images are built and maintained by the CentOS team and include a MySQL database server for OpenShift and general usage. The RHEL-based images can be pulled from Red Hat's Container Catalog, while the CentOS-based images are hosted publicly on Docker Hub on separate pages for each major version (only listing images with 10M+ downloads):

The image structure is a bit different: it doesn't make use of image tags like the others do, so the image names become a bit longer instead. That said, you have to go to the correct Docker Hub page to get the major version you want to pull.

According to the code repository page, 30 contributors have collaborated in the project since February 15, 2015. It supports MySQL 5.5 up until 8.0 and MariaDB 5.5 until 10.2 for x86_64 architecture only. If you heavily rely on Red Hat containerization infrastructure like OpenShift, these are probably the most popular or well-maintained images for MySQL and MariaDB.

The following environment variables influence the MySQL/MariaDB configuration file and they are all optional:

  • MYSQL_MAX_CONNECTIONS (default: 151)
  • MYSQL_MAX_ALLOWED_PACKET (default: 200M)
  • MYSQL_FT_MIN_WORD_LEN (default: 4)
  • MYSQL_FT_MAX_WORD_LEN (default: 20)
  • MYSQL_AIO (default: 1)
  • MYSQL_TABLE_OPEN_CACHE (default: 400)
  • MYSQL_KEY_BUFFER_SIZE (default: 32M or 10% of available memory)
  • MYSQL_SORT_BUFFER_SIZE (default: 256K)
  • MYSQL_READ_BUFFER_SIZE (default: 8M or 5% of available memory)
  • MYSQL_INNODB_BUFFER_POOL_SIZE (default: 32M or 50% of available memory)
  • MYSQL_INNODB_LOG_FILE_SIZE (default: 8M or 15% of available memory)
  • MYSQL_INNODB_LOG_BUFFER_SIZE (default: 8M or 15% of available memory)
  • MYSQL_DEFAULTS_FILE (default: /etc/my.cnf)
  • MYSQL_BINLOG_FORMAT (default: statement)

The images support MySQL auto-tuning when the image is run with the --memory parameter set; if you don't specify values for the following parameters, they will be calculated automatically based on the available memory:

  • MYSQL_KEY_BUFFER_SIZE (default: 10%)
  • MYSQL_READ_BUFFER_SIZE (default: 5%)
  • MYSQL_INNODB_LOG_FILE_SIZE (default: 15%)
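A sketch of the auto-tuning in action, assuming the CentOS-built MySQL 8.0 repository named in the table above; the container name, memory limit and password are placeholders:

```shell
# With --memory set and no MYSQL_INNODB_BUFFER_POOL_SIZE given,
# the image sizes the buffer pool from the container's memory limit
docker run -d --name mysql80-centos \
  --memory=512m \
  -e MYSQL_ROOT_PASSWORD=secret \
  centos/mysql-80-centos7
```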
bitnami/mariadb

The images are built and maintained by Bitnami, experts in software packaging for virtual and cloud deployments. The images are released daily with the latest distribution packages available and use a minimalist Debian-based image called minideb. The image size for the latest tag is thus the smallest of them all, at around 87MB. The project has 20 contributors, with the first commit happening on May 17, 2015. At the time of writing, it only supports MariaDB 10.1 up until 10.3.

One outstanding feature of this image is the ability to deploy a highly available MariaDB setup via Docker environment variables. A zero-downtime MariaDB master-slave replication cluster can easily be set up with the Bitnami MariaDB Docker image using the following environment variables:

  • MARIADB_REPLICATION_MODE: The replication mode. Possible values: master/slave. No defaults.
  • MARIADB_REPLICATION_USER: The replication user created on the master on first run. No defaults.
  • MARIADB_REPLICATION_PASSWORD: The replication user's password. No defaults.
  • MARIADB_MASTER_HOST: Hostname/IP of the replication master (slave parameter). No defaults.
  • MARIADB_MASTER_PORT_NUMBER: Server port of the replication master (slave parameter). Defaults to 3306.
  • MARIADB_MASTER_ROOT_USER: User on the replication master with access to MARIADB_DATABASE (slave parameter). Defaults to root.
  • MARIADB_MASTER_ROOT_PASSWORD: Password of the user on the replication master with access to MARIADB_DATABASE (slave parameter). No defaults.

In a replication cluster you can have one master and zero or more slaves. When replication is enabled, the master node is in read-write mode while the slaves are in read-only mode. For best performance it is advisable to limit reads to the slaves.
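Putting the variables above together, a sketch of a one-master, one-slave setup on a user-defined Docker network; the container names, network name and passwords are placeholders:

```shell
docker network create mariadb-net

# Master: creates the replication user on first run
docker run -d --name mariadb-master --network mariadb-net \
  -e MARIADB_ROOT_PASSWORD=secret \
  -e MARIADB_REPLICATION_MODE=master \
  -e MARIADB_REPLICATION_USER=repl \
  -e MARIADB_REPLICATION_PASSWORD=replpass \
  bitnami/mariadb

# Slave: reaches the master by container name on the shared network
docker run -d --name mariadb-slave --network mariadb-net \
  -e MARIADB_REPLICATION_MODE=slave \
  -e MARIADB_REPLICATION_USER=repl \
  -e MARIADB_REPLICATION_PASSWORD=replpass \
  -e MARIADB_MASTER_HOST=mariadb-master \
  -e MARIADB_MASTER_ROOT_PASSWORD=secret \
  bitnami/mariadb
```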

In addition, these images also support deployment on Kubernetes as Helm Charts. You can read more about the installation steps in the Bitnami MariaDB Chart GitHub repository.


There are tons of MySQL server images that have been contributed by the community and we can't cover them all here. Keep in mind that these images are popular because they are built for general purpose usage. Some less popular images can do much more advanced stuff, like database container orchestration, automatic bootstrapping and automatic scaling. Different images provide different approaches that can be used to address other problems.

Tags:  MySQL MariaDB docker percona

MySQL Server Deployment with Docker – Basic Installation Instructions for Both the Community and Enterprise Versions

An easy way to set up one or multiple MySQL server deployments on a single server is to use Docker, a program that performs operating-system-level virtualization. Docker is simple to use and allows you to run multiple containers at once.

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. (Source: Docker.)

In other words, think of a container as a virtual machine without the graphical user interface (GUI). There are third-party GUIs available, but for this post I am going to use a terminal window.

I am not a Docker expert, so I did have to spend some time figuring out the basics. But with this tutorial, you should be able to install the Docker software and a MySQL server in less than fifteen minutes.

Let’s get started

First, you will need to download and install Docker. I am not going to cover this part, but installation is fairly straightforward. I downloaded and installed the Docker Desktop for my Mac.

I already have a MySQL instance installed on this server, so I will install the new instance using a different port number than the default of 3306. On the server side, I will use port 3307 to connect to the default MySQL port of 3306 inside the Docker container. By using a different external port number, I can install multiple MySQL instances on one server while each still uses the default port inside its container. MySQL has its own set of Docker container images (with the build sources on GitHub), and I can install MySQL directly from the command line without downloading anything separately. You can create your own local repository, but for this example Docker will pull the latest version for me.

To install MySQL, I opened a terminal window and ran the following command, changing the first port number (the external port) to 3307. The second port number is the port for the MySQL instance inside the container. You will notice that Docker first checks the local repository, and once it can't locate the image there, it pulls it from the mysql/mysql-server repository on Docker Hub. Installation is done via the docker run command.

$ docker run -p 3307:3306 -d --name mysql -e MYSQL_ROOT_PASSWORD=password mysql/mysql-server
Unable to find image 'mysql/mysql-server:latest' locally
latest: Pulling from mysql/mysql-server
35defbf6c365: Pull complete
e13cf68584a3: Pull complete
259d03b6a792: Pull complete
892ac46af8c0: Pull complete
Digest: sha256:8dd16a45d0e3e789f2006b608abb1bb69f1a8632a338eef89aec8d6fccda7793
Status: Downloaded newer image for mysql/mysql-server:latest
d8695b074a014f31c65112fb00ec1e5ad79d4c5ba94eb3be1d0fa424f14f414c

I can then verify to see if the MySQL container is up and running via the Docker container command:

$ docker container ls
CONTAINER ID  IMAGE               COMMAND                 CREATED        STATUS                           PORTS                              NAMES
0b55334fedcb  mysql/mysql-server  "/ mysq…"  2 minutes ago  Up 3 seconds (health: starting)  33060/tcp,>3306/tcp  mysql

Note: To start or stop the container, simply use the Docker start/stop commands, where mysql is the name of the container, not the application being run inside the container:

$ docker stop mysql
mysql
$ docker start mysql
mysql

Note: If the container isn't running and you need to start it, you will see an error like this when you try to connect to it:

$ docker exec -it mysql bash
Error response from daemon: Container d8695b074a014f31c65112fb00ec1e5ad79d4c5ba94eb3be1d0fa424f14f414c is not running

I now have a copy of the MySQL container image stored locally on my server. I can look at all of the Docker images installed so far with the Docker images command:

$ docker images -a
REPOSITORY           TAG      IMAGE ID       CREATED       SIZE
mysql/mysql-server   latest   39649194a7e7   2 weeks ago   289MB

I can verify if MySQL is running by using the Docker container command:

$ docker container ls -a
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS                      PORTS   NAMES
0b55334fedcb   mysql/mysql-server   "/ --ip…"   23 seconds ago   Exited (1) 22 seconds ago           mysql

Now that I have MySQL installed and have verified that the container is running, I can connect to the container using the docker exec command. (The word mysql here is the container name, not the MySQL database instance.)

$ docker exec -it mysql bash
bash-4.2#

After connecting, I am now at a regular Linux prompt. The MySQL data directory is stored in /var/lib/mysql, and the configuration file is in /etc/my.cnf.

bash-4.2# cd /var/lib/mysql
bash-4.2# ls -l
total 174160
drwxr-x--- 2 mysql mysql     4096 May  9 17:10 #innodb_temp
-rw-r----- 1 mysql mysql       56 May  9 17:10 auto.cnf
-rw-r----- 1 mysql mysql      178 May  9 17:10 binlog.000001
-rw-r----- 1 mysql mysql      155 May  9 17:10 binlog.000002
-rw-r----- 1 mysql mysql       32 May  9 17:10 binlog.index
-rw------- 1 mysql mysql     1676 May  9 17:10 ca-key.pem
-rw-r--r-- 1 mysql mysql     1112 May  9 17:10 ca.pem
-rw-r--r-- 1 mysql mysql     1112 May  9 17:10 client-cert.pem
-rw------- 1 mysql mysql     1676 May  9 17:10 client-key.pem
-rw-r----- 1 mysql mysql     5456 May  9 17:10 ib_buffer_pool
-rw-r----- 1 mysql mysql 50331648 May  9 17:10 ib_logfile0
-rw-r----- 1 mysql mysql 50331648 May  9 17:10 ib_logfile1
-rw-r----- 1 mysql mysql 12582912 May  9 17:10 ibdata1
-rw-r----- 1 mysql mysql 12582912 May  9 17:10 ibtmp1
drwxr-x--- 2 mysql mysql     4096 May  9 17:10 mysql
-rw-r----- 1 mysql mysql 29360128 May  9 17:10 mysql.ibd
srwxrwxrwx 1 mysql mysql        0 May  9 17:10 mysql.sock
-rw------- 1 mysql mysql        2 May  9 17:10 mysql.sock.lock
drwxr-x--- 2 mysql mysql     4096 May  9 17:10 performance_schema
-rw------- 1 mysql mysql     1676 May  9 17:10 private_key.pem
-rw-r--r-- 1 mysql mysql      452 May  9 17:10 public_key.pem
-rw-r--r-- 1 mysql mysql     1112 May  9 17:10 server-cert.pem
-rw------- 1 mysql mysql     1676 May  9 17:10 server-key.pem
drwxr-x--- 2 mysql mysql     4096 May  9 17:10 sys
-rw-r----- 1 mysql mysql 12582912 May  9 17:10 undo_001
-rw-r----- 1 mysql mysql 10485760 May  9 17:10 undo_002
bash-4.2# ls -l /etc/my.cnf
-rw-r--r-- 1 root root 1239 May  9 17:10 /etc/my.cnf

I can log into MySQL the same way as if it was a regular MySQL instance. (When I created the container, I used “password” as the password, but you will want a more secure password)

bash-4.2# mysql -uroot -ppassword
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 18
Server version: 8.0.16 MySQL Community Server - GPL

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Remember: since I am connecting from inside the container rather than from outside of Docker, I don't have to use port 3307. But I will have to use it when connecting from outside, for example via MySQL Workbench.

Before I connect via MySQL Workbench, I will want to create a different user for this connection, and use this user for my Workbench connection:

mysql> CREATE USER 'docker'@'%' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.02 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'docker'@'%';
Query OK, 0 rows affected (0.01 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)

I can now create a MySQL Workbench connection. I will open Workbench, and click on the plus symbol to create a new connection.

I need to provide a connection name (Docker Container 3307) and the hostname (, and I need to specify port 3307. If you don't have another installation of MySQL on your server, you can use the default port of 3306. I will store the password in my keychain by clicking on "Store in Keychain".

To test and see if you have the correct information, click the "Test Connection" button.

I can now use MySQL Workbench to connect to the MySQL Docker container:

That’s it. I now have MySQL installed as a Docker container, and I can access it via a terminal window or via Workbench.

MySQL Enterprise Version

The MySQL Community Edition is a great database server, but if you are going to run a database in a production environment, I would recommend purchasing a MySQL Enterprise Edition license.

The MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime. It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications. (Source)

The steps for installing the Enterprise Edition are almost the same as for the Community Edition, but you have to download the Docker image from the Oracle support portal (My Oracle Support), and you will need a license to access support. If you are already a customer, log in to the support web site, go to the "Patches and Updates" tab, click on "Patch Search", then under "Product or Family (Advanced)" search for the MySQL version you want, enter a description of "Docker", and click the "Search" button.

MySQL only has the Linux version of the Enterprise Edition, but I can still install and run it on my Mac. The download file contains a tar file and a README file. For this example, the tar file is named mysql-enterprise-server-8.0.16.tar. I placed this file in my home directory, and from a terminal window, I will need to load the file into the repository using the Docker load command:

$ docker load -i mysql-enterprise-server-8.0.16.tar
d6b2dcf96e3d: Loading layer [==================================================>]  220.6MB/220.6MB
b84b6c2a237e: Loading layer [==================================================>]  8.704kB/8.704kB
141e4cf4cec5: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: mysql/enterprise-server:8.0

I can now see the Enterprise Edition image along with the Community Edition image:

$ docker images -a
REPOSITORY                TAG      IMAGE ID       CREATED       SIZE
mysql/mysql-server        latest   39649194a7e7   2 weeks ago   289MB
mysql/enterprise-server   8.0      d4410562024a   2 weeks ago   337MB

To install the Enterprise Edition, I only need to change a few of the variables from before. I will also want to use port 3308, since 3306 and 3307 are in use. (Remember – the first port number is the “external” server port number and the second is the port number inside the container)
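Only the external half of the -p mapping has to be unique per container. As a tiny sketch of picking the next free external port (the used-port list here is an assumption matching this example, where 3306 and 3307 are already taken):

```shell
# Hypothetical helper: with 3306 and 3307 in use, the next free
# external port is 3308, which is what we pass as -p 3308:3306.
used=" 3306 3307 "
port=3306
# Walk upward until we hit a port not in the used list.
while echo "$used" | grep -q " $port "; do port=$((port + 1)); done
echo "next free external port: $port"
```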

$ docker run -p 3308:3306 -d --name mysqlEE -e MYSQL_ROOT_PASSWORD=password mysql/enterprise-server:8.0

I can now see the Enterprise Edition container:

$ docker container ls -a
CONTAINER ID   IMAGE                         COMMAND                  CREATED         STATUS                            PORTS                                 NAMES
5b4df641d044   mysql/enterprise-server:8.0   "/ mysq…"   6 seconds ago   Up 4 seconds (health: starting)   33060/tcp,>3306/tcp   mysqlEE
0b55334fedcb   mysql/mysql-server            "/ mysq…"   2 hours ago     Up 2 hours (healthy)              33060/tcp,>3306/tcp   mysql

I can connect to the container and open MySQL, using the container name mysqlEE:

$ docker exec -it mysqlEE bash
bash-4.2# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.16-commercial MySQL Enterprise Server - Commercial

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

I will want to create a new user like before, and then I can create a Workbench connection as well, using port 3308.

That’s it. I now have two instances of MySQL running in two separate containers on my server.

Deleting images and containers

Here are the commands to delete any images or containers. To remove one or more specific images, use the Docker images command to see what images are available:

$ docker images -a
REPOSITORY                TAG      IMAGE ID       CREATED       SIZE
mysql/mysql-server        latest   39649194a7e7   2 weeks ago   289MB
mysql/enterprise-server   8.0      d4410562024a   2 weeks ago   337MB

And you can delete the image by deleting the IMAGE ID, by using the Docker rmi command:

$ docker rmi 39649194a7e7
Untagged: mysql/mysql-server:latest
Untagged: mysql/mysql-server@sha256:8dd16a45d0e3e789f2006b608abb1bb69f1a8632a338eef89aec8d6fccda7793
Deleted: sha256:39649194a7e780713ee5681d3bc5ff9e1fddaca744113d4a64ed61f67b7de601
Deleted: sha256:46837581982573a52d3af65de8ac243749c3f8bdf16043541e1a3cfcac721f6b
Deleted: sha256:e311a637abb5186c3bafe967fbb4d10c16258b4b878258ed0ceaff9a07969930
Deleted: sha256:348e9a791d8deb3d6f7ea979c768db0086dbd5172fdbe065649aebfebe509c46
Deleted: sha256:c4a7cf6a6169fb6af5316b4917b6f3417d419b5b5c1e5befd74746996088fc57

To remove a container, use the Docker container command to get a list of containers:

$ docker container ls -a
CONTAINER ID   IMAGE                         COMMAND                  CREATED         STATUS                            PORTS                                 NAMES
5b4df641d044   mysql/enterprise-server:8.0   "/ mysq…"   6 seconds ago   Up 4 seconds (health: starting)   33060/tcp,>3306/tcp   mysqlEE
0b55334fedcb   mysql/mysql-server            "/ mysq…"   2 hours ago     Up 2 hours (healthy)              33060/tcp,>3306/tcp   mysql

And you can delete the container by its CONTAINER ID via the docker container command:

$ docker container rm 5b4df641d044
5b4df641d044

For more information on installing MySQL with Docker, see Deploying MySQL on Linux with Docker.



Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] and on LinkedIn. Tony is the author of Twenty Forty-Four: The League of Patriots 
Visit for more information. Tony is the editor/illustrator for NASA Graphics Standards Manual Remastered Edition 
Visit for more information.

MySQL InnoDB Cluster Setup on Windows Platform


This is a sample tutorial for setting up MySQL Servers on Windows to form a three-node InnoDB Cluster. This tutorial includes:
- Configuring MySQL Instance
- Installing MySQL as Windows Service
- Using MySQL Shell to Configure Remote Admin User
- Using MySQL Shell to Create the InnoDB Cluster
- Bootstrapping MySQL Router Configuration
- Running MySQL Router with PowerShell
- Installing MySQL Router as Service
- Finally testing the Router Connection to RW and RO Nodes

Windows Platform
MySQL Version : 8.0.16 :  MySQL Server, MySQL Shell and MySQL Router
Program Installation Path
C:\Program Files\MySQL\MySQL Server 8.0
C:\Program Files\MySQL\MySQL Shell 8.0
C:\Program Files\MySQL\MySQL Router 8.0

The tutorial can be executed on a SINGLE Windows machine, but ideally it should be run on 3 physically different machines. (For multiple machines, M1, M2 and M3 are defined in this tutorial; on a single machine, M1, M2 and M3 are all the same machine.)
Data Directory resides on E:\tempdata

Instance 1 : 3310 - E:\tempdata\data1
Instance 2 : 3320 - E:\tempdata\data2
Instance 3 : 3330 - E:\tempdata\data3

File System :
NTFS is used for BOTH C:\ and E:\

exFAT should not be used, because it does not support privilege (permission) settings. MySQL Router checks the privilege settings on its files at startup; on exFAT everything is effectively owned by 'everyone', so MySQL Router cannot be started.

In order to have 3 MySQL instances running on the same machine, the port numbers and data directories must be different. Please check the configuration files my1.cnf, my2.cnf and my3.cnf for details.

Configuration Files

my1.cnf:

[mysqld]
datadir=E:/tempdata/data1
basedir="C:/Program Files/MySQL/MySQL Server 8.0"
log-error=E:/tempdata/data1/my.error
port=3310
socket=E:/tempdata/data1/my.sock
mysqlx-port=33100
mysqlx-socket=E:/tempdata/data1/myx.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=101
# enable gtid
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=true
# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64

my2.cnf is identical except for:
datadir=E:/tempdata/data2
log-error=E:/tempdata/data2/my.error
port=3320
socket=E:/tempdata/data2/my.sock
mysqlx-port=33200
mysqlx-socket=E:/tempdata/data2/myx.sock
server-id=102

my3.cnf is identical except for:
datadir=E:/tempdata/data3
log-error=E:/tempdata/data3/my.error
port=3330
socket=E:/tempdata/data3/my.sock
mysqlx-port=33300
mysqlx-socket=E:/tempdata/data3/myx.sock
server-id=103
Configuring and Initializing Database
As shown above, the my1.cnf, my2.cnf and my3.cnf files are ready.
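Since the three files differ only in a handful of values (datadir, log-error, port, sockets, mysqlx-port and server-id), they can be generated from a single template. A sketch in POSIX shell (e.g. Git Bash on Windows), using the same paths and ports as this tutorial:

```shell
# Generate my1.cnf, my2.cnf and my3.cnf from one template.
# Ports 3310/3320/3330 and mysqlx ports 33100/33200/33300 as above.
for i in 1 2 3; do
  port=$((3300 + 10 * i))            # 3310, 3320, 3330
  cat > "my${i}.cnf" <<EOF
[mysqld]
datadir=E:/tempdata/data${i}
basedir="C:/Program Files/MySQL/MySQL Server 8.0"
log-error=E:/tempdata/data${i}/my.error
port=${port}
socket=E:/tempdata/data${i}/my.sock
mysqlx-port=$((port * 10))
mysqlx-socket=E:/tempdata/data${i}/myx.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=10${i}
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=true
master-info-repository=TABLE
relay-log-info-repository=TABLE
transaction-write-set-extraction=XXHASH64
EOF
done
```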

Assuming MySQL Installation in PATH
Create Configuration File (e.g. my1.cnf, my2.cnf and my3.cnf) – sitting on M1, M2 and M3
On M1 : mysqld --defaults-file=my1.cnf --initialize-insecure
On M2 : mysqld --defaults-file=my2.cnf --initialize-insecure
On M3 : mysqld --defaults-file=my3.cnf --initialize-insecure

--initialize-insecure initializes the MySQL database with the super user 'root'@'localhost' having NO PASSWORD.
PLEASE CHANGE the PASSWORD for the super user 'root'@'localhost' (for security reasons).

Installing MySQL as Service
The service names are 'mysql3310', 'mysql3320' and 'mysql3330'.

Assuming MySQL Binary in PATH
M1: mysqld --install mysql3310 --defaults-file=my1.cnf
M2: mysqld --install mysql3320 --defaults-file=my2.cnf
M3: mysqld --install mysql3330 --defaults-file=my3.cnf
If you have the MySQL Notifier Installed, the mysql3310/mysql3320/mysql3330 services are registered to the MySQL Notifier.

Starting up the MySQL services
From the Windows Services panel, or with the SC command, start each service:
M1: SC start mysql3310
M2: SC start mysql3320
M3: SC start mysql3330

Once the Servers are started, as good practice, change the password for the SUPER user.

The root password is EMPTY (no password) when MySQL Server is initialized with the '--initialize-insecure' option.

Note: The root user can only access MySQL locally on the machine. The root@localhost account is created at initialization; remote access is not allowed.

M1: mysql -uroot -h127.0.0.1 -P3310
    mysql> set password='<your password>';

M2: mysql -uroot -h127.0.0.1 -P3320
    mysql> set password='<your password>';

M3: mysql -uroot -h127.0.0.1 -P3330
    mysql> set password='<your password>';

Configuring Remote Admin User (gradmin)

Start MySQL Shell on M1

mysqlsh> dba.configureInstance('root:<your password>@ip1:3310', { clusterAdmin: 'gradmin', clusterAdminPassword: 'grpass' })

Start MySQL Shell on M2

mysqlsh> dba.configureInstance('root:<your password>@ip2:3320', { clusterAdmin: 'gradmin', clusterAdminPassword: 'grpass' })

Start MySQL Shell on M3

mysqlsh> dba.configureInstance('root:<your password>@ip3:3330', { clusterAdmin: 'gradmin', clusterAdminPassword: 'grpass' })

Once we have the remote admin user (e.g. gradmin), this user can log in from anywhere, so administration can be done remotely.

Creating MySQL InnoDB Cluster

The InnoDB Cluster exchanges data between the servers, so it is good practice to use a separate subnet for the cluster data exchange instead of the public network. In the following commands, the subnet used is purely an example for the cluster subnet; the machine must have an interface on that subnet, and the localAddress has to be defined for each server.
You may want to change this subnet, and the localAddress accordingly, if it is a SINGLE machine configuration.

Note: Connecting to Node 1 (3310) MUST be done via its IP address, even if it is on the same machine. DO NOT use the loopback address. Please substitute your ip1 (the IP address of M1) in the following commands.

Start MySQL Shell on M1
mysqlsh> \connect gradmin:grpass@ip1:3310
mysqlsh> dba.createCluster('mycluster', {
   } )
mysqlsh> var xx = dba.getCluster()
mysqlsh> xx.addInstance('gradmin:grpass@ip2:3320', {localAddress: ''})
mysqlsh> xx.addInstance('gradmin:grpass@ip3:3330', {localAddress: ''})
mysqlsh> xx.status()

Bootstrapping MySQL Router Configuration

Bootstrap on the application machine (it does not need to be one of the MySQL Server machines).
For testing, one of M1/M2/M3 is chosen.

INSTALL MySQL ROUTER if needed for any application machine.

Assuming MySQL Router in PATH, and the ROUTER configuration is to be created on E:\tempdata\config\myrouter

mysqlrouter --bootstrap gradmin:grpass@ip1:3310 --directory e:/tempdata/config/myrouter --name mycluster

cd e:\tempdata\config\myrouter\

Starting the Router with PowerShell (for testing)
PS > e:/tempdata/config/myrouter/start.ps1
Check the log file if necessary.

Installing MySQL Router as a Windows Service

CMD – Run as Administrator
The service name is always 'MySQL Router'.
First, remove any existing MySQL Router service:
mysqlrouter --remove-service

Windows Service Installation
mysqlrouter --install-service -c "e:/tempdata/config/myrouter/mysqlrouter.conf"

Start the Windows Service 'MySQL Router' from the command line (CMD):
SC start MySQLRouter

Testing MySQL Router Connection to RW/RO nodes

Connecting to the R/W node (e.g. M1:3310)
On M1: mysql -uroot -h127.0.0.1 -P3310
mysql> create user demo@'%' identified by 'demo';
mysql> grant all on *.* to demo@'%';

Connecting to the ROUTER ports on the machine where the Router is configured and started (6446 routes to the R/W node; 6447 routes to the R/O nodes in round-robin, so running the second command twice should return different hosts):
mysql -udemo -pdemo -h127.0.0.1 -P6446 -e "select @@hostname, @@port;"
mysql -udemo -pdemo -h127.0.0.1 -P6447 -e "select @@hostname, @@port;"
mysql -udemo -pdemo -h127.0.0.1 -P6447 -e "select @@hostname, @@port;"

Enjoy the testing.

CRUM conjecture - read, write, space and cache amplification

The RUM Conjecture asserts that an index structure can't be optimal for all of read, write and space. I will ignore whether optimal is about performance or efficiency (faster is better vs efficient-er is better). I want to use CRUM in place of RUM where C stands for database cache.

The C in CRUM is the amount of memory per key-value pair (or row) the DBMS needs so that either a point query or the first row from a range query can be retrieved with at most X storage reads. The C can also be reported as the minimal database : memory ratio to achieve at most X storage reads per point query.

My points here are:
  • There are 4 amplification factors - read, write, space and cache
  • CRUM is for comparing index structure efficiency and performance
  • Read and write amplification have CPU and IO parts
  • Write amplification has immediate and deferred parts
Many "X is faster than Y" papers and articles neglect to quantify the tradeoffs made in pursuit of performance. I hope that changes and that we develop better methods for quantifying the tradeoffs (a short rant on defining better).

Amplification factors (RUM -> CRUM) are used to compare index structures. Values for the factors are measured for real workloads and estimated for hypothetical ones. The comparison is the most valuable part. Knowing that the deferred CPU write-amp on inserts for a b-tree is 30 is not that useful. Knowing that it is 3X or 0.5X the value for an LSM is useful.

Workload matters. For estimates of amplification I usually assume uniform distribution because this simplifies the estimate. But there is much skew in production workloads and that impact can be measured to complement the estimates.

Read Amplification
This post is an overview for read-amp. This post explains it in detail for an LSM. There are two parts to read-amp -- CPU and IO. Thus for each of the three basic operations (point query, range seek, range next) there are 2 values for read-amp: CPU and IO. I have yet to consider deferred read-amp and by read-amp I mean immediate read-amp.

Metrics for CPU read-amp include CPU time, bytes/pages read and key comparisons. I use key comparisons when predicting performance and CPU time when running tests. I have not used bytes/pages read. While key comparisons are a useful metric they ignore other CPU overheads including hash table search, bloom filter search, page read and page decompress.
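As a back-of-envelope illustration of key comparisons as a CPU read-amp metric (the row count and the uniform point-lookup assumption are mine, not measured values):

```shell
# Assumed workload: point lookups over 1e9 keys in a b-tree.
# With binary search inside pages, comparisons per lookup ~ log2(N).
rows=1000000000
awk -v n="$rows" 'BEGIN { printf "key comparisons per lookup ~ %.0f\n", log(n)/log(2) }'
# prints: key comparisons per lookup ~ 30
```

Comparing that number across structures (b-tree vs LSM memtable plus SST searches) is where the metric becomes useful.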

Metrics for IO read-amp include bytes read and pages read. I use pages read for disk and bytes read for SSD because disks are IOPs limited for small reads. IO read-amp implies extra CPU read-amp when the database is compressed. Decompressing pages after storage reads can use a lot of CPU with fast storage devices and even more with zlib but you should be using zstd.

With estimates for hypothetical workloads I assume there is a cache benefit as explained in the Cache Amplification section. This is likely to mean that comparisons assume a different amount of memory for index structures that have more or less cache-amp. For real tests I mostly run with database >> memory but don't attempt to use the least memory that satisfies the cache-amp X reads constraint.

Write Amplification
This post is an overview for write-amp. This post explains it in detail for an LSM. Write-amp has two dimensions: CPU vs IO, immediate vs deferred. For each operation (insert, delete, update) there are 4 values for write-amp: immediate CPU, deferred CPU, immediate IO and deferred IO.

The immediate write-amp occurs during the write. The deferred write-amp occurs after the write completes and includes writing back dirty pages in a b-tree and compaction in an LSM.

Possible metrics for CPU write-amp include bytes written, pages written, key comparisons and pages/bytes (de)compressed. Bytes and (in-memory) pages written are useful metrics for in-memory DBMS but my focus is on databases >> memory.

Possible metrics for IO write-amp include bytes written and pages written. These can be estimated for hypothetical workloads and measured for real ones. The choice between bytes or pages written might depend on whether disk or SSD is used as one is limited by ops/s and the other by transfer rate. If you use iostat to measure this then figure out whether Linux still counts bytes written as bytes trimmed.

Examples of deferred and immediate write-amp:
  • The InnoDB change buffer is deferred IO and CPU. Checking the change buffer and applying changes is deferred CPU. The deferred IO is from reading pages from storage to apply changes.
  • For a b-tree: page writeback for a b-tree is deferred IO, compression and creating the page checksum are deferred CPU, finding the in-memory copy of a page is immediate CPU, reading the page on a cache miss is immediate IO.
  • An LSM insert has immediate/deferred IO/CPU. 
    • Immediate CPU - key comparisons for memtable insert
    • Immediate IO - redo log write
    • Deferred IO - reading uncached pages for input SSTs and writing output SSTs during compaction
    • Deferred CPU - decompression, compression and key comparisons while merging input SSTs into output SSTs during compaction. Note that compaction does a merge, not a sort or merge+sort.
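As a rough sketch of turning the deferred IO part into a single estimate: for a leveled LSM, deferred IO write-amp is often approximated as the per-level fanout summed over the levels (the fanout and level count below are assumptions, not measurements):

```shell
# Assumed values: leveled compaction, per-level fanout 10, 4 levels
# beyond the memtable. Each byte is rewritten ~fanout times per level.
fanout=10
levels=4
echo "deferred IO write-amp ~ $((fanout * levels))"
# prints: deferred IO write-amp ~ 40
```

Real workloads with skew, and tiered rather than leveled compaction, change this estimate substantially.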

Space Amplification
Space-amp is the size of the database files versus the size of the data, or the ratio of the physical to logical database size. An estimate for the logical size is the size of the uncompressed database dump with some adjustment if secondary indexes are used. The space-amp is reduced by compression. It is increased by fragmentation in a b-tree and uncompacted data in an LSM.

It is best to measure this after the DBMS has reached a steady state to include the impact of fragmentation and uncompacted data.
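A hedged numeric example of the physical : logical ratio (both sizes are invented for illustration):

```shell
# Assumed sizes: compressed LSM files on disk vs the uncompressed
# logical dataset (e.g. a dump). Ratio below 1 means compression wins
# despite uncompacted data.
physical_gb=60
logical_gb=100
awk -v p="$physical_gb" -v l="$logical_gb" 'BEGIN { printf "space-amp = %.1f\n", p / l }'
# prints: space-amp = 0.6
```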

Cache Amplification
I briefly described cache-amp in this post. The cache-amp describes memory efficiency. It represents the minimal database : memory ratio such that a point query requires at most X storage reads. A DBMS with cache-amp=10 (C=10) needs 10 times more memory than one with C=100 to satisfy the at most X reads constraint.

It can be more complicated to consider cache-amp for range seek and range next because processing them is more complicated for an LSM or index+log algorithm. Therefore I usually limit this to point queries.

For a few years I limited this to X=1 (at most 1 storage read). But it will be interesting to consider X=2 or 3. With X=1:
  • For a b-tree all but the leaf level must be in cache
  • For an LSM the cache must include all bloom filter and index blocks, all data blocks but the max level
  • For an index+log approach it depends (wait for another blog post)
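As an arithmetic sketch of the b-tree case with assumed numbers: with X=1, all non-leaf pages must stay cached, which is roughly db_size/fanout of memory.

```shell
# Assumed: 1024 GB b-tree, per-level fanout 128, X=1 (at most the leaf
# read goes to storage). Non-leaf pages total ~ db_size/fanout.
db_gb=1024
fanout=128
cache_gb=$((db_gb / fanout))
echo "cache needed: ${cache_gb} GB; database:memory ratio C ~ $((db_gb / cache_gb))"
# prints: cache needed: 8 GB; database:memory ratio C ~ 128
```

An LSM with bloom filters can usually satisfy the same X=1 constraint with less memory, which is the point of comparing C across structures.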


the MySQL Team in Austin, TX

At the end of the month, some engineers of the MySQL Team will be present in Austin, TX !

We will attend the first edition of Percona Live USA in Texas.

During that show, you will have the chance to meet key engineers, product managers, as well as Dave and myself.

Let me introduce the team that will be present during the conference:

The week will start with the MySQL InnoDB Cluster full day tutorial by Kenny and myself. This is a full hands-on tutorial in which we will start by migrating a classical asynchronous master-replicas topology to a new MySQL InnoDB Cluster. We will then work through several labs where we will see how to maintain our cluster.

If you registered for our tutorial, please come with a laptop able to run 3 VirtualBox VMs, which you can install from a USB stick. Please free up some disk space and install the latest VirtualBox on your system.

This year, I will also have the honor of presenting the State of the Dolphin during the keynote.

During the conference, you will be able to learn a lot from our team on many different topics. Here is the list of sessions by our engineers:

We will also be present in the expo hall, where we will welcome you at our booth. We will show you demos of MySQL InnoDB Cluster and MySQL 8.0 Document Store, where NoSQL and SQL live together in peace! Don't hesitate to visit us during the show.

We will also be present during the Community Dinner and will enjoy hearing your thoughts about MySQL!

See you in almost 2 weeks in Texas !

Troubleshooting Data Differences in a MySQL Database Cluster

Overview The Skinny

From time to time we are asked how to check whether or not there are data discrepancies between Master/Slave nodes within a MySQL (or MariaDB) cluster that's managed with Tungsten Clustering. This is always a challenging task, not least because we hope and believe that our replication mechanism avoids such occurrences. That said, there can be factors outside of our control that can appear to "corrupt" data, such as the inadvertent execution of DML against a slave using a root-level user account.

Tungsten Replicator, the core replication component in our Tungsten Clustering solution for MySQL (and MariaDB), is just that, a replicator: it takes transactions from the binary logs and replicates them around. The replicator isn't a data synchronisation tool in that respect; it won't and can't compare tables. This is by design, and it is one of the many benefits of the product: it avoids a) tight coupling to the database and b) the inherent performance impact of what could be incredibly resource-consuming processes, plus the complication of confidently completing such checks in extremely active environments.

Agenda What’s Here?

The following steps walk through the recommended troubleshooting methods, based on a 3-node cluster running MySQL 5.7 Community Edition, managed by Tungsten Clustering 6.0.

What’s Not Here?

There are a number of tools that can help identify and fix data drift, and even structural differences. However, a lot of them assume native MySQL replication is in place, so the default usage of such products within a Tungsten Clustering environment can cause further issues, and may even fail completely.

In this blog post, I am not going to cover best practices for avoiding data drift, nor the rights and wrongs of it; what I will cover is how to utilise existing third-party tools designed for such tasks.

Identify Structural Differences Simple, Yet Effective

If you suspect that there are differences in a table's structure, a simple way to check is to compare the schema DDL. mysqldump offers an easy and fast way to extract DDL without row data; then, using simple OS commands, we can identify any differences.

  1. Extract DDL on the Master node, specifying the schema name in place of <schema>:
    mysqldump -u root -p --no-data -h localhost --databases <schema> >master.sql
  2. Repeat the same on the Slave node(s):
    mysqldump -u root -p --no-data -h localhost --databases <schema> >slave.sql
  3. Now, using diff, you can compare the results:
    diff master.sql slave.sql

Using the output of diff, you can then craft the necessary DDL statements to re-align your structure.
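To make the diff step concrete, here is a self-contained sketch using two fabricated mini-dumps (the table definition and the varchar widths are hypothetical, loosely mirroring the employees sample schema used later in this post):

```shell
# Fabricated stand-in for the master's dump.
cat > master.sql <<'EOF'
CREATE TABLE `departments` (
  `dept_no` char(4) NOT NULL,
  `dept_name` varchar(40) NOT NULL,
  PRIMARY KEY (`dept_no`)
);
EOF
# Simulate a slave whose dept_name column was (hypothetically) narrowed.
sed 's/varchar(40)/varchar(20)/' master.sql > slave.sql
# diff exits non-zero when the files differ, hence the || true.
diff master.sql slave.sql || true
```

The changed lines in the diff output tell you exactly which ALTER TABLE to run on the slave to re-align the structure.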

Identify Data Differences The Real Challenge

The first challenge when looking at data differences is that in busy environments, and especially if you are running a Multi-Site Multi-Master (pre v6) or Composite Multi-Master (v6) topology, you may well be presented with false positives due to the constantly changing environment.

It is possible to use pt-table-checksum from the Percona Toolkit to identify data differences, providing you use the syntax described below for bypassing the native replication checks.

First of all, it is advisable to familiarise yourself with the product by reading through the vendor's own documentation.

Once you are ready, ensure you install the latest version of the Percona Toolkit on all nodes, or at least ensure that the version you install is compatible with your release of MySQL.

Next, execute the following on the Master node:

shell> pt-table-checksum --set-vars innodb_lock_wait_timeout=500 --recursion-method=none --ignore-databases=mysql --ignore-databases-regex=tungsten* h=localhost,u=tungsten,p=secret

It is important to include the ignore-database options – we do not want to compare the mysql schema, nor do we want to compare any tungsten tracking schemas.

You can add additional schemas to these options if necessary within your environment.

On first run, this will create a database called percona, and within that database a table called checksums. The process will gather checksum information on every table in every database, excluding any listed using the ignore options mentioned previously. The tables and the checksum process replicate through Tungsten Replicator, so you can then query these tables on the slave nodes. The following is an example SELECT that you can use:

SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (
     master_cnt <> this_cnt
  OR master_crc <> this_crc
  OR ISNULL(master_crc) <> ISNULL(this_crc)
)
GROUP BY db, tbl;

This SELECT will return any tables that it detects are different. It won't show you the differences, or indeed how many there are; this is just a basic check.

To identify and fix the changes, you could then use pt-table-sync (also within the Percona Toolkit); however, this tool would by default assume native replication and also try to fix the problems for you. This assumption is unavoidable, therefore within a Tungsten Clustering environment we need to supply the --print switch. This won't execute the statements but will instead display them on the screen (or redirect them to a file), and from there you can gather the SQL that needs to be executed to fix the mistakes and apply it manually.

The output should be reviewed carefully to determine whether you want to manually patch the data. If there are significant differences, you may need to consider using tungsten_provision_slave to reprovision a node instead.
To use pt-table-sync, first identify the tables with differences on each slave. In this example, the SELECT statement above identified a data difference in the departments table within the employees database on db2. Execute the pt-table-sync script on the master, passing in the database name, the table name and the slave host the difference exists on:

shell> pt-table-sync --databases employees --tables departments --print h=db1,u=tungsten,p=secret,P=13306 h=db2

The first h= option should be the Master (also the node you run the script from); the second h= option relates to the slave that the difference exists on.

Executing the script will output SQL statements that can be used to patch the data, for example the above statement produces the following output:

UPDATE `employees`.`departments` SET `dept_name`='Sales' WHERE `dept_no`='d007' LIMIT 1 /*percona-toolkit src_db:employees src_tbl:departments src_dsn:P=13306,h=db1,p=...,u=tungsten dst_db:employees dst_tbl:departments dst_dsn:P=13306,h=db2,p=...,u=tungsten lock:0 transaction:1 changing_src:0 replicate:0 bidirectional:0 pid:24524 user:tungsten host:db1*/;

The UPDATE statements could now be issued directly on the slave to correct the problem.
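In practice, a safe workflow is to capture the --print output into a file, review it, and only then apply it on the affected slave. A minimal sketch, assuming the same hosts and credentials as the example above (the output file name is illustrative):

```shell
# Generate the patch SQL on the master; --print means nothing is executed yet
pt-table-sync --databases employees --tables departments --print \
    h=db1,u=tungsten,p=secret,P=13306 h=db2 > fix_departments_db2.sql

# Review the statements carefully before applying them
less fix_departments_db2.sql

# Apply the reviewed statements directly on the slave
mysql -h db2 -P 13306 -u tungsten -p employees < fix_departments_db2.sql
```

Remember to place the cluster into MAINTENANCE mode first, as discussed below.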


Remember, at the start I mentioned that one way data drift can happen is through the inadvertent execution of DML on a slave, which is strongly discouraged; however, in the examples above I contradict myself and suggest that the only way to fix the data is to do just that. Care should be taken, and ALWAYS ensure you have a FULL backup. It is recommended to place the cluster into MAINTENANCE mode and shun the slave node before making any changes, so as not to cause any potential interruption to connected clients!

Summary: The Wrap-Up

In this blog post we discussed how to check whether or not there are data discrepancies between Master/Slave nodes within a cluster.

To learn about Continuent solutions in general, check out the Continuent website and documentation.

For more information about troubleshooting data differences in Tungsten clusters, please visit the Tungsten Clustering documentation.
Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!


Want to learn more or run a POC? Contact us.

Percona XtraBackup 8.0.6 Is Now Available

Percona is glad to announce the release of Percona XtraBackup 8.0.6 on May 9, 2019. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

In version 8.0.6, Percona XtraBackup introduces the support of the MyRocks storage engine with Percona Server for MySQL version 8.0.15-6 or higher.

Percona XtraBackup 8.0.6 enables saving backups to an Amazon S3, MinIO, and Google Cloud Storage (using interoperability mode) when using xbcloud. The following example demonstrates how to use an Amazon S3 storage to make a full backup:

$ xtrabackup --backup --stream=xbstream --extra-lsndir=/tmp --target-dir=/tmp | \
  xbcloud put --storage=s3 \
  --s3-endpoint='' \
  --s3-access-key='YOUR-ACCESSKEYID' \
  --s3-secret-key='YOUR-SECRETACCESSKEY' \
  --s3-bucket='mysql_backups' --parallel=10 \
  $(date -I)-full_backup
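The reverse direction uses xbcloud get to pull the stream back and xbstream to unpack it. A hedged sketch, reusing the bucket and keys from the example above (the backup name and restore directory are illustrative, and the restore directory must already exist):

```shell
# Download the named backup from S3 and unpack it locally
xbcloud get --storage=s3 \
    --s3-endpoint='' \
    --s3-access-key='YOUR-ACCESSKEYID' \
    --s3-secret-key='YOUR-SECRETACCESSKEY' \
    --s3-bucket='mysql_backups' \
    2019-05-09-full_backup | xbstream -xv -C /var/restore
```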

All Percona software is open-source and free.

New Features
  • Amazon S3 is now supported in xbcloud. More information in PXB-1813.
  • The MyRocks storage engine is now supported with Percona XtraBackup. More information in PXB-1754.
Bugs Fixed
  • Percona XtraBackup could fail to restore the undo tablespace created during or before incremental backup. Bug fixed PXB-1780.
  • A backup could fail if log_bin_index was defined in my.cnf. Bug fixed PXB-1801.
  • When the row format was changed during the backup, xtrabackup could crash during the incremental prepare stage. Bug fixed PXB-1824.
  • During the prepare phase, PXB could freeze and never finish execution. Bug fixed PXB-1819.
  • Percona XtraBackup could crash during the prepare stage when making a backup of a host running MySQL Server v8.0.16. Bug fixed PXB-1839.

Other bugs fixed: PXB-1809, PXB-1810, PXB-1832, and PXB-1837.

Release notes with all the improvements for version 8.0.6 are available in our online documentation. Please report any bugs to the issue tracker.

Measuring MySQL Performance in Kubernetes

In my previous post Running MySQL/Percona Server in Kubernetes with a Custom Config I’ve looked at how to set up MySQL in Kubernetes to utilize system resources fully. Today I want to measure if there is any performance overhead of running MySQL in Kubernetes, and show what challenges I faced trying to measure it.

I will use a very simple CPU bound benchmark to measure MySQL performance in OLTP read-only workload:

sysbench oltp_read_only --report-interval=1 --time=1800 --threads=56 --tables=10 --table-size=10000000 --mysql-user=sbtest --mysql-password=sbtest --mysql-socket=/var/lib/mysql/mysql.sock run

The hardware is as follows:

Supermicro server

  • Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
  • 2 sockets / 28 cores / 56 threads
  • Memory: 256GB of RAM

The most interesting number there is 28 cores / 56 threads.  Please keep this in mind; we will need this later.

So let’s see the MySQL performance in the bare metal setup:

[ 607s ] thds: 56 tps: 22154.20 qps: 354451.12 (r/w/o: 310143.73/0.00/44307.39) lat (ms,95%): 2.61 err/s: 0.00 reconn/s: 0.00
[ 608s ] thds: 56 tps: 22247.80 qps: 355955.88 (r/w/o: 311461.27/0.00/44494.61) lat (ms,95%): 2.61 err/s: 0.00 reconn/s: 0.00
[ 609s ] thds: 56 tps: 21984.01 qps: 351641.13 (r/w/o: 307672.12/0.00/43969.02) lat (ms,95%): 2.66 err/s: 0.00 reconn/s: 0.00

So we can get about 22,000 tps on this server.

Now, let’s see what we can get if the same server runs a Kubernetes node and we deploy the Percona server image on this node. I will use a modified image of Percona Server 8, which already includes sysbench inside.

You can find my image here:

And I use the following deployment yaml :

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        kubernetes.io/hostname: smblade01
      volumes:
      - name: mysql-persistent-storage
        hostPath:
          path: /mnt/data/mysql
          type: Directory
      - name: config-volume
        configMap:
          name: mysql-config
          optional: true
      containers:
      - image: vadimtk/ps-8-vadim
        imagePullPolicy: Always
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: config-volume
          mountPath: /etc/my.cnf.d

The most important part here is that we deploy our image on smblade01 node (the same one I ran the bare metal benchmark).

Let’s see what kind of performance we get using this setup. The numbers I got:

[ 605s ] thds: 56 tps: 10561.88 qps: 169045.04 (r/w/o: 147921.29/0.00/21123.76) lat (ms,95%): 12.98 err/s: 0.00 reconn/s: 0.00
[ 606s ] thds: 56 tps: 10552.00 qps: 168790.98 (r/w/o: 147685.98/0.00/21105.00) lat (ms,95%): 15.83 err/s: 0.00 reconn/s: 0.00
[ 607s ] thds: 56 tps: 10566.00 qps: 169073.97 (r/w/o: 147942.97/0.00/21131.00) lat (ms,95%): 5.77 err/s: 0.00 reconn/s: 0.00
[ 608s ] thds: 56 tps: 10581.08 qps: 169359.21 (r/w/o: 148195.06/0.00/21164.15) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 609s ] thds: 56 tps: 12873.80 qps: 205861.77 (r/w/o: 180116.17/0.00/25745.60) lat (ms,95%): 5.37 err/s: 0.00 reconn/s: 0.00
[ 610s ] thds: 56 tps: 20196.89 qps: 323184.24 (r/w/o: 282789.46/0.00/40394.78) lat (ms,95%): 3.02 err/s: 0.00 reconn/s: 0.00
[ 611s ] thds: 56 tps: 18033.21 qps: 288487.30 (r/w/o: 252421.88/0.00/36065.41) lat (ms,95%): 5.28 err/s: 0.00 reconn/s: 0.00
[ 612s ] thds: 56 tps: 11444.08 qps: 183129.22 (r/w/o: 160241.06/0.00/22888.15) lat (ms,95%): 5.37 err/s: 0.00 reconn/s: 0.00
[ 613s ] thds: 56 tps: 10597.96 qps: 169511.35 (r/w/o: 148316.43/0.00/21194.92) lat (ms,95%): 5.57 err/s: 0.00 reconn/s: 0.00
[ 614s ] thds: 56 tps: 10566.00 qps: 169103.93 (r/w/o: 147969.94/0.00/21133.99) lat (ms,95%): 5.67 err/s: 0.00 reconn/s: 0.00
[ 615s ] thds: 56 tps: 10640.07 qps: 170227.13 (r/w/o: 148948.99/0.00/21278.14) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 616s ] thds: 56 tps: 10579.04 qps: 169264.66 (r/w/o: 148106.58/0.00/21158.08) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00

You can see the numbers vary a lot, from 10,550 tps to 20,196 tps, with most of the time spent in the 10,000 tps range.
That’s quite disappointing. Basically, we lost half of the throughput by moving to the Kubernetes node.

But don’t panic, we can improve this. But first, we need to understand why this happens.

The answer lies in how Kubernetes applies Quality of Service to pods. By default (if CPU or memory limits are not defined) the QoS class is BestEffort, which leads to the results we see above. To allocate all CPU resources, we need to make sure the QoS class is Guaranteed. For this, we add the following to the container definition:

resources:
  requests:
    cpu: "55500m"
    memory: "150Gi"
  limits:
    cpu: "55500m"
    memory: "150Gi"

These are somewhat funny-looking lines for defining CPU limits. As you remember, we have 56 threads, so initially I tried to set the limit to cpu: "56", but that did not work: Kubernetes was not able to start the pod, failing with the error Insufficient CPU. I guess Kubernetes reserves a small share of CPU for its own needs.

The line cpu: "55500m" works, and means we allocate 55.5 CPUs to Percona Server.
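To see how much CPU Kubernetes will actually hand out to pods on a node, you can compare the node's Capacity with its Allocatable figures. A small sketch (the node name follows the example in this post):

```shell
# Capacity is the hardware total; Allocatable is what pods can request
# after Kubernetes reserves resources for itself
kubectl describe node smblade01 | grep -A 6 -E '^(Capacity|Allocatable):'
```

The gap between the two numbers explains why cpu: "56" could not be satisfied.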

Let’s see what results we can have with Guaranteed QoS:

[ 883s ] thds: 56 tps: 20320.06 qps: 325145.96 (r/w/o: 284504.84/0.00/40641.12) lat (ms,95%): 2.81 err/s: 0.00 reconn/s: 0.00
[ 884s ] thds: 56 tps: 20908.89 qps: 334587.21 (r/w/o: 292769.43/0.00/41817.78) lat (ms,95%): 2.81 err/s: 0.00 reconn/s: 0.00
[ 885s ] thds: 56 tps: 20529.03 qps: 328459.46 (r/w/o: 287402.40/0.00/41057.06) lat (ms,95%): 2.81 err/s: 0.00 reconn/s: 0.00
[ 886s ] thds: 56 tps: 17567.75 qps: 281051.03 (r/w/o: 245914.53/0.00/35136.50) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 887s ] thds: 56 tps: 18036.82 qps: 288509.07 (r/w/o: 252437.44/0.00/36071.63) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 888s ] thds: 56 tps: 18398.23 qps: 294399.67 (r/w/o: 257603.21/0.00/36796.46) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 889s ] thds: 56 tps: 18402.90 qps: 294484.45 (r/w/o: 257677.65/0.00/36806.81) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 890s ] thds: 56 tps: 19428.12 qps: 310787.86 (r/w/o: 271934.63/0.00/38853.23) lat (ms,95%): 5.37 err/s: 0.00 reconn/s: 0.00
[ 891s ] thds: 56 tps: 19848.69 qps: 317646.11 (r/w/o: 277947.73/0.00/39698.39) lat (ms,95%): 5.28 err/s: 0.00 reconn/s: 0.00
[ 892s ] thds: 56 tps: 20457.28 qps: 327333.49 (r/w/o: 286417.93/0.00/40915.56) lat (ms,95%): 2.86 err/s: 0.00 reconn/s: 0.00

This is much better (mostly ranging around 20,000 tps), but we still do not get to 22,000 tps.

I do not have a full explanation of why there is still a 10% performance loss, but it might be related to this issue. I see there is work in progress to improve Guaranteed QoS performance, but it has not been merged into mainline releases yet. Hopefully, it will be in one of the next releases.


  • Out of the box, you may see quite bad performance when deploying MySQL in a Kubernetes pod.
  • To improve your experience you need to make sure you use Guaranteed QoS. Unfortunately, Kubernetes does not make this easy. You need to manually set the number of CPU threads, which is not always obvious if you use dynamic cloud instances.
  • With Guaranteed QoS there is still a performance overhead of 10%, but I guess this is the cost we have to accept at the moment.

OpenSSL FIPS support in MySQL 8.0

Many products use OpenSSL, but for the most part, without choosing to incorporate the OpenSSL FIPS (US Federal Information Processing Standards) module. However, it may be that running OpenSSL in FIPS mode is something you should seriously consider and implement.…


Meet Codership, the makers of Galera Cluster at Percona Live Austin 2019

After a short hiatus, we hope to meet and see you at Percona Live Austin 2019 (28-30 May 2019), as we have sponsored the event and have a booth in the expo hall, in addition to having some talks.

Our CEO and co-founder Seppo Jaakola will give a talk titled Galera Cluster New Features, happening in room Texas 5 on Wednesday at 11.55AM – 12.45PM. It will be a very interesting talk, as Galera Cluster 4 features have made their way into MariaDB Server 10.4, and you can expect to hear a little more about when Codership, the engineers and makers of Galera Cluster, will provide a MySQL version.

If you happen to sign up for tutorials, do not miss Expert MariaDB: Harness the Strengths of MariaDB Server by Colin Charles; part of the tutorial involves setting up a bootstrapped three-node MariaDB Galera Cluster 10.4 with the Galera 4 replication library, and learning about the other unique features it has.

For those interested in Galera Cluster, don’t forget there are a few other talks about Galera Cluster in the program, including one by Yahoo! Japan titled Immutable Database Infrastructure with Percona XtraDB Cluster which should be a great story about how they deploy a lot (think hundreds!) of Galera Cluster nodes in production.

Our booth will be manned by the wonderful Larisa Urse, and we want to talk to you all about what you need from Galera Cluster: roadmaps, plans, how you use Percona XtraDB Cluster (PXC) (based on Galera Cluster), and more. We will have great conversations and a whole bunch more planned for the conference, including participating in the Passport Program: drop by our booth, talk to us, get that coveted stamp, and you will be in the running to win a pair of Bose noise-cancelling headphones. Don’t forget that we have great support and consulting services too, so come talk to us to find out more!

Updating My AWS CodeBuild Project from Ubuntu 14.04 to 18.04

I received a cheery notification email (twice) from AWS that they’re deprecating Ubuntu 14.04 on CodeBuild:

Ubuntu 14.04 EOL Notification for AWS CodeBuild


As of April 2019, Ubuntu 14.04 has reached end of life (EOL). We will be removing it as an option in the CodeBuild console on May 15, 2019. Existing CodeBuild projects configured to use Ubuntu 14.04 will continue to work after this date but we highly recommend that customers update their projects to use “aws/codebuild/standard” build images, which are based on Ubuntu 18.04.

Note that all offered programming language runtimes are already included in the Ubuntu 18.04 image, as described in Docker Images Provided by CodeBuild. Therefore, it is no longer necessary to select runtimes separately.

Sincerely, Amazon Web Services

It’s nice that the old ones will continue to work; however, they gave only 9 days’ notice that the option won’t be available in the console any more. This will certainly break some workflows where projects are built and destroyed regularly. They don’t say if creation will be removed from the API too, but if it is, then some workflows with automated creation through e.g. CloudFormation will also break.

Anyway, on with the story. Checking the Personal Health Dashboard (PHD) link at the bottom, I saw the notification there too:

Personal Health Dashboard doesn’t email you automatically for every event. Thankfully, someone at AWS checked the box for “this notification is actually important so email the suckers.”

(If you want email notifications for everything on PHD, it’s not so easy as ticking a box. You have to roll up your sleeves and set it up via CloudWatch Events!)

I have just one CodeBuild project on my AWS account, and it’s running a script checking for Python package updates on my open source repositories. It’s a hack that it’s on CodeBuild in the first place, it’s just because it takes too long to fit within AWS Lambda’s limits.

After reading the linked guide “Docker Images Provided by CodeBuild”, I discovered all I needed to do was change to aws/codebuild/standard:2.0:

My first fix didn’t work when I ran it. I saw this error in the logs:

[Container] 2019/05/08 10:56:08 Waiting for agent ping
[Container] 2019/05/08 10:56:10 Waiting for DOWNLOAD_SOURCE
[Container] 2019/05/08 10:56:11 Phase is DOWNLOAD_SOURCE
[Container] 2019/05/08 10:56:11 CODEBUILD_SRC_DIR=/codebuild/output/src870287925/src
[Container] 2019/05/08 10:56:11 YAML location is /codebuild/readonly/buildspec.yml
[Container] 2019/05/08 10:56:11 Processing environment variables
[Container] 2019/05/08 10:56:11 Moving to directory /codebuild/output/src870287925/src
[Container] 2019/05/08 10:56:11 Phase complete: DOWNLOAD_SOURCE State: FAILED
[Container] 2019/05/08 10:56:11 Phase context status code: YAML_FILE_ERROR Message: This build image requires selecting at least one runtime version.

Sad face!

The new image works differently, and AWS didn’t describe this in the announcement. The runtimes are “already included” but you still need to “install” them. You do this with the runtime-versions key in the install phase of the buildspec - I needed to install Python 3.7 to run my script.

I changed my AWS::CodeBuild::Project resource in the Environment.Image and Source.BuildSpec properties:

PipCheckerBuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: pip-checker
    Description: Checks pip
    Artifacts:
      Type: NO_ARTIFACTS
    ServiceRole: !ImportValue LambdaLogAndEmailIAMRoleArn
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/standard:2.0
    Source:
      Type: S3
      Location: !Sub ${S3Bucket}/${S3Key}
      BuildSpec: !Sub |
        version: 0.2
        phases:
          install:
            runtime-versions:
              python: 3.7
            commands:
            - pip3 install -r requirements.txt
            - apt-get update
            - apt-get install -y mysql-client
          build:
            commands:
            - python
    TimeoutInMinutes: 60

That is, I applied this diff:

@@ -80,7 +80,7 @@ Resources:
       Environment:
         Type: LINUX_CONTAINER
         ComputeType: BUILD_GENERAL1_SMALL
-        Image: aws/codebuild/python:3.7.1
+        Image: aws/codebuild/standard:2.0
       Source:
         Type: S3
         Location: !Sub ${S3Bucket}/${S3Key}
@@ -88,6 +88,8 @@ Resources:
           version: 0.2
           phases:
             install:
+              runtime-versions:
+                python: 3.7
               commands:
               - pip3 install -r requirements.txt
               - apt-get update

I deployed this and tested the script; it worked the second time.



Hope this helps you with this or similar CodeBuild upgrades,


My Open Source projects that still live on

I have recently created a new library called libMariaS3 for a couple of teams at MariaDB; it provides a simple C API for Amazon’s S3. This was created because we needed a library that could link with GPLv2 software, and Amazon’s own library is Apache 2.0 licensed, which is incompatible.

It is not a perfect code base because I had a very short amount of time to get the first release out but it was a very fun project to work on. It led me to take a quick look this morning at a couple of other things I have created in the past to see where they are at today.


Around 5 years ago I worked for HP’s Advanced Technology Group and I worked on several different Open Source projects during that time. One of those projects was called libAttachSQL which was a very fast, asynchronous, Apache 2.0 licensed C connector for MySQL servers.

This project wasn’t very successful but the ideas were born out of another project I created which was almost a complete re-write of libdrizzle.

libdrizzle was a BSD licensed high-performance MySQL compatible connector which was part of the Drizzle project (Drizzle itself being a microkernel fork of MySQL). When development of Drizzle stalled I took on developing libdrizzle under a new name “libdrizzle-redux”.

Work commitments got in the way and development stalled, but I was contacted one day by someone who was very interested in the work I did on that project. I helped them set up a fork and gave them the general direction I was going with the project. Today libdrizzle-redux is still developed by a company called Sociomantic, and the GitHub page can be found here.

Jenkins Job Builder

In early 2012 I was a member of the core OpenStack Infrastructure team. Jenkins was used as the CI platform then and we needed a way to make minor changes to a few hundred Jenkins jobs at a time. Every time we did this something broke and releases were stalled as a result. We needed a more automated way of making changes to Jenkins jobs and I created something which was eventually called “Jenkins Job Builder” to solve this. It had a different name originally but was changed very early on.

I soon moved on to lead another project in OpenStack which was HP’s LBaaS project called “Libra”. This unfortunately didn’t live on but a lot of the design ideas can be seen in the Octavia project.

As for Jenkins Job Builder? I’m told that many companies use it today and I’ve found it in the repositories of Ubuntu and Fedora. Although I suspect a lot has changed since my original code for the project.

Honorable Mention – mydumper

One honourable mention I can think of is “mydumper”. I did not create the project, but I wrote a large amount of the code and led the maintenance for a while. The lead developer is now Max Bubenick, and it appears to be still maintained.

I get a lot of people contacting me asking me to write certain patches to mydumper. Maybe one day I will, unfortunately work commitments often get in the way of Open Source projects I would like to work on.

How to Execute and Manage MySQL Backups for Oracle DBA’s

Migrating from an Oracle database to open source can bring a number of benefits. The lower cost of ownership is tempting and pushes a lot of companies to migrate. At the same time, DevOps, SysOps and DBAs need to keep tight SLAs to address business needs.

One of the key concerns when you plan a data migration to another database, especially open source, is how to avoid data loss. It’s not too far-fetched that someone accidentally deletes part of the database, forgets to include a WHERE clause in a DELETE query, or runs DROP TABLE by mistake. The question is how to recover from such situations.

Things like that may and will happen; it is inevitable, but the impact can be disastrous. As somebody said, “It’s all fun and games until backup fails”. The most valuable asset cannot be compromised. Period.

The fear of the unknown is natural if you are not familiar with a new technology. In fact, the knowledge of Oracle database solutions, their reliability, and the great features which Oracle Recovery Manager (RMAN) offers can discourage you or your team from migrating to a new database system. We like to use things we know, so why migrate when our current solution works? Who knows how many projects were put on hold because the team or an individual was not convinced about the new technology?

Logical Backups (exp/imp, expdp/impdp)

According to the MySQL documentation, a logical backup is “a backup that reproduces table structure and data, without copying the actual data files.” This definition can apply to both the MySQL and Oracle worlds. The same applies to the “why” and “when” you will use a logical backup.

Logical backups are a good option when we know what data will be modified, so we can back up only the part we need. This simplifies a potential restore in terms of time and complexity. They are also very useful if we need to move a portion of a small/medium-size data set and copy it back to another system (often on a different database version). Oracle uses export utilities like exp and expdp to read database data and then export it into a file at the operating system level. You can then import the data back into a database using the import utilities imp or impdp.
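For reference, a typical Data Pump export/import pair on the Oracle side looks like this (the schema name, directory object and file names are illustrative):

```shell
# Export the HR schema to a dump file via a server-side directory object
expdp system/password schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_exp.log

# Import it back, possibly into another database
impdp system/password schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_imp.log
```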

The Oracle export utilities give us a lot of options to choose which data needs to be exported. You will definitely not find the same number of features with MySQL, but most needs are covered and the rest can be handled with additional scripting or external tools (check mydumper).

MySQL comes with a package of tools that offer fairly basic functionality. They are mysqldump, mysqlpump (a modern version of mysqldump with native support for parallelization), and the MySQL client, which can be used to extract data to a flat file.

Below you can find several examples of how to use them:

Backup database structure only

mysqldump --no-data -h localhost -u root -ppassword mydatabase > mydatabase_backup.sql

Backup table structure

mysqldump --no-data --single-transaction -h localhost -u root -ppassword mydatabase table1 table2 > mydatabase_backup.sql

Backup specific rows

mysqldump -h localhost --single-transaction -u root -ppassword mydatabase table_name --where="date_created='2019-05-07'" > table_with_specific_rows_dump.sql

Importing the Table

mysql -u username -p -D dbname < tableName.sql

The above command will stop load if an error occurs.

If you load data directly from the mysql client, the errors will be ignored and the client will proceed

mysql> source tableName.sql

To log output, you need to use

mysql> tee import_tableName.log

You can find all flags explained in the MySQL documentation.

If you plan to use logical backup across different database versions, make sure you have the right collation setup. The following statement can be used to check the default character set and collation for a given database:

USE mydatabase; SELECT @@character_set_database, @@collation_database;

Another way to retrieve the collation_database system variable is to use the SHOW VARIABLES statement.
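The same check can also be run non-interactively from the shell, which is handy in migration scripts (the database name follows the example above):

```shell
# Print the default collation of the target database
mysql -u root -p mydatabase -e "SHOW VARIABLES LIKE 'collation_database';"
```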


Because of the limitations of mysqldump, we often have to modify the output. An example of such a modification is the need to remove some lines. Fortunately, we have the flexibility of viewing and modifying the output with standard text tools before restoring. Tools like awk, grep and sed can become your friends. Below is a simple example of how to remove the first three lines from the dump file:

sed -i '1,3d' file.txt

The possibilities are endless. This is something that we will not find with Oracle as data is written in binary format.

There are a few things you need to consider when you execute a logical MySQL backup. Among the main limitations are poor support for parallelism and object locking.

Logical backup considerations

When such a backup is executed, the following steps will be performed:

  • LOCK TABLE table.
  • SELECT * FROM table INTO OUTFILE temporary file.
  • Write the contents of the temporary file to the end of the dump file.
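The SELECT ... INTO OUTFILE step above becomes visible when mysqldump is run in --tab mode, which writes a .sql file with the table structure and a .txt file with the data for each table. A minimal sketch (the output directory must be writable by the server and permitted by secure_file_priv):

```shell
# Per-table structure (.sql) and data (.txt) files land in the target directory
mysqldump --tab=/var/lib/mysql-files --single-transaction \
    -h localhost -u root -ppassword mydatabase
```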

By default mysqldump doesn’t include routines and events in its output - you have to explicitly set --routines and --events flags.

Another important consideration is the engine that you use to store your data. Hopefully, these days most production systems use the ACID-compliant InnoDB engine. The older MyISAM engine had to lock all tables to ensure consistency. This is when FLUSH TABLES WITH READ LOCK was executed. Unfortunately, it is the only way to guarantee a consistent snapshot of MyISAM tables while the MySQL server is running. This makes the MySQL server read-only until UNLOCK TABLES is executed.

For tables using the InnoDB storage engine, it is recommended to use the --single-transaction option. MySQL then produces a checkpoint that allows the dump to capture all data prior to the checkpoint while receiving incoming changes.

The --single-transaction option of mysqldump does not do FLUSH TABLES WITH READ LOCK. It causes mysqldump to set up a REPEATABLE READ transaction for all tables being dumped.

A mysqldump backup is much slower than the Oracle tools exp and expdp. mysqldump is a single-threaded tool and this is its most significant drawback: performance is OK for small databases, but it quickly becomes unacceptable if the data set grows to tens of gigabytes.
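mysqlpump, mentioned earlier, works around the single-thread limitation with parallel dump threads. A minimal sketch (the thread count and database name are illustrative):

```shell
# Dump using 4 parallel threads instead of mysqldump's single thread
mysqlpump --default-parallelism=4 -h localhost -u root -ppassword \
    --databases mydatabase > mydatabase_parallel_dump.sql
```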

For each database schema and table, a dump performs these steps:
    • SHOW CREATE TABLE table.
    • SELECT * FROM table INTO OUTFILE temporary file.
    • Write the contents of the temporary file to the end of the dump file.
Physical backups (RMAN)

Fortunately, most of the limitations of logical backups can be solved with the Percona XtraBackup tool. Percona XtraBackup is the most popular open-source MySQL/MariaDB hot backup software, performing non-blocking backups for InnoDB and XtraDB databases. It falls into the physical backup category, which consists of exact copies of the MySQL data directory and the files underneath it.

It’s in the same category of tools as Oracle RMAN. RMAN comes as part of the database software; XtraBackup needs to be downloaded separately. XtraBackup is available as rpm and deb packages and supports only Linux platforms. The installation is very simple:

$ wget
$ yum localinstall percona-XtraBackup-80-8.0.4-1.el7.x86_64.rpm

XtraBackup does not lock your database during the backup process. For large databases (100+ GB), it provides much better restoration time as compared to mysqldump. The restoration process involves preparing MySQL data from the backup files, before replacing or switching it with the current data directory on the target node.

Percona XtraBackup works by remembering the log sequence number (LSN) when it starts and then copying away the data files to another location. Copying data takes some time, and if the files are changing, they reflect the state of the database at different points in time. At the same time, XtraBackup runs a background process that keeps an eye on the transaction log (aka redo log) files, and copies changes from it. This has to be done continually because the transaction logs are written in a round-robin fashion, and can be reused after a while. XtraBackup needs the transaction log records for every change to the data files since it began execution.

Once XtraBackup is installed, you can finally perform your first physical backup:

xtrabackup --user=root --password=PASSWORD --backup --target-dir=/u01/backups/
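Because XtraBackup tracks the log sequence number, it can also take incremental backups relative to an earlier one, copying only the pages changed since. A hedged sketch, reusing the backup directory from the example above (the incremental target directory is illustrative):

```shell
# Incremental backup: copies only pages changed since the full backup in /u01/backups/
xtrabackup --user=root --password=PASSWORD --backup \
    --target-dir=/u01/backups/inc1 --incremental-basedir=/u01/backups/
```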

Another useful technique MySQL administrators use is streaming the backup to another server. Such a stream can be performed with the xbstream tool, as in the example below.

Start a listener on the external server on the preferred port (in this example, 1984):

nc -l 1984 | pigz -cd - | pv | xbstream -x -C /u01/backups

Run the backup and transfer it to the external host:

innobackupex --user=root --password=PASSWORD --stream=xbstream /var/tmp | pigz | pv | nc 1984

As you may notice, the restore process is divided into two major steps (similar to Oracle): prepare (apply log) and restore (copy back).

innobackupex --apply-log --use-memory=[values in MB or GB] /var/lib/data
xtrabackup --copy-back --target-dir=/var/lib/data

The difference is that we can only recover up to the point when the backup was taken. To apply changes made after the backup, we need to do it manually.

Point in Time Restore (RMAN recovery)

In Oracle, RMAN handles all the steps when we perform recovery of the database. It can recover either to an SCN, to a point in time, or based on the backup data set.

RMAN> run {
  allocate channel dev1 type disk;
  set until time "to_date('2019-05-07:00:00:00', 'yyyy-mm-dd:hh24:mi:ss')";
  restore database;
  recover database;
}

In MySQL, we need another tool, mysqlbinlog, to extract data from the binary logs (similar to Oracle’s archive logs). mysqlbinlog can read the binary logs and convert them to SQL files.

The basic procedure would be:

  • Restore the full backup
  • Restore any incremental backups
  • Identify the start and end points for the recovery (for example, from the end of the backup to the position just before the unfortunate DROP TABLE)
  • Convert the necessary binlogs to SQL and apply the newly created SQL files in the proper sequence - make sure to run a single mysqlbinlog command:
    mysqlbinlog binlog.000001 binlog.000002 | mysql -u root -p
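The "stop just before the bad statement" part of the last step can be illustrated with a toy Python stand-in for mysqlbinlog's --stop-position option (file names, positions, and statements below are all made up):

```python
# Toy stand-in for `mysqlbinlog --stop-position`: replay binlog events in
# (file, position) order, stopping just before the accidental DROP TABLE.
events = [
    ("binlog.000001", 154, "INSERT INTO t VALUES (1)"),
    ("binlog.000002", 154, "INSERT INTO t VALUES (2)"),
    ("binlog.000002", 310, "DROP TABLE t"),   # the statement we must NOT replay
]
stop = ("binlog.000002", 310)                 # position of the bad event
replay = [sql for (log, pos, sql) in events if (log, pos) < stop]
print(replay)  # both INSERTs, in order, without the DROP
```

In practice you would find the real stop position with mysqlbinlog itself (or SHOW BINLOG EVENTS) and pass it via --stop-position, feeding the result to the mysql client as in the bullet above.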
Encrypt Backups (Oracle Wallet)

Percona XtraBackup can be used to encrypt or decrypt local or streaming backups (with the xbstream option) to add another layer of protection to the backups. Both the --encrypt-key option and the --encrypt-key-file option can be used to specify the encryption key. An encryption key can be generated with a command like:

$ openssl rand -base64 24
bWuYY6FxIPp3Vg5EDWAxoXlmEFqxUqz1

This value then can be used as the encryption key. Example of the innobackupex command using the --encrypt-key:

$ innobackupex --encrypt=AES256 --encrypt-key="bWuYY6FxIPp3Vg5EDWAxoXlmEFqxUqz1" /storage/backups/encrypted
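If openssl is not at hand, an equivalent key can be generated with Python's standard library (this mirrors `openssl rand -base64 24`; 24 random bytes encode to a 32-character base64 string):

```python
import base64
import os

# 24 random bytes, base64-encoded: the same shape of key as the
# `openssl rand -base64 24` example above (32 characters, no padding).
key = base64.b64encode(os.urandom(24)).decode("ascii")
print(key)
```

The resulting string can be passed to --encrypt-key exactly like the openssl-generated one.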

To decrypt, simply use the --decrypt option with appropriate --encrypt-key:

$ innobackupex --decrypt=AES256 --encrypt-key="bWuYY6FxIPp3Vg5EDWAxoXlmEFqxUqz1" /storage/backups/encrypted/2019-05-08_11-10-09/

Backup policies

There is no built-in backup policy functionality in MySQL/MariaDB or in Percona's tools. If you would like to manage your MySQL logical or physical backups, you can use ClusterControl for that.

ClusterControl is the all-inclusive open source database management system for users with mixed environments. It provides advanced backup management functionality for MySQL or MariaDB.

With ClusterControl you can:

  • Create backup policies
  • Monitor backup status, executions, and servers without backups
  • Execute backups and restores (including a point in time recovery)
  • Control backup retention
  • Save backups in cloud storage
  • Validate backups (full test with the restore on the standalone server)
  • Encrypt backups
  • Compress backups
  • And many others
ClusterControl: Backup Management

Keep backups in the cloud

Organizations have historically deployed tape backup solutions as a means to protect data from failures. However, the emergence of public cloud computing has also enabled new models with a lower TCO than what has traditionally been available. It makes no business sense to abstract the cost of a DR solution from its design, so organizations have to implement the right level of protection at the lowest possible cost.

The cloud has changed the data backup industry. Because of its affordable price point, smaller businesses now have an offsite solution that backs up all of their data (and yes, make sure it is encrypted). Neither Oracle nor MySQL offers a built-in cloud storage solution. Instead, you can use the tools provided by the cloud vendors. An example here could be AWS S3.

aws s3 cp severalnines.sql s3://severalnine-sbucket/mysql_backups

Conclusion

There are a number of ways to back up your database, but it is important to review business needs before deciding on a backup strategy. As you can see, there are many similarities between MySQL and Oracle backups, which hopefully can help you meet your SLAs.

Always make sure that you practice these commands, not only when you are new to the technology but also before the DBMS becomes unusable, so that you know what to do when it does.

If you would like to learn more about MySQL please check our whitepaper The DevOps Guide to Database Backups for MySQL and MariaDB.

Tags:  MySQL backup management oracle

Not enforcing SSL on CloudSQL, really !

When creating a MySQL CloudSQL instance, SSL connections are not enforced by default, and you get the below in the Connections tab of the Google Cloud Platform console. Is this a problem? Some people might think not, but I do not agree with them. And if I am writing this post, you can probably guess that there is a lot to say about this subject. Read on for the details.

SSH Differences Between Staging and INI Configuration Methods

The Question

Recently, a customer asked us:

If we move to using the INI configuration method instead of staging, would password-less SSH still be required?

The Answer

The answer is both "Yes" and "No".

No, for installation and updates/upgrades specifically. Since INI-based configurations force the tpm command to act upon the local host only for installs and updates/upgrades, password-less SSH is not required.

Yes, because there are certain commands that do rely upon password-less SSH to function. These are:

  • tungsten_provision_slave
  • multi_trepctl
  • tpm diag (pre-6.0.5)
  • tpm diag --hosts (>= 6.0.5)
  • Any tpm-based backup and restore operations that involve a remote node
Summary: The Wrap-Up

In this blog post we discussed the SSH differences between the Staging and INI configuration methods.

To learn about Continuent solutions in general, check out

The Library

Please read the docs!

For more information about monitoring Tungsten clusters, please visit

Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit

Want to learn more or run a POC? Contact us.

ProxySQL 2.0.3 and updated proxysql-admin tool

ProxySQL 2.0.3, released by ProxySQL, is now available for download in the Percona Repository along with an updated version of Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

The ProxySQL 2.0.3 source and binary packages available from the Percona download page for ProxySQL include ProxySQL Admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 2.0.3 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

With ProxySQL 2.0.3, the proxysql-admin tool now uses the native ProxySQL support for Percona XtraDB Cluster and does not require custom bash scripts to keep track of PXC status.  As a result, proxysql_galera_checker and proxysql_node_monitor have been removed.

  • The proxysql-admin tool is MySQL 8.0 compatible
Added Features
  • New option --use-ssl to use SSL for connections between ProxySQL and the backend database servers
  • New option --max-transactions-behind to determine the maximum number of writesets that can be queued before the node is SHUNNED to avoid stale reads. The default value is 100
  • New operation --update-cluster to update the cluster membership by adding server nodes as found. (Note that nodes are added but not removed).  The --writer-hg option may be used to specify which galera hostgroup to update. The --remove-all-servers option instructs to remove all servers from the mysql_servers table before updating the cluster.
  • Hostgroups can now be specified on the command-line: --writer-hg, --reader-hg, --backup-writer-hg, and --offline-hg.
    Previously, these host groups were only read from the configuration file.
  • The --enable and --update-cluster options used simultaneously have special meaning. If the cluster has not been enabled, then --enable is run.  If the cluster has already been enabled, then --update-cluster is run.
  • New command --is-enabled to see if a cluster has been enabled. This command checks for the existence of a row in the mysql_galera_hostgroups table.  The --writer-hg option may be used to specify the writer hostgroup used to search the mysql_galera_hostgroups table.
  • New command --status to display galera hostgroup information. This command lists all rows in the current mysql_galera_hostgroups table as well as all servers that belong to these hostgroups.  With the --writer-hg option, only the information for the galera hostgroup with that writer hostgroup is displayed.
Changed Features
  • Setting --node-check-interval now changes the ProxySQL global variable mysql-monitor_galera_healthcheck_interval
    Note that this is a global variable, not a per-cluster variable.
  • The option --write-node now takes only a single address as a parameter. In the singlewrite mode we only set the weight if --write-node specifies address:port.  A priority list of addresses is no longer accepted.
  • The option --writers-as-readers option now accepts a different set of values. Due to changes in the behavior of ProxySQL between version 1.4 and version 2.0 related to Galera support, the values of --writers-as-readers have been changed.  This option now accepts the following values: yes, no, and backup.
    yes: writers, backup-writers, and read-only nodes can act as readers.
    no: only read-only nodes can act as readers.
    backup: only backup-writers can act as readers.
  • The commands --syncusers, --sync-multi-cluster-users, --adduser, and --disable can now use the --writer-hg option.
  • The command --disable removes all users associated with the galera cluster hostgroups. Previously, this command only removed the users with the CLUSTER_APP_USERNAME.
  • The command --disable now accepts the --writer-hg option to disable the Galera cluster associated with that hostgroup overriding the value specified in the configuration file.
Removed Features
  • Asynchronous slave reader support has been removed: the --include-slaves option is not supported.
  • A list of nodes in the priority order is no longer supported. Only a single node is supported at this time.
  • Since the galera_proxysql_checker and galera_node_monitor scripts are no longer run in the scheduler, automatic cluster membership updates are not supported.
  • Checking the pxc_maint_mode variable is no longer supported
  • Using desynced nodes if no other nodes are available is no longer supported.
  • The server status is no longer maintained in the mysql_servers table.
  • With --writers-as-readers=backup, read-only nodes are not allowed.
    This is a limitation of ProxySQL 2.0. Note that backup is the default value of --writers-as-readers when --mode=singlewrite

ProxySQL is available under the GPLv3 open source license.