Planet MySQL

Window Functions with Unusual Boundaries

Somebody on Freenode wanted this:

Source                  Result
+----+------+           +----+------+
| id | x    |           | id | c    |
+----+------+           +----+------+
|  1 |    1 |           |  1 |    2 |
|  2 |    1 |           |  2 |    2 |
|  3 | NULL |           |  3 | NULL |
|  4 | NULL |    ->     |  4 | NULL |
|  5 |    1 |           |  5 |    1 |
|  6 | NULL |           |  6 | NULL |
|  7 |    1 |           |  7 |    3 |
|  9 |    1 |           |  9 |    3 |
| 10 |    1 |           | 10 |    3 |
+----+------+           +----+------+

The result uses the NULL values in x as boundaries of windows, and counts the number of rows within each window. I don't know why anyone wants such a thing; it is not ours to reason why...

Anyway, the point is that you can use arbitrary expressions, even subqueries, to define your window partitions.

SELECT id,
       -- Count of rows in windows bound by NULL values in x
       IF(x IS NULL, NULL,
          COUNT(*) OVER (PARTITION BY
             -- Partition by the number of earlier NULLs
             (SELECT COUNT(*) FROM t AS t1
               WHERE t1.x IS NULL AND t1.id < t.id),
             -- ...and by whether the row itself is NULL
             x IS NULL)
       ) AS c
  FROM t;

How does it work?

First, let's see what that subquery is all about:

SELECT id, x,
       (SELECT COUNT(*) FROM t AS t1
         WHERE t1.x IS NULL AND t1.id < t.id) AS p1
  FROM t;

By counting the number of "NULL rows" appearing earlier in the table, we get a value we can use to find the starting point of each window.

That alone goes one row too far, though - the "NULL row" which should end each window gets included in the window. However, you can use multiple expressions to partition windows.

SELECT id, x,
       (SELECT COUNT(*) FROM t AS t1
         WHERE t1.x IS NULL AND t1.id < t.id) AS p1,
       x IS NULL AS p2
  FROM t;

The combination of (p1, p2) neatly partitions the rows, so each "NULL row" is by itself, and the non-NULL rows are grouped together.
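The combined logic is easy to check outside the database. Here is a plain-Python sketch of the partitioning idea (an illustration only, not how MySQL executes the query):

```python
# Simulate COUNT(*) OVER (PARTITION BY p1, p2) for the sample table.
from collections import Counter

rows = [(1, 1), (2, 1), (3, None), (4, None), (5, 1),
        (6, None), (7, 1), (9, 1), (10, 1)]

def null_bounded_counts(rows):
    # p1: how many NULL rows appear earlier in the table
    # p2: whether this row is itself a NULL row
    keys = [(sum(1 for j, xj in rows if xj is None and j < i), x is None)
            for i, x in rows]
    sizes = Counter(keys)                     # COUNT(*) OVER (PARTITION BY p1, p2)
    return [None if x is None else sizes[k]   # IF(x IS NULL, NULL, ...)
            for (i, x), k in zip(rows, keys)]

print(null_bounded_counts(rows))
# [2, 2, None, None, 1, None, 3, 3, 3]
```

The printed counts match the Result column in the table above.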

Data Masking in MySQL

If you’re here, that probably means you’re about to acquire some sensitive data to take care of. Or that you’ve already acquired it and are protecting it, and you want to see how MySQL Enterprise Data Masking and De-Identification features can help you do it in an easier, better and more efficient manner.…

My Slides about MySQL 8.0 Performance from #OOW18 and #PerconaLIVE 2018

As promised, here are the slides about MySQL 8.0 Performance from my talks at Oracle Open World 2018 and Percona LIVE Europe 2018 -- all combined into a single PDF file to give you an overall summary of what we have already completed, where we're going in the next updates within our "continuous release", and what kind of performance issues we're digging into right now.. ;-))
Also, I'd like to say that both conferences were simply awesome, and it's great to see the constantly growing skill level of all the MySQL users attending these conferences! -- hope you'll have even more fun with MySQL 8.0 now ;-))

Percona Live Europe 2018: What’s Up for Wednesday

Welcome to Wednesday at Percona Live Europe 2018! Today is the final day! Check out all of the excellent sessions to attend.

Please see the important updates below.

Download the Conference App

If you haven’t already downloaded the app, go to the app store and download the official Percona Live App! You can view the schedule, be alerted for any important updates, create your own personalized schedule, rate the talks and interact with fellow attendees.

For Apple: Download here
For Android: Download here

Rate the talks!

We want to encourage all attendees to rate the talks which you have attended. Please take a few moments to rate the talks which you attended on the Percona Live App.

Registration and Badge Pick Up

Registration is open from 8 am.

AWS Cloud Track

Join the featured cloud track today, where AWS will be presenting A Deep Dive on Amazon Aurora, Zero to Serverless in 60 Seconds, and Top 10 Mistakes When Migrating From Oracle to PostgreSQL, to name a few! These sessions will run in Wallstreet 2!


Keynotes

Keynotes begin promptly at 9:15 am. Please be seated and ready! Arrive early to secure your spot! Keynotes will take place in Dow Jones next to the expo area.

Expo Opening Hours

Have you visited the expo area yet? The expo will be open from 8:00 am to 4:30 pm today.

Conference Slides

Conference slides and presentations will be available to view after the conference and will be located on the Percona Live Europe website.

Breaks and Lunch

Coffee Breaks: The morning break is 10:50 am – 11:20 am and the afternoon break 4:10 pm – 4:30 pm (Conference Floor Foyer)
Lunch: 1:10 pm – 2:10 pm. Lunch will be served on the conference floor and in the Showroom and Gaia restaurant on the lobby level.

With Thanks to Our Sponsors!

We hope you have enjoyed the conference! Save the Date!

Percona Live 2019 will happen in Austin, Texas. Save the dates in your diary for May 28-30, 2019!

The conference will take place just after Memorial Day at The Hyatt Regency, Austin on the shores of Lady Bird Lake. This is also an ideal central location for those who wish to extend their stay and explore what Austin has to offer! Call for papers, ticket sales and sponsorship opportunities will be announced soon, so stay tuned!

Common Table Expressions: A Shocking Difference Between MySQL and MariaDB

Common Table Expressions (CTEs) are a very useful tool and frankly a big improvement on sub-queries. But there are differences in how they are implemented in MySQL and MariaDB. That is not too surprising, since the code bases forked many years ago. Different engineers implementing the same idea will have different approaches (and sometimes results). But differences in implementation are often important and, in this case, shockingly different.

Jesper Wisborg Krogh gave a series of excellent presentations and hands-on labs at Oracle OpenWorld and CodeOne. He is an amazing Support Engineer and a great presenter of material at conferences. In the lab on Common Table Expressions, he pointed out to me an interesting problem in MariaDB's implementation of CTEs.

The Problem In a Nutshell

On the PostgreSQL Wiki, there is an SQL query (requires PostgreSQL 8.4 or MySQL 8.0) that produces an ASCII-art image of the Mandelbrot set, written entirely in SQL:2008-conforming SQL.

-- Based on:

WITH RECURSIVE x(i) AS (
    SELECT 0
     UNION ALL
    SELECT i + 1
      FROM x
     WHERE i < 101
),
Z(Ix, Iy, Cx, Cy, X, Y, I) AS (
    SELECT Ix, Iy, X, Y, X, Y, 0
      FROM (SELECT CAST(-2.2 + 0.031 * i AS DECIMAL(13, 10)) AS X,
                   i AS Ix FROM x) AS xgen
           CROSS JOIN (
               SELECT CAST(-1.5 + 0.031 * i AS DECIMAL(13, 10)) AS Y,
                      i AS Iy FROM x
           ) AS ygen
     UNION ALL
    SELECT Ix, Iy, Cx, Cy,
           CAST(X * X - Y * Y + Cx AS DECIMAL(13, 10)) AS X,
           CAST(Y * X * 2 + Cy AS DECIMAL(13, 10)), I + 1
      FROM Z
     WHERE X * X + Y * Y < 16.0
           AND I < 27
),
Zt (Ix, Iy, I) AS (
    SELECT Ix, Iy, MAX(I) AS I
      FROM Z
     GROUP BY Iy, Ix
     ORDER BY Iy, Ix
)
SELECT GROUP_CONCAT(
           SUBSTRING(
               ' .,,,-----++++%%%%@@@@#### ',
               GREATEST(I, 1),
               1
           ) ORDER BY Ix SEPARATOR ''
       ) AS 'Mandelbrot Set'
  FROM Zt
 GROUP BY Iy
 ORDER BY Iy;
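The recursive member of the CTE is a classic escape-time computation: each grid point iterates z -> z^2 + c until |z|^2 >= 16 or 27 iterations pass. As a cross-check, here is the same logic as a plain-Python sketch, with the grid spacing, escape radius, and palette taken from the SQL above:

```python
# Escape-time Mandelbrot rendering, mirroring the recursive CTE.
def mandelbrot_ascii(size=102, max_iter=27):
    palette = ' .,,,-----++++%%%%@@@@#### '
    lines = []
    for iy in range(size):
        cy = -1.5 + 0.031 * iy          # same grid as the SQL's ygen
        row = []
        for ix in range(size):
            cx = -2.2 + 0.031 * ix      # same grid as the SQL's xgen
            x, y, i = cx, cy, 0         # z starts at c, as in the CTE anchor
            while x * x + y * y < 16.0 and i < max_iter:
                x, y = x * x - y * y + cx, 2.0 * x * y + cy
                i += 1
            # SUBSTRING(palette, GREATEST(I, 1), 1) -- SQL is 1-based
            row.append(palette[max(i, 1) - 1])
        lines.append(''.join(row))
    return '\n'.join(lines)

print(mandelbrot_ascii())
```

Running it prints roughly the same ASCII-art image the SQL produces, one string per row of the grid.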

The code is best run on the new MySQL Shell or MySQL Workbench; it also works on the old mysql client, but with the output formatting garbled.
An abbreviated image of the Mandelbrot SQL output (see above for the listing), truncated for size. Produced with the new MySQL Shell (mysqlsh) on MySQL 8.0.13.
But then Jesper mentioned he had tested the SQL the night before the lab, and that it runs quickly on MySQL: the Mandelbrot SQL code ran in 0.7445 seconds on my Windows laptop with MySQL 8.0.13.

But not on MariaDB.  Jesper said he ran the same code on MariaDB 10.3 but killed it after fifteen minutes.  It was late and he had to get up early to get to San Francisco.
Double Check

With a fresh install of Fedora 29 and MariaDB 10.3.10, I ran the Mandelbrot SQL code. And I waited for the result. After an hour I went to lunch, but the query was still running when I returned. I went on to other work, occasionally checking back and running SHOW PROCESSLIST from time to time to make sure it had not died.

After two hours I hit Control-C, as I had other tasks for that system. There are some interesting recursive CTE problems listed on Jira, but nothing obviously relevant.

So I was able to confirm that MySQL's implementation of recursive CTEs works well, but I cannot say the same about MariaDB's implementation.

Temporary tables are now allowed in transactions when GTIDs are enabled

There has been a recurring request from our users to remove a limit when using global transaction identifiers (GTIDs) and temporary tables together. The limit is that temporary tables cannot be created and dropped inside a transaction when GTIDs are turned on.…

Welcome to Percona Live Europe 2018 Tuesday Keynotes and Sessions!

Hello, open source database enthusiasts at Percona Live Europe 2018! There is a lot to see and do today, and we’ve got some of the highlights listed below.

On Facebook? Go here for some pics that captured the action on Percona Live Europe 2018 Tutorials day (Monday, Nov. 5, 2018). 


Download the Conference App

We apologize for the confusion yesterday on the app but can assure you, the schedule and timings have been updated! If you haven’t already downloaded the app, go to the app store and download the official Percona Live App! You can view the schedule, be alerted for any important updates, create your own personalized schedule, rate the talks and interact with fellow attendees.

For Apple: Download here
For Android: Download here

Registration and Badge Pick Up

Registration is open from 8 am. The registration desk is located at the top of the stairs on the first floor of the Radisson Blu Hotel. 


Keynotes

Keynotes begin promptly at 9:15 am. Please be seated and ready! Arrive early to secure your spot! Keynotes will take place in Dow Jones next to the expo area.

Community Networking Reception

Join the Open Source community on Tuesday evening at Chicago Meatpackers (Riverside), Frankfurt!

This is a great opportunity to socialize and network with Percona Live attendees and other open source enthusiasts who'd like to come along too!

This is not a ticketed event or an official event of Percona Live Europe, simply an open invitation with a place to congregate for food and drinks! An A La Carte food menu and cash bar will be available.

Expo Opening Hours

The expo will be open from 8:00 am to 4:30 pm today. 

Breaks & Lunch

Coffee Breaks: Sponsored by Facebook! The morning break is 10:50 am – 11:20 am and the afternoon break 4:10 pm – 4:30 pm (Conference Floor Foyer)
Lunch: 1:10 pm – 2:10 pm. Lunch will be served on the conference floor and in the Showroom and Gaia restaurant on the lobby level.

With thanks to our Sponsors!

Enjoy the conference!

How to Quickly Add a Node to an InnoDB Cluster or Group Replication


In this blog, we’ll look at how to quickly add a node to an InnoDB Cluster or Group Replication using Percona XtraBackup.

Adding nodes to a Group Replication cluster can be easy (documented here), but it only works if the existing nodes have retained all the binary logs since the creation of the cluster. Obviously, this is possible if you create a new cluster from scratch. The nodes rotate old logs after some time, however. Technically, if the gtid_purged set is non-empty, you will need another method to add a new node to the cluster. You also need a different method if data becomes inconsistent across cluster nodes for any reason. For example, you might hit something similar to this bug, or fall prey to human error.

Hot Backup to the Rescue

The quick and simple method I’ll present here requires the Percona XtraBackup tool to be installed, as well as some additional small tools for convenience. I tested my example on CentOS 7, but it works similarly on other Linux distributions. First of all, you will need the Percona repository installed:

# yum install -y -q

Then, install Percona XtraBackup and the additional tools. You might need to enable the EPEL repo for the additional tools, and the experimental Percona repo for XtraBackup 8.0, which works with MySQL 8.0. (Note: XtraBackup 8.0 was still not GA at the time of writing, and we do NOT recommend or advise that you install XtraBackup 8.0 into a production environment until it is GA). For MySQL 5.7, XtraBackup 2.4 from the regular repo is what you are looking for:

# grep -A3 percona-experimental-\$basearch /etc/yum.repos.d/percona-release.repo
[percona-experimental-$basearch]
name = Percona-Experimental YUM repository - $basearch
baseurl =$releasever/RPMS/$basearch
enabled = 1

# yum install pv pigz nmap-ncat percona-xtrabackup-80 -q
==============================================================================================================================================
 Package                        Arch            Version                     Repository                                 Size
==============================================================================================================================================
Installing:
 nmap-ncat                      x86_64          2:6.40-13.el7               base                                      205 k
 percona-xtrabackup-80          x86_64          8.0.1-2.alpha2.el7          percona-experimental-x86_64                13 M
 pigz                           x86_64          2.3.4-1.el7                 epel                                       81 k
 pv                             x86_64          1.4.6-1.el7                 epel                                       47 k
Installing for dependencies:
 perl-DBD-MySQL                 x86_64          4.023-6.el7                 base                                      140 k

Transaction Summary
==============================================================================================================================================
Install  4 Packages (+1 Dependent package)

Is this ok [y/d/N]: y
#

You need to do this on both the source and destination nodes. Now, my existing cluster node, gr01 (I will call it the donor), looks like this:

gr01 > select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 76df8268-c95e-11e8-b55d-525400cae48b
   MEMBER_HOST: gr01
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.13
1 row in set (0.00 sec)

gr01 > show global variables like 'gtid%';
+----------------------------------+-----------------------------------------------+
| Variable_name                    | Value                                         |
+----------------------------------+-----------------------------------------------+
| gtid_executed                    | aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302662 |
| gtid_executed_compression_period | 1000                                          |
| gtid_mode                        | ON                                            |
| gtid_owned                       |                                               |
| gtid_purged                      | aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-295538 |
+----------------------------------+-----------------------------------------------+
5 rows in set (0.01 sec)

The new node candidate, gr02 (I will call it the joiner), has no data but the same MySQL version installed. It also has the required settings in place, such as the existing node address in group_replication_group_seeds, etc. The next step is to stop the MySQL service on the joiner (if it is already running) and wipe out its datadir:

[root@gr02 ~]# rm -fr /var/lib/mysql/*

and start the “listener” process that waits to receive the data snapshot (remember to open the TCP port if you have a firewall):

[root@gr02 ~]# nc -l -p 4444 |pv| unpigz -c | xbstream -x -C /var/lib/mysql

Then, start the backup job on the donor:

[root@gr01 ~]# xtrabackup --user=root --password=*** --backup --parallel=4 --stream=xbstream --target-dir=./ 2> backup.log |pv|pigz -c --fast| nc -w 2 4444
 240MiB 0:00:02 [81.4MiB/s] [ <=> ]

On the joiner side, we will see:

[root@gr02 ~]# nc -l -p 4444 |pv| unpigz -c | xbstream -x -C /var/lib/mysql
21.2MiB 0:03:30 [ 103kiB/s] [ <=> ]
[root@gr02 ~]# du -hs /var/lib/mysql
241M    /var/lib/mysql

BTW, if you noticed the difference in transfer rate between the two, please note that on the donor side I put |pv| before the compressor, while on the joiner it is before the decompressor. This way, I can monitor the compression ratio at the same time!

The next step will be to prepare the backup on the joiner:

[root@gr02 ~]# xtrabackup --use-memory=1G --prepare --target-dir=/var/lib/mysql 2>prepare.log
[root@gr02 ~]# tail -1 prepare.log
181019 19:18:56 completed OK!

and fix the files ownership:

[root@gr02 ~]# chown -R mysql:mysql /var/lib/mysql

Now we should verify the GTID position information and restart the joiner (I have group_replication_start_on_boot=off in my.cnf):

[root@gr02 ~]# cat /var/lib/mysql/xtrabackup_binlog_info
binlog.000023	893	aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302662
[root@gr02 ~]# systemctl restart mysqld

Now, let’s check if the position reported by the node is consistent with the above:

gr02 > select @@GLOBAL.gtid_executed;
+-----------------------------------------------+
| @@GLOBAL.gtid_executed                        |
+-----------------------------------------------+
| aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302660 |
+-----------------------------------------------+
1 row in set (0.00 sec)

No, it is not. We have to correct it:

gr02 > reset master; set global gtid_purged="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302662";
Query OK, 0 rows affected (0.05 sec)

Query OK, 0 rows affected (0.00 sec)
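For illustration, the check-and-correct decision above can be sketched in Python. The parse_gtid_set helper and the variable names are hypothetical, written for this sketch, and not part of any MySQL tooling:

```python
# Compare the GTID set recorded by XtraBackup with what the restarted
# joiner reports, and decide whether gtid_purged must be corrected.
def parse_gtid_set(gtid_set):
    """Parse 'uuid:1-5:7' into {uuid: [(1, 5), (7, 7)]}."""
    result = {}
    for part in gtid_set.split(','):
        uuid, *ranges = part.strip().split(':')
        intervals = []
        for r in ranges:
            lo, _, hi = r.partition('-')
            intervals.append((int(lo), int(hi or lo)))
        result[uuid] = intervals
    return result

backup_info = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302662'  # xtrabackup_binlog_info
reported = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302660'     # @@GLOBAL.gtid_executed

needs_fix = parse_gtid_set(backup_info) != parse_gtid_set(reported)
print('RESET MASTER + SET gtid_purged needed:', needs_fix)   # True here
```

With the values from this walkthrough, the sets differ (1-302662 vs 1-302660), so the correction is needed.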

Finally, start the replication:

gr02 > START GROUP_REPLICATION;
Query OK, 0 rows affected (3.91 sec)

Let’s check the cluster status again:

gr01 > select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 76df8268-c95e-11e8-b55d-525400cae48b
   MEMBER_HOST: gr01
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.13
*************************** 2. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: a60a4124-d3d4-11e8-8ef2-525400cae48b
   MEMBER_HOST: gr02
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.13
2 rows in set (0.00 sec)

gr01 > select * from performance_schema.replication_group_member_stats\G
*************************** 1. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                   VIEW_ID: 15399708149765074:4
                                 MEMBER_ID: 76df8268-c95e-11e8-b55d-525400cae48b
               COUNT_TRANSACTIONS_IN_QUEUE: 0
                COUNT_TRANSACTIONS_CHECKED: 3
                  COUNT_CONFLICTS_DETECTED: 0
        COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
        TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302666
            LAST_CONFLICT_FREE_TRANSACTION: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:302665
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 2
         COUNT_TRANSACTIONS_LOCAL_PROPOSED: 3
         COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
*************************** 2. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                   VIEW_ID: 15399708149765074:4
                                 MEMBER_ID: a60a4124-d3d4-11e8-8ef2-525400cae48b
               COUNT_TRANSACTIONS_IN_QUEUE: 0
                COUNT_TRANSACTIONS_CHECKED: 0
                  COUNT_CONFLICTS_DETECTED: 0
        COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
        TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302666
            LAST_CONFLICT_FREE_TRANSACTION:
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 0
         COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
         COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
2 rows in set (0.00 sec)

OK, our cluster is consistent! The new node joined successfully as secondary. We can proceed to add more nodes!

Galera Cluster supports high volume traffic at Paytrail and THINQ



“Galera Cluster has supported our growth all the way from a small number of transactions to high volume  traffic. Since 2012 our payment service with Galera has processed over 4 billion euros worth of product and service sales. Today we help 10 000+ webshops and online services in several countries to provide a pleasant shopping experience to their customers”




“We’re very happy how Galera Cluster facilitated automated failover eliminating downtime for THINQ services that programmatically route millions of phone calls. The comprehensive Codership documentation includes, for example, the AppArmor package settings necessary to make Galera Cluster nodes more secure. ”

Based in North Carolina, THINQ is a cloud-based software company that develops Business as a Service (BaaS) solutions for the telecommunications industry. It partners with top carriers like AT&T, Verizon, and Comcast to provide wholesale rates for long-distance and wireless calls.

Using Django with MySQL 8

A framework can be a great way to allow you to spend more time on the actual application or web site and less time on standard tasks. It can also greatly reduce the amount of custom code needed. Django is one of the best known web frameworks for Python, and the good news is that it works out of the box with MySQL Server 8 and MySQL Connector/Python 8. This blog will look at how to use Django with MySQL 8.

There actually is very little to do to get Django to work with MySQL 8. Just install it, configure Django to use MySQL Connector/Python as a backend, and that’s it. From the Django point of view, you just have to configure the DATABASES option in settings.py to use MySQL Connector/Python and your database settings, for example:

DATABASES = {
    'default': {
        'NAME': 'mydb',
        'ENGINE': 'mysql.connector.django',
        'USER': 'django',
        'PASSWORD': '$@jkHhj34N!bd',
        'OPTIONS': {
            'autocommit': True,
        },
    }
}

The instructions in this blog should also work for older versions of MySQL.

Obviously, this assumes you have MySQL installed already. If you do not, the rest of the blog includes a more comprehensive list of steps. The first step is to install MySQL Server.

Django Administration Screen using MySQL as the backend

Installing MySQL Server

There are several ways to install MySQL Server and which one is the best depends on your circumstances and preferences. For the sake of this blog, I will show how MySQL Server can be installed on Oracle Linux/RHEL/CentOS 7 using RPMs and on Microsoft Windows using MySQL Installer. For more options, see the installation chapter in the reference manual. Let’s look at the Linux installation first.

RPM Install on Enterprise Linux

MySQL provides repositories for several Linux distributions including the Oracle Linux/RHEL/CentOS family. This makes it easy to install MySQL. The step to install the repository definition is:

$ sudo yum install mysql80-community-release-el7-1.noarch.rpm
Loaded plugins: langpacks, ulninfo
Repository ol7_developer_EPEL is listed more than once in the configuration
mysql80-community-release-el7-1.noarch.rpm                             |  25 kB  00:00:00
Examining /var/tmp/yum-root-Ts4OzC/mysql80-community-release-el7-1.noarch.rpm: mysql80-community-release-el7-1.noarch
Marking /var/tmp/yum-root-Ts4OzC/mysql80-community-release-el7-1.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package mysql80-community-release.noarch 0:el7-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================
 Package                    Arch    Version  Repository                               Size
======================================================================================================
Installing:
 mysql80-community-release  noarch  el7-1    /mysql80-community-release-el7-1.noarch  31 k

Transaction Summary
======================================================================================================
Install  1 Package

Total size: 31 k
Installed size: 31 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mysql80-community-release-el7-1.noarch                                            1/1
  Verifying  : mysql80-community-release-el7-1.noarch                                            1/1

Installed:
  mysql80-community-release.noarch 0:el7-1

Complete!

Now, you can install MySQL Server. There are several RPMs to choose from and which you need depends on which other features you need to use. A common set of RPMs can be installed as:

shell$ sudo yum install mysql-community-server mysql-community-libs \
       mysql-community-libs-compat mysql-community-common mysql-community-client
...

Note: If you have another MySQL installation, it will be upgraded to the latest release (at the time of writing 8.0.13).

On the first start, the data directory will be initialized:

shell$ sudo systemctl start mysqld

To keep a fresh installation secure, a random password has been set for the root user. This can be found from the MySQL error log:

shell$ sudo grep password /var/log/mysqld.log
2018-11-05T08:05:09.985857Z 5 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: KWNfuA!1r:PF

Use this password to connect to MySQL and update the password (please use a strong password):

shell$ mysql --user=root --password
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 19
Server version: 8.0.13 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> ALTER USER root@localhost IDENTIFIED BY 'Kij0@jDi~Faf4';
Query OK, 0 rows affected (0.13 sec)
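As an aside on "please use a strong password": here is a hypothetical checker, written for this post, that mirrors the commonly cited validate_password MEDIUM policy (length of at least 8 with mixed case, a digit, and a special character). Whether that policy is active depends on your server configuration, so treat this as an assumption:

```python
import string

def is_strong(pw):
    # Roughly the MEDIUM password policy: length >= 8, plus at least
    # one digit, one lowercase, one uppercase, and one special character.
    return (len(pw) >= 8
            and any(c.isdigit() for c in pw)
            and any(c.islower() for c in pw)
            and any(c.isupper() for c in pw)
            and any(c in string.punctuation for c in pw))

print(is_strong('Kij0@jDi~Faf4'))   # True  - mixed case, digits, specials
print(is_strong('password'))        # False - no digit, upper case, or special
```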

MySQL is now ready for use. Before continuing, I will show an example of installing MySQL on Microsoft Windows.

Microsoft Windows

On Microsoft Windows an easy way to install MySQL is to use the MySQL Installer. The installer can be downloaded from the MySQL download site. The MySQL Installer can be used to install most MySQL products. If you start MySQL Installer for the first time, you will be taken directly to the screen to choose the products to install; if you already have installed products, you will need to choose to add new products.

On the Select Products and Features screen, choose to install MySQL Server 8.0 (MySQL Installer will list the latest release from the list of available products):

Installing MySQL Server 8.0.13 from MySQL Installer

Optionally, you can filter the list of products. Feel free to choose other products you want. MySQL Notifier can be useful for starting and stopping MySQL, if you do not plan to have MySQL running at all times. You can also install MySQL Connector/Python this way, however for this blog a different method will be used.

Follow the installation wizard. For this blog, the default choices will work, though during the configuration you may want to ensure Open Windows Firewall ports for network access is unchecked unless you need remote access.

Before you can connect to MySQL from your Django program, you need a user and a schema (database) to use from your web site.

Preparing MySQL Server

While MySQL is now ready to work with Django, you will likely want to do a few more preparation steps. Here, creating the MySQL user and schema (database) used by Django will be covered, as well as support for named time zones.

Creating the User and Schema

An example of creating the user django@localhost, giving it all privileges on the mydb schema, and creating the mydb schema is:

mysql> CREATE USER django@localhost IDENTIFIED BY '$@jkHhj34N!bd';
Query OK, 0 rows affected (0.11 sec)

mysql> GRANT ALL ON mydb.* TO django@localhost;
Query OK, 0 rows affected (0.11 sec)

mysql> CREATE DATABASE mydb CHARACTER SET utf8mb4;
Query OK, 1 row affected (0.07 sec)

This will allow the django user to connect from the same host as MySQL Server is installed by authenticating with the password $@jkHhj34N!bd.

In MySQL 8 it is not necessary to specify the database character set to utf8mb4 as it is the default. However, if you use an older version of MySQL Server, you should ensure you are using UTF-8. The utf8mb4 character set means that characters using up to four bytes are supported.
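A quick way to see why utf8mb4 matters: characters outside the Basic Multilingual Plane, such as emoji, take four bytes in UTF-8, which the legacy 3-byte utf8 (utf8mb3) character set cannot store:

```python
# The dolphin emoji needs four bytes in UTF-8, so storing it in MySQL
# requires the utf8mb4 character set.
dolphin = '\U0001F42C'            # DOLPHIN, outside the BMP
encoded = dolphin.encode('utf-8')
print(len(encoded))               # 4
```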

Named Time Zones

If you want to use named time zones (for example Australia/Sydney), you will need to install the data for these in MySQL. On Linux you can use the mysql_tzinfo_to_sql script that comes with the MySQL installation:

shell$ mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql --user=root --password mysql

On Microsoft Windows, you need to download the time zone information and load these into the database, for example:

mysql> use mysql
Database changed
mysql> SOURCE timezone_posix.sql

See also MySQL Server Time Zone Support in the reference manual.

Now, you can move on to MySQL Connector/Python and Django.

Installing MySQL Connector/Python and Django

Both MySQL Connector/Python and Django can be installed in a platform independent way using the pip command. Since Django 2.1 is only available for Python 3.4 and later, it is recommended to use Python 3.4 or later. This blog assumes Python 3.6. (MySQL Connector/Python 8.0.13 and later also supports Python 3.7.)

If you do not have Python 3.6 installed on Oracle Linux/RHEL/CentOS 7, you can easily install it, for example from the EPEL repository. Assuming you have configured the EPEL repository, the following steps install Python 3.6, enable pip, and update pip to the latest version:

shell$ yum install python36
shell$ python3.6 -m ensurepip
shell$ python3.6 -m pip install --upgrade pip

You can now use python3.6 to invoke Python 3.6. In the following, replace python with python3.6 if you have installed Python 3.6 in this way.

To install the latest MySQL Connector/Python release (currently 8.0.13):

PS:> python -m pip install mysql-connector-python
Collecting mysql-connector-python
  Downloading (3.2MB)
    100% |████████████████████████████████| 3.2MB 3.0MB/s
Requirement already satisfied: protobuf>=3.0.0 in c:\users\myuser\appdata\local\programs\python\python36\lib\site-packages (from mysql-connector-python)
Requirement already satisfied: six>=1.9 in c:\users\myuser\appdata\local\programs\python\python36\lib\site-packages (from protobuf>=3.0.0->mysql-connector-python) (1.11.0)
Requirement already satisfied: setuptools in c:\users\myuser\appdata\local\programs\python\python36\lib\site-packages (from protobuf>=3.0.0->mysql-connector-python) (28.8.0)
Installing collected packages: mysql-connector-python
Successfully installed mysql-connector-python-8.0.13

Similarly, to install Django:

PS:> python -m pip install Django
Collecting Django
  Downloading (7.3MB)
    100% |████████████████████████████████| 7.3MB 1.8MB/s
Collecting pytz (from Django)
  Downloading (506kB)
    100% |████████████████████████████████| 512kB 10.5MB/s
Installing collected packages: pytz, Django
Successfully installed Django-2.1.3 pytz-2018.7

That’s it. Now you are ready to use Django with MySQL Connector/Python 8 and MySQL Server 8.

Using Django

I will not go into details of how to use Django. If you are new to Django, you can consider going through the tutorial for Django 2.1 on the Django website. This sets up a web site using a database backend. The important thing with respect to MySQL is the configuration of the DATABASES property in settings.py (the first step in part 2):

DATABASES = {
    'default': {
        'NAME': 'mydb',
        'ENGINE': 'mysql.connector.django',
        'USER': 'django',
        'PASSWORD': '$@jkHhj34N!bd',
        'OPTIONS': {
            'autocommit': True,
        },
    }
}

The key here is the engine. You need to set it to mysql.connector.django to use MySQL Connector/Python. The rest of the options are schema name, credentials, and MySQL Connector/Python specific options.

The Django documentation also has more information about using MySQL as the database. A couple of updates of the statements in the document:

  • As of MySQL 8, InnoDB does correctly restore the auto-increment ID after a restart.
  • The default character set in MySQL 8 is UTF-8 (called utf8mb4 in MySQL).
  • There is also support for a C Extension in MySQL Connector/Python. In MySQL 8 this is the default for the platforms/Python version where the C Extension is installed.
Note: In the second part of the tutorial, I needed to swap the python manage.py sqlmigrate polls 0001 command with the subsequent python manage.py migrate command. Otherwise the sqlmigrate command fails with an error.

The rest is all Python and Django. Have fun creating your next web site with Django, MySQL Connector/Python 8, and MySQL 8.

If you are interested in learning more about MySQL Connector/Python 8, then I am the author of MySQL Connector/Python Revealed (Apress) – also available at Amazon and other bookshops.

MySQL Meetup in Wien - "MySQL 8 - The next big thing!" (&. get together)

We are happy to announce that there will be a MySQL Day / MySQL Community Meetup held in the Oracle Building in Vienna on November 8th. Please find more details below:
  • Date: Thursday, November 8th, 2018 
  • Time: 7pm - 9pm
  • Address: Oracle Office DC Tower 1, Donau-City-Straße 7, 1220 Wien, Österreich
  • Meeting room: will be confirmed soon. 
  • Agenda: 
    • "Oracle MySQL 8 - The next big thing!" by Carsten Thalheimer, Master Principal Sales Consultant 
    • Discussion & pizza 
  • More information & Registration

Nov 5 -> A busy week for MySQL: where can you find us now?

As announced in the blog posted on Oct 18, 2018, here is a reminder of the places and shows where you can find us during this really busy week. We are ready for the following shows:

  • PerconaLive in Frankfurt, Germany on November 5-7, 2018
    • As previously announced, plenty of our MySQL experts, our friends from sales, and Fred Descamps are representing the MySQL community at PerconaLive Frankfurt, which started today. Right now Fred Descamps & Kenny Gryp are giving a tutorial on "MySQL InnoDB Cluster in a Nutshell: The Saga Continues with 8.0", which ends before noon. There are also plenty of MySQL talks (you can check them either on the previous blog post or directly on the PerconaLive website).
    • From noon today the expo is also open. MySQL has a newly designed booth there with our staff. Come talk to us! We are looking forward to meeting you there!
  • PHP.RUHR, Dortmund, Germany on November 8, 2018
    • The next show where you can find us is PHP.RUHR. This is the third time we are supporting this one-day show, and same as last year we are having a MySQL talk (on the Mainstage: Developer track, "MySQL 8.0 - The new Developer API") and a Q&A session in the expo area, where you can ask our MySQL speaker, Mario Beck, MySQL-related questions. 
    • Come listen to the news in MySQL 8.0 and talk to Mario at PHP.RUHR!
  • ​​Highload++, Moscow, Russia on November 8-9, 2018
    • We are really excited about being part of this big technology show in Russia. Last week we finally got notification that our MySQL talk was accepted, so you can find it in the conference schedule as follows:
      • "MySQL 8.0: SQL and NoSQL Scalability" given by Vittorio Cioe. Talk is scheduled for Nov 8th @14:00
    • Do not forget to stop by our MySQL booth in the expo area. We are planning a quiz for MySQL prizes as well as an InnoDB Cluster demo given by Vittorio on the first day, and a quiz followed by a MEM demo given by Vittorio on the second day at our booth. Check the organizers' website & schedule for further updates, and for sure come to our booth and MySQL session to get the latest MySQL updates and news. We are looking forward to talking to you at Highload++!



Using Sass in Create React App v2

With the recently released upgrade to Create React App, we got a lot of new tools to play with. Sass is one that I'm excited to have built in, since we used to have to compile .scss files and write the resulting .css files right into our folder structure; it gets messy when you end up with both styles.scss and styles.css for the same styles.

Create React App 2 makes it super easy to use Sass in 1 line.

You may be concerned about using Sass in React. Isn't it smarter to write styles with CSS-in-JS libraries like styled-components or aphrodite? I believe that adding Sass support to Create React App will be a big help to React beginners. "How do I use Sass in React?" is one of the questions I always hear from people getting into React. With the React 16.6 additions like React.memo() and the React 16.7 functional additions like hooks, starting with React will be easier than ever!

Quick Start

The steps to use Sass in Create React App are:

  1. Install node-sass: npm install node-sass
  2. Change .css files to .scss
  3. Change any imports to use .scss
npm install node-sass -S

Once we've changed the file name from .css to .scss, we can import the Sass:

// replace
import "./styles.css";

// with
import "./styles.scss";

Done! Create React App will know to parse your .scss files and add the styles to your project.

Using and Sharing Sass Variables

How do we share variables across files? We are able to import our Sass files from other Sass files. Let's say you create a variables file:

variables.scss

$primaryColor: #BADA55;

We can import this inside of another file like we normally would in Sass:

styles.scss

// import starting from the src/ folder
@import "variables.scss";

// can also be a relative import
// @import "./variables.scss";

// we can use the $primaryColor variable now
h1, h2 {
  color: $primaryColor;
}

Sass Files from 3rd Party Packages

If we want to use any 3rd party libraries like Bulma or Bootstrap (Bulma is my favorite right now), we don't need to import the entire CSS library anymore.

With Sass in React, we can import just the files we need. First, we have to install Bulma.

npm install bulma -S

If we look at Bulma's GitHub repository, in the sass/ folder we can see where they place their .sass files. Notice they are using .sass and we are using the .scss variant. No problem: node-sass can read and @import both!

Import files from node_modules using ~

The ~ lets webpack and Create React App know to look in the node_modules/ folder for the files we need. Let's add a few of the files we need to our app:

styles.scss

// import using ~
@import "~bulma/sass/utilities/_all.sass";
@import "~bulma/sass/base/_all.sass";
@import "~bulma/sass/elements/button.sass";
@import "~bulma/sass/layout/section.sass";

Now, we can use Bulma's button and section.

App.js

function App() {
  return (
    <div className="App section">
      <h1>Hello CodeSandbox</h1>
      <h2>Start editing to see some magic happen!</h2>
      <button className="button is-danger is-outlined">
        Hello
      </button>
    </div>
  );
}

This approach lets us keep our CSS bundle size as small as possible, as we only import what we need.

Conclusion and Demo

Using Sass in React is a quick way to get styling in your app. It is also recommended to look at CSS-in-JS solutions so that we can create even more modular CSS in our component based React apps.

Here's the CodeSandbox with the demo of Sass in Create React App 2:

On New Severity Levels for MySQL Bugs

Four weeks ago, while working on a blog post about the half-baked XA transactions feature of MySQL server, I noted that Oracle has added new severity levels for MySQL bug reports. Previously we had 5 levels:

  • S1 (Critical) - mostly for all kinds of crashes, DoS attack vectors, data corruptions etc
  • S2 (Serious) - mostly for wrong results bugs, broken replication etc
  • S3 (Non-critical) - all kinds of minor but annoying bugs, from unexpected results in some corner cases to misleading or wrong error messages, inefficient or unclear code etc
  • S4 (Feature requests) - anything that should work or be implemented based on common sense, but is not documented in the manual and was not required by the original specification or implementation of some feature.
  • S5 (Performance) - everything works as expected and documented, but the resulting performance is bad or less than expected. Something does not scale well, doesn't return results fast enough in some cases, or could be made faster on some specific platform using different code or a different library. This severity level was probably also added in Oracle times; at least it was not there in 2005 when I started to work on MySQL bugs.

The informal descriptions above are mine and may be incorrect or differ from the definitions Oracle engineers currently use. I tried to search for Oracle definitions that apply to MySQL, but was not able to find anything immediately useful (any help with a public URL is appreciated). 

In general, severity is defined as the degree of impact a bug has on the operation or use of some software, so less severity assumes less impact on common MySQL operations. One may also expect that bugs with higher severity are fixed first (have higher internal priority). It may not be that simple (and was not during my days in MySQL, when many more inputs were taken into account while setting priority for the bug fix), but it's a valid assumption for any community member.

By default, when searching for bugs you get all bugs of severity levels S1, S2, S3 and S5. You have to take specific care to get feature requests included in the results when using the bug database search interface.

If you try to search bugs today, you'll see two more severity levels added, S6 (Debug Builds) and S7 (Test Cases):

Now we have 7 severity levels for MySQL bug reports. The S6 severity level seems to be used for assertion failures and other bugs that affect only debug builds and cannot be reproduced literally with non-debug binaries. The S7 severity level is probably used for bug reports about failing MTR test cases, assuming the failure does NOT show a regression in MySQL software, but rather some non-determinism, platform dependency, timing assumption or other defect of the test case itself.

By default, bug reports with these severity levels are NOT included in search results (they are not considered "Production Bugs"), so one has to take care to see them. This, as well as the common-sense assumption that lower severity eventually means lower priority for the fix, caused some concerns. It would be great for somebody from Oracle to explain the intended use of and reasons for introducing these severity levels with more text than a single tweet, to clear up any FUD people may have. If applied formally, these new severity values may lead to low priority for quite important problems. Most debug assertions are in the code for a really good reason, as many weird things (up to crashes and data corruption) may happen later in non-debug binaries in cases where a debug-only assertion fails.

I was surprised to find out that at the moment we have 67 active S6 bug reports, and 32 active S7 bug reports. The latter list obviously includes reports that should not be S7, like Bug #92274 - "Question of mysql 8.0 sysbench oltp_point_select test" that is obviously about a performance regression noted in MySQL 8 (vs MySQL 5.7) by the bug reporter.

Any comments from Oracle colleagues on the reasons to introduce new severity levels, their formal definitions and impact on community bug reports processing are greatly appreciated.

Tuning MyRocks for performance

There are basically two things which I really like about using MyRocks: 1. the LSM advantage – smaller space & lower write amplification, and 2. the best of MySQL, like replication, storage-engine-centric database infrastructure operations and MySQL orchestration tools. Facebook built RocksDB as an embeddable and persistent key-value store with a lower amplification factor compared to InnoDB. Let me explain a scenario where InnoDB proves less efficient than RocksDB on SSD:
We know InnoDB is constrained by a fixed compressed page size; alignment during fragmentation and compression causes extra unused space because the leaf nodes are not full. Let's consider an InnoDB table with a compressed page size of 8KB. A 16KB in-memory page compressed to 5KB still uses 8KB on storage. Adding to this, each entry in the primary key index has 13 bytes of metadata (6-byte transaction id + 7-byte rollback pointer), and the metadata is not compressed, making the space overhead significant for small rows. Typical flash devices are limited by their write endurance. In a typical scenario where index values are stored in leaf nodes and sorted by key, an operational database may not fit in memory, and keys get updated in a random pattern, leading to higher write amplification. In the worst case, updating one row requires a number of page reads, makes several pages dirty, and forces many dirty pages to be written back to storage.
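To make the space overhead above concrete, here is a quick back-of-the-envelope sketch (the 32-byte row size is a made-up assumption; the page sizes and 13-byte metadata figure come from the scenario above):

```python
# Illustrative arithmetic for the InnoDB compressed-page scenario above.
# The 32-byte row size is an assumption for the sake of the example.

KEY_BLOCK_SIZE = 8 * 1024      # fixed compressed page size on storage (8KB)
compressed_page = 5 * 1024     # a 16KB in-memory page that compressed to 5KB

# The page still occupies a full 8KB block on disk, so 3KB is wasted.
wasted = KEY_BLOCK_SIZE - compressed_page
print(f"wasted per page: {wasted} bytes")

# Per-row metadata overhead in the primary key index:
# 6-byte transaction id + 7-byte rollback pointer = 13 uncompressed bytes.
row_payload = 32               # a hypothetical small row
metadata = 6 + 7
overhead_ratio = metadata / (row_payload + metadata)
print(f"metadata share of a {row_payload}-byte row: {overhead_ratio:.0%}")
```

For small rows, the uncompressed 13-byte metadata alone approaches a third of the stored entry, which is the overhead the post is pointing at.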

So now, what do I really love about MyRocks?

It’s all about much lower write amplification factor of RocksDB compared to InnoDB is what I am very much impressed about. On pure flash, reducing write volume (write amplification) is important because flash burns out if writing too much data. Reducing write volume also helps to improve overall throughput on flash. InnoDB adopts “update in place” architecture. Even though updating just 1 record, an entire page where the row belongs becomes dirty, and the dirty page has to be written back to storage. On typical OLTP systems, modification unit (row) size is much smaller than I/O unit (page) size. This makes write amplification very high. I have published performance benchmarking of InnoDB, RocksDB and TokuDB, You can read about it here

Things to remember before tuning MyRocks:
  • Data loading limitations
    • Limitation – Transaction must fit in memory:
      • mysql > ALTER TABLE post_master ENGINE = RocksDB;
        • Error 2013 (HY000): Lost connection to MySQL server during query.
      • Higher memory consumption, until the server eventually gets killed by the OOM killer
    • When loading data into MyRocks tables, there are two recommended session variables:
      • SET session sql_log_bin=0;
      • SET session rocksdb_bulk_load=1;

There are a few interesting things to remember before bulk loading into MyRocks and tuning the rocksdb_bulk_load system variable:

  1. Data being bulk loaded can never overlap with existing data in the table, so it is always recommended to bulk load into an empty table. However, the mode will allow loading some data into the table, doing other operations, and then returning and bulk loading additional data, as long as there is no overlap between what is loaded and what already exists.
  2. The data may not be visible until bulk load mode is ended (i.e. rocksdb_bulk_load is set to zero again). RocksDB stores data in "SST" (Sorted String Table) files, and until a particular SST has been added, its data will not be visible to the rest of the system; thus issuing a SELECT on the table currently being bulk loaded will only show older data and will likely not show the most recently added rows. Ending bulk load mode will cause the most recent SST file to be added. When bulk loading multiple tables, starting a new table will trigger the code to add the most recent SST file to the system; as a result, it is inadvisable to interleave INSERT statements to two or more tables during bulk load mode.
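The non-overlap rule in point 1 boils down to a simple key-range check; here is a toy illustration of the invariant (my own sketch, not actual MyRocks code):

```python
# Toy illustration of the rocksdb_bulk_load non-overlap rule: the key range
# being bulk loaded must not intersect the key range already in the table.
# This is my own sketch of the invariant, not MyRocks code.

def ranges_overlap(existing, incoming):
    """Each range is (min_key, max_key); returns True if they intersect."""
    return not (incoming[0] > existing[1] or incoming[1] < existing[0])

existing_keys = (1, 1000)     # keys already present in the table
batch_ok      = (1001, 5000)  # safe: strictly after the existing range
batch_bad     = (900, 2000)   # unsafe: overlaps existing keys

print(ranges_overlap(existing_keys, batch_ok))   # False -> safe to load
print(ranges_overlap(existing_keys, batch_bad))  # True  -> not allowed
```

Loading strictly non-overlapping batches is what lets MyRocks append whole SST files without merging them against existing data.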
Configuring MyRocks for performance:

Character Sets:

  • MyRocks works best with case-sensitive collations (latin1_bin, utf8_bin, binary)


Transaction Isolation:

  • The Read Committed isolation level is recommended. MyRocks' transaction isolation implementation is different from InnoDB's, but close to PostgreSQL's; the default transaction isolation in PostgreSQL is Read Committed.


Compression:

  • Set kNoCompression (or kLZ4Compression) on L0-1 or L0-2
  • In the bottommost level, using stronger compression algorithm (Zlib or ZSTD) is recommended.
  • If using zlib compression, set kZlibCompression at the bottommost level (bottommost_compression).
  • If using zlib compression, set compression level accordingly. The above example (compression_opts=-14:1:0) uses zlib compression level 1. If your application is not write intensive, setting (compression_opts=-14:6:0) will give better space savings (using zlib compression level 6).
  • For other levels, set kLZ4Compression.

Data blocks, files and compactions

  • Set level_compaction_dynamic_level_bytes=true
  • Set proper rocksdb_block_size (default 4096). Larger block size will reduce space but increase CPU overhead because MyRocks has to uncompress many more bytes. There is a trade-off between space and CPU usage.
  • Set rocksdb_max_open_files=-1. If set greater than 0, RocksDB still uses the table cache, which locks a mutex every time you access a file. I think you'll see a much greater benefit with -1, because then you will not need to go through the LRU cache to get the table you need.
  • Set reasonable rocksdb_max_background_jobs
  • Set target_file_size_base to a value that is not too small (32MB is generally sufficient). The default is 4MB, which is generally too small and creates too many SST files; too many SST files make operations more difficult.
  • Set a rate limiter. Without a rate limiter, compaction very often writes at 300~500MB/s on pure flash, which may cause short stalls. In 4x MyRocks testing, a 40MB/s rate limiter per instance gave pretty stable results (less than 200MB/s peak from iostat).
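Pulled together, the compression and compaction settings above might look like the following my.cnf fragment (the values are illustrative examples based on the recommendations in this post, not universal defaults):

```ini
# Illustrative MyRocks my.cnf fragment; values are examples only.
[mysqld]
rocksdb_max_open_files = -1
rocksdb_block_size = 16384                        # space vs CPU trade-off
rocksdb_max_background_jobs = 8
rocksdb_rate_limiter_bytes_per_sec = 41943040     # ~40MB/s per instance
rocksdb_default_cf_options = "level_compaction_dynamic_level_bytes=true;target_file_size_base=33554432;compression=kLZ4Compression;bottommost_compression=kZlibCompression;compression_opts=-14:1:0"
```

The column-family options (compression per level, dynamic level bytes, target file size) go into the rocksdb_default_cf_options string, while the rest are ordinary server variables.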

Bloom Filter

  • Configure the bloom filter and prefix extractor. A full filter is recommended (the block-based filter does not work for Get() + prefix bloom). The prefix extractor can be configured per column family and uses the first prefix_extractor bytes as the key. If using one BIGINT column as a primary key, the recommended bloom filter size is 12 (the first 4 bytes are the internal index id + an 8-byte BIGINT).
  • Configure the memtable bloom filter. The memtable bloom filter is useful to reduce CPU usage if you see high CPU usage at rocksdb::MemTable::KeyComparator. Its size depends on the memtable size. Set memtable_prefix_bloom_bits=41943040 for a 128MB memtable (128M/30 ≈ 4M keys * 10 bits per key).
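The memtable bloom sizing arithmetic above works out as follows (the 30 bytes per entry is the assumed average from the example):

```python
# Reproducing the memtable bloom-filter sizing from the example above:
# 128MB memtable, ~30 bytes per key/value entry, 10 bits per key.

memtable_bytes = 128 * 1024 * 1024
bytes_per_entry = 30                 # assumed average entry size
bits_per_key = 10

approx_keys = memtable_bytes // bytes_per_entry   # ~4.4M entries
rounded_keys = 4 * 1024 * 1024                    # the post rounds to "4M keys"
bloom_bits = rounded_keys * bits_per_key
print(bloom_bits)                    # 41943040, the recommended setting
```

If your average entry size differs, redo the division with your own figure before picking memtable_prefix_bloom_bits.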


  • Do not set block_cache in rocksdb_default_cf_options (block_based_table_factory). If you provide a block cache size on the default column family, the same cache is NOT reused for all column families.
  • Consider setting shared write buffer size (db_write_buffer_size)
  • Consider using compaction_pri=kMinOverlappingRatio for writing less on compaction.

The post Tuning MyRocks for performance appeared first on MySQL Consulting, Support and Remote DBA Services.

Converting an LSM to a B-Tree and back again

I wonder if it is possible to convert an LSM to a B-Tree. The goal is to do it online and in-place -- so I don't want two copies of the database while the conversion is in progress. I am interested in data structures for data management that adapt dynamically to improve performance and efficiency for a given workload. 
Workloads change in the short and long term. I hope that data structures can adapt to the change, and converting between an LSM and a B-Tree is one way to adapt. This is more likely to be useful when the data structure supports some kind of partitioning, in the hope that different workloads can be isolated to different partitions -- and then some can use an LSM while others use a B-Tree.

LSM to B-Tree
A B-Tree is one tree. An LSM is a sequence of trees. Each sorted run in the LSM is a tree. With leveled compaction in RocksDB there are a few sorted runs in level 0 (L0) and then one sorted run in each of L1, L2 up to the max level (Lmax). 
A B-Tree persists changes by writing back pages -- either in-place or copy-on-write (UiP or CoW). An LSM persists changes by writing and then re-writing rows. I assume that page write back is required if you want to limit the database to one tree and row write back implies there will be more than one tree. 
There are two things that must be done online and in-place:
  1. Convert the LSM from many trees to one tree
  2. Convert from row write back to page write back
Note that my goal has slightly changed. I want to move from an LSM to a data structure with one tree. For the one-tree solution a B-Tree is preferred but not required.
The outline of a solution:
  1. Reconfigure the LSM to use 2 levels -- L0 and L1 -- and 3 trees -- memtable, L0, L1.
  2. Disable the L0. At this point the LSM has two trees -- memtable and L1.
  3. Flush the memtable and merge it into the L1. Now there is one tree.
  4. After the flush disable the memtable and switch to a page cache. Changes now require a copy of the L1 block in the page cache that eventually get written back via UiP or CoW.
The outline above doesn't explain how to maintain indexes for the L1. Note that after step 2 there is one tree on disk and the layout isn't that different from the leaf level of a B-Tree. The interior levels of the B-Tree could be created by reading/rewriting the block indexes stored in the SSTs.
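The outline above can be sketched with a toy model: treat each sorted run as a dict and merge them so that newer runs win on duplicate keys, leaving a single sorted "tree". This is only an illustration of the idea, not real LSM code:

```python
# Toy model of step 3 above: flush the memtable and merge every remaining
# sorted run into a single tree. Newer runs override older ones on
# duplicate keys. Illustration only, not real LSM code.

def merge_to_one_tree(runs_oldest_first):
    """runs_oldest_first: list of dicts, e.g. [L1, L0, memtable]."""
    merged = {}
    for run in runs_oldest_first:   # newest applied last, so it overwrites
        merged.update(run)
    return sorted(merged.items())   # one sorted run == the leaf level

l1       = {1: "a", 2: "b", 5: "c"}
l0       = {2: "B", 7: "d"}
memtable = {5: "C", 9: "e"}

print(merge_to_one_tree([l1, l0, memtable]))
# [(1, 'a'), (2, 'B'), (5, 'C'), (7, 'd'), (9, 'e')]
```

The result is exactly the single sorted structure that the interior B-Tree levels would then be built on top of.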
B-Tree to LSM
The conversion can also be done in the opposite direction (B-Tree to LSM):
  1. Treat the current B-Tree as the max level of the LSM tree. While it might help to flush the page cache I don't think that is required. This is easier to do when your LSM uses a B-Tree per level, as done by WiredTiger.
  2. Record new changes for insert, update, delete in a memtable rather than a page cache.
  3. When the memtable is full then flush it to create a new tree (sorted run, SST) on disk.
  4. Eventually start to do compaction.
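A minimal sketch of the reverse direction, with the B-Tree as the max level and a memtable that flushes into new sorted runs (again a toy model under my own assumptions, not WiredTiger or RocksDB code):

```python
# Toy model of steps 1-3 above: the existing B-Tree becomes Lmax, new
# writes go to a memtable, and a full memtable flushes as a new sorted run.

class TinyLSM:
    def __init__(self, btree_as_lmax, memtable_limit=3):
        self.runs = [sorted(btree_as_lmax.items())]   # step 1: B-Tree = Lmax
        self.memtable = {}
        self.limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value                    # step 2: write to memtable
        if len(self.memtable) >= self.limit:          # step 3: flush when full
            self.runs.append(sorted(self.memtable.items()))
            self.memtable = {}

lsm = TinyLSM({1: "a", 2: "b"})
for k in (10, 11, 12):
    lsm.put(k, "x")
print(len(lsm.runs))   # 2: the original B-Tree plus one flushed run
```

Step 4, compaction, would then periodically merge the newer runs back down toward Lmax.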

Maintenance Windows in the Cloud

Recently, I’ve been working with a customer to evaluate the different cloud solutions for MySQL. In this post I am going to focus on maintenance windows and requirements, and what the different cloud platforms offer.

Why is this important at all?

Maintenance windows are required so that the cloud provider can do the necessary updates, patches, and changes to our setup. But there are many questions like:

  • Is this going to impact our production traffic?
  • Is this going to cause any downtime?
  • How long does it take?
  • Any way to avoid it?

Let’s discuss the three most popular cloud provider: AWS, Google, Microsoft. These three each have a MySQL based database service where we can compare the maintenance settings.


Amazon RDS

When you create an instance you can define your maintenance window. It's a 30-minute block during which AWS can update and restart your instances, but it might take more time; AWS does not guarantee the update will be done within 30 minutes. There are two different types of updates: Required and Available. 

If you defer a required update, you receive a notice from Amazon RDS indicating when the update will be performed. Other updates are marked as available, and these you can defer indefinitely.

It is even possible to disable auto-upgrade for minor versions, in which case you can decide when you want to do the maintenance.

AWS separates OS updates and database engine updates.

OS Updates

OS updates require some downtime, but you can minimise it by using Multi-AZ deployments. First, the secondary instance is updated. Then AWS does a failover and updates the primary instance as well. This means a small outage during the failover.

DB Engine Updates

For DB engine maintenance, the updates are applied to both instances (primary and secondary) at the same time, which will cause some downtime.

More information:

Google CloudSQL

With CloudSQL you have to define a one-hour maintenance window, for example 01:00–02:00, and during that hour they can restart the instances at any time. It is not guaranteed the update will be done within that hour. The primary and the secondary have the same maintenance window. Read replicas do not have any maintenance window; they can be stopped at any time.

CloudSQL does not differentiate between OS and DB engine updates, or between required and available upgrades. Because the failover replica has the same maintenance window, any upgrade might cause a database outage in that time frame.

More information:

Microsoft Azure

Azure provides a service called Azure Database for MySQL. I read the documentation and did some research trying to find anything regarding the maintenance window, but I did not find anything.

I spun up an instance in Azure to see if there are any relevant settings, but I did not find any, so at this point I do not know how Azure does OS or DB maintenance, or how that impacts production traffic.

If someone knows where I can find this information in the documentation, please let me know.

Conclusion

                                       AWS        CloudSQL   Azure
Maintenance Window                     30m        1h         Unknown
Maintenance Window for Read Replicas   No         No         Unknown
Separate OS and DB updates             Yes        No         Unknown
Outage during update                   Possible   Possible   Unknown
Postpone an update                     Possible   No         Unknown
Different priority for updates         Yes        No         Unknown


While I do not intend to prefer or promote any of the providers, for this specific question AWS offers the most options and controls for how we want to deal with maintenance.

Photo by Caitlin Oriel on Unsplash

Effective Monitoring of MySQL With SCUMM Dashboards - Part 3

We discussed the MySQL-related dashboards in our previous blogs. We highlighted the things a DBA can benefit from by studying the graphs, especially when performing daily routines from diagnostics and metric reporting to capacity planning. In this blog, we will discuss the InnoDB Metrics and MySQL Performance Schema dashboards, which are very important especially for monitoring InnoDB transactions, disk/CPU/memory I/O, optimizing your queries, or performance tuning the server.

This blog touches upon the deep topic of performance, considering that InnoDB alone would require extensive coverage if we tackled its internals. The Performance Schema is also extensive, as it covers the kernel and core parts of MySQL and its storage engines.

Let’s begin walking through the graphs.

MySQL InnoDB Metrics

This dashboard is great for any MySQL DBA or ops person, as it offers a very good view into the InnoDB storage engine. There are certain graphs here that the user has to enable explicitly, because the relevant variables are not set correctly in every MySQL configuration.

  • Innodb Checkpoint Age

    According to the manual, checkpointing is defined as follows: “As changes are made to data pages that are cached in the buffer pool, those changes are written to the data files sometime later, a process known as flushing. The checkpoint is a record of the latest changes (represented by an LSN value) that have been successfully written to the data files”. This graph is useful when you would like to determine how well your server is checkpointing data to disk. It can be a good reference if your transaction log (redo log, or ib_logfile0) is too large. This graph is also a good indicator if you need to adjust variables such as innodb_log_file_size, innodb_log_buffer_size, innodb_max_dirty_pages_pct, or innodb_adaptive_flushing_method. The closer the checkpoint age is to the max checkpoint age, the more filled the logs are, and InnoDB will do more I/O in order to maintain some free space in the logs. The checkpointing mechanism differs in subtle details between Percona XtraDB-based flavours, MariaDB and Oracle’s version; you can also find differences in its implementation between MySQL versions.

  • InnoDB Transactions

    Whenever there’s a large transaction on-going in your MySQL server, this graph is a good reference. It will count the transactions that were created at a specific time, and the history length (or is actually the history list length found in SHOW ENGINE INNODB STATUS) is the number of pages in the undo log. The trends you’ll see here is a good resource to check if it could mean, for example, that purge is delayed due to a very high insert rate of reloading the data or due to a long-running transaction, or if purge simply can't keep up due to a high disk I/O in the volume where your $DATADIR resides.

  • Innodb Row Operations

    For certain DBA tasks, you might want to determine the number of rows deleted, inserted, read, and updated. This graph is what you can use to check these.

  • Innodb Row Lock Time

    This graph is a good resource to look at when you notice that your application is encountering lots of occurrences of “Lock wait timeout exceeded; try restarting transaction”. It can also help you determine whether you have queries that handle locks badly, and it is a good reference when optimizing queries that involve row locking. If the wait time is too high, check the slow query log or run pt-query-digest to see which suspect queries are causing that bloat in the graph.

  • InnoDB I/O

    Whenever you want to determine the amount of InnoDB data reads, disk flushes, writes, and log writes, this graph has what you need to look at. You can use it to determine if your InnoDB variables are well tuned for your specific requirements. For example, if you have a battery-backed write cache but are not getting its optimum performance, you can rely on this graph to determine whether your fsyncs() are higher than expected; then changing the innodb_flush_method variable to O_DSYNC may resolve the issue.

  • InnoDB Log File Usage Hourly

    This graph shows only the number of bytes written to the InnoDB redo log files and the growth of your InnoDB log files based on the 24-hour time range of the current date.

  • InnoDB Logging Performance

    This graph is closely related to the InnoDB Log File Usage Hourly graph. Use it whenever you need to determine how large your innodb_log_file_size needs to be. You can determine the number of bytes written to the InnoDB redo log files and how efficiently your MySQL flushes data from memory to disk. Whenever your redo log space is used up too quickly, it indicates that you need to increase innodb_log_file_size, and this graph will tell you so. However, to dig deeper into how much you need for your innodb_log_file_size, it may make more sense to check the LSN (Log Sequence Number) in SHOW ENGINE INNODB STATUS. Percona has a good blog related to this, which is a good source to look at.

  • InnoDB Deadlocks

    In situations where your application clients are often experiencing deadlocks, or you have to look at how many deadlocks your MySQL server is experiencing, this graph serves the purpose. Deadlocks indicate poor SQL design that leads to transactions racing each other for the same locks.

  • Index Condition Pushdown

    A little word of caution when looking at this graph. First, you have to ensure that your MySQL global variable innodb_monitor_enable is set to the correct value, which is module_icp. Otherwise, you’ll see “No Data Points” as shown below:

    The graph’s purpose, if has data points defined as what I have in the sample outputs, will provide a DBA with an overlook of how well your queries are benefiting with Index Condition Pushdown or ICP for short. ICP is great feature in MySQL that offers optimization to your queries. Instead of MySQL reading the full rows filtered in your WHERE queries upon retrieval, it will add more checks after your secondary indexes. This adds more granularity and saves time, otherwise the engine has to read the full-table rows instead when it is based only on the filtered index and no ICP is used. This avoids reading the full rows corresponding to your index tuples that matches your secondary indexes.

    Let me elaborate a bit about this graph. Let’s say I have a table:

    mysql> show create table a\G
    *************************** 1. row ***************************
           Table: a
    Create Table: CREATE TABLE `a` (
      `id` int(11) NOT NULL,
      `age` int(11) NOT NULL,
      KEY `id` (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    1 row in set (0.00 sec)

    And has some small data:

    mysql> select * from a;
    +----+-----+
    | id | age |
    +----+-----+
    |  1 |   1 |
    |  2 |   1 |
    |  3 |   1 |
    |  3 |  41 |
    |  4 |  41 |
    |  5 |   4 |
    |  4 |   4 |
    |  4 |   4 |
    +----+-----+
    8 rows in set (0.00 sec)

    When ICP is enabled, the query plan is more efficient:

    mysql> explain extended select * from a where id>2 and id<4 and age=41;
    +----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+------------------------------------+
    | id | select_type | table | partitions | type  | possible_keys | key  | key_len | ref  | rows | filtered | Extra                              |
    +----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+------------------------------------+
    |  1 | SIMPLE      | a     | NULL       | range | id            | id   | 4       | NULL |    2 |    12.50 | Using index condition; Using where |
    +----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+------------------------------------+
    1 row in set, 2 warnings (0.00 sec)

    than without ICP:

    mysql> set optimizer_switch='index_condition_pushdown=off';
    Query OK, 0 rows affected (0.00 sec)

    mysql> explain extended select * from a where id>2 and id<4 and age=41;
    +----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+-------------+
    | id | select_type | table | partitions | type  | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
    +----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+-------------+
    |  1 | SIMPLE      | a     | NULL       | range | id            | id   | 4       | NULL |    2 |    12.50 | Using where |
    +----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+-------------+
    1 row in set, 2 warnings (0.00 sec)

    This is a simple example of ICP, and how this graph can benefit a DBA.
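The saving that ICP provides can be sketched outside MySQL. The toy Python model below is not MySQL internals; it just mimics the idea that with ICP the storage engine rejects index entries that fail the pushed-down condition before fetching full rows, while without ICP every index entry in range triggers a row fetch. The data mirrors the example table above.

```python
# A toy sketch (NOT MySQL internals) of what Index Condition Pushdown saves.
# Assumption: a secondary index stores (key, row_id); fetching the full row
# stands in for the expensive disk access.

table = {  # row_id -> full row
    1: {"id": 1, "age": 1}, 2: {"id": 2, "age": 1}, 3: {"id": 3, "age": 1},
    4: {"id": 3, "age": 41}, 5: {"id": 4, "age": 41}, 6: {"id": 5, "age": 4},
    7: {"id": 4, "age": 4}, 8: {"id": 4, "age": 4},
}
index = sorted((row["id"], rid) for rid, row in table.items())  # index on `id`

def scan(icp_enabled):
    row_fetches = 0
    result = []
    for key, rid in index:
        if icp_enabled and not (2 < key < 4):
            continue                    # condition rejected at the index level
        row = table[rid]                # full-row fetch
        row_fetches += 1
        if 2 < row["id"] < 4 and row["age"] == 41:  # server-level WHERE
            result.append(row)
    return result, row_fetches

with_icp = scan(True)
without_icp = scan(False)
assert with_icp[0] == without_icp[0]    # same result either way
print(with_icp[1], without_icp[1])      # ICP touches far fewer full rows
```

Both scans return the same row; only the number of full-row fetches differs, which is exactly what the Using index condition plan buys you.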

  • InnoDB Buffer Pool Content

    When working with MySQL's InnoDB engine, the buffer pool settings (innodb_buffer_pool*) are among the most common values you have to tune to optimize MySQL performance. Specifically for buffer pool content, this graph displays the trend of dirty pages against the total buffer pool content, where the total includes clean pages in addition to dirty pages. It serves its purpose when you need to determine how efficiently MySQL is handling the buffer pool.

  • InnoDB Buffer Pool Pages

    This graph is helpful when you want to check how efficiently MySQL is using your InnoDB buffer pool. For instance, if your daily traffic never fills the assigned innodb_buffer_pool_size, this could indicate that certain parts of an application aren't used or do not serve any purpose, or that innodb_buffer_pool_size is set too high, in which case it might be good to lower the value and reclaim memory.

  • InnoDB Buffer Pool I/O

    Use this graph when you have to check the number of pages created and written by operations on InnoDB tables, or the number of page reads into the InnoDB buffer pool caused by those operations.

  • InnoDB Buffer Pool Requests

    When you want to determine how efficiently your queries access the InnoDB buffer pool, this graph serves the purpose. It shows trends in how your MySQL server performs when the InnoDB engine has to access the disk frequently (an indication that the buffer pool has not warmed up yet), and how frequently the buffer pool handles read and write requests.
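The read-side efficiency this graph visualizes can also be checked by hand from two SHOW GLOBAL STATUS counters, using the standard hit-ratio formula. The counter values below are invented for illustration.

```python
# Buffer pool hit ratio: the fraction of logical read requests served from
# memory rather than from disk.
#   Innodb_buffer_pool_read_requests = logical reads
#   Innodb_buffer_pool_reads         = reads that had to go to disk

def buffer_pool_hit_ratio(read_requests, disk_reads):
    if read_requests == 0:
        return 1.0
    return 1.0 - disk_reads / read_requests

status = {  # hypothetical sample, not from a real server
    "Innodb_buffer_pool_read_requests": 1_000_000,
    "Innodb_buffer_pool_reads": 12_000,
}
ratio = buffer_pool_hit_ratio(
    status["Innodb_buffer_pool_read_requests"],
    status["Innodb_buffer_pool_reads"],
)
print(f"hit ratio: {ratio:.3f}")
```

A ratio that stays well below roughly 0.99 on a warmed-up server usually suggests the buffer pool is cold or undersized for the working set.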

  • InnoDB Read-Ahead

    When you have the variable innodb_random_read_ahead set to ON, add this graph as a valuable trend to watch as part of your DBA routine. It shows how the read-ahead background thread prefetches pages into the buffer pool, how many of those pages are subsequently evicted without ever being accessed by queries, and how InnoDB initiates random read-ahead when a query scans a large portion of a table in random order.

  • InnoDB Change Buffer

    When you have Percona Server 5.7 running, this graph is useful for monitoring how well InnoDB has allocated change buffering. The buffered changes include the inserts, updates, and deletes specified by the innodb_change_buffering variable. Change buffering helps speed up queries by avoiding the substantial random-access I/O that would otherwise be required to read secondary index pages in from disk.

  • InnoDB Change Buffer Activity

    This is related to the InnoDB Change Buffer graph, but dissects the information into more usable data points, providing more detail on how InnoDB handles change buffering. It is useful for the particular DBA task of determining whether innodb_change_buffer_max_size is set too high, since the change buffer shares memory with the InnoDB buffer pool, reducing the memory available to cache data pages. You might consider disabling change buffering if the working set almost fits in the buffer pool, or if your tables have relatively few secondary indexes. Remember that change buffering does not impose extra overhead, because it only applies to pages that are not in the buffer pool.

    This graph is also useful for judging how effective merges are when you benchmark your application against particular scenarios. Say you do bulk inserts: you would set innodb_change_buffering=inserts and verify that your buffer pool and innodb_change_buffer_max_size settings do not hurt disk I/O, especially during recovery or slow shutdown (necessary if you want to do a failover with a low downtime requirement). The graph can likewise help you evaluate scenarios where merging the change buffer may take several hours, because there are numerous secondary indexes to update and many affected rows; during that time disk I/O is increased, which can cause a significant slowdown for disk-bound queries.

MySQL Performance Schema

The MySQL Performance Schema is a complicated topic. It's a long and hard one, but I'm going to discuss only the information that is specific to the graphs we have in SCUMM. There are also certain variables you must consider and ensure are set properly: make sure you have innodb_monitor_enable = all and userstat=1 set in order to see data points in your graphs. As a note, when I use the word "event" here, it is not related to the MySQL Event Scheduler. I'm talking about specific events such as MySQL parsing a query, or MySQL reading from or writing to a relay/binary log file.
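A minimal my.cnf fragment for the two variables mentioned above might look like this (note that userstat is a Percona Server / MariaDB variable, not available in stock MySQL):

```ini
[mysqld]
# expose all InnoDB metric counters in information_schema.INNODB_METRICS
innodb_monitor_enable = all
# per-user/per-table statistics; Percona Server and MariaDB only
userstat = 1
```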

Let’s proceed with the graphs then.

  • Performance Schema File IO (Events)

    This graph fetches data points related to events in MySQL whose instruments create multiple instances of the instrumented object (e.g. binary log reads or InnoDB data file reads). Each row summarizes events for a given event name. For example, if there is an instrument for a mutex that is created for each connection, there could be as many instances of this instrumented event as there are connections, and the summary row for the instrument summarizes over all these instances. You can check the Performance Schema Summary Tables section in the MySQL manual for more info.

  • Performance Schema File IO (Load)

    This graph is the same as the “Performance Schema File IO (Events)” graph, except that it's instrumented based on the load.

  • Performance Schema File IO (Bytes)

    This graph is the same as the “Performance Schema File IO (Events)” graph, except that it's instrumented based on the size in bytes. For example, how much time a specific event took when MySQL triggered the wait/io/file/innodb/innodb_data_file event.

  • Performance Schema Waits (Events)

    This graph shows the data for all waits spent on each specific event. You can check the Wait Event Summary Tables section in the manual for more info.

  • Performance Schema Waits (Load)

    Same as the “Performance Schema Waits (Events)” graph but this time it shows the trends for the load.

  • Index Access Operations (Load)

    This graph is an aggregation of all the table index I/O wait events grouped by index(es) of a table, as generated by the wait/io/table/sql/handler instrument. You can check the MySQL manual about the Performance Schema table table_io_waits_summary_by_index_usage for more info.

  • Table Access Operations (Load)

    Same as the “Index Access Operations (Load)” graph, this is an aggregation of all table I/O wait events grouped by table, as generated by the wait/io/table/sql/handler instrument. This is very useful to DBAs. For example, you might want to trace how long it takes to access (fetch) or change (insert, delete, update) a specific table. You can check the MySQL manual about the Performance Schema table table_io_waits_summary_by_table for more info.
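The roll-up these two graphs rely on is just an aggregation of raw wait events per table. A miniature Python model of what a summary table like table_io_waits_summary_by_table represents (event rows are invented):

```python
# Miniature of per-table I/O wait summarization: raw wait/io/table/sql/handler
# events rolled up into one summary row per table. Event data is hypothetical.

from collections import defaultdict

events = [  # (table, operation, wait_time) - invented sample rows
    ("users", "fetch", 120), ("users", "update", 300),
    ("orders", "fetch", 80), ("users", "fetch", 60),
    ("orders", "insert", 200),
]

summary = defaultdict(lambda: {"count_star": 0, "sum_timer_wait": 0})
for table, op, wait in events:
    summary[table]["count_star"] += 1        # like COUNT_STAR
    summary[table]["sum_timer_wait"] += wait # like SUM_TIMER_WAIT

print(dict(summary))
```

The real summary tables break the totals down further by operation (fetch, insert, update, delete), but the grouping principle is the same.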

  • Performance Schema SQL & External Locks (Events)


    This graph is an aggregation (a count of how many times each occurred) of all table lock wait events, as generated by the wait/lock/table/sql/handler instrument, grouped by table. The SQL locks here are the internal locks: read normal, read with shared locks, read high priority, read no insert, write allow write, write concurrent insert, write delayed, write low priority, and write normal. The external locks are read external and write external. In any DBA task, this is very useful if you have to trace and investigate locks on a particular table regardless of their type. You can check the table table_lock_waits_summary_by_table for more info.

  • Performance Schema SQL and External Locks (Seconds)

    Same as the “Performance Schema SQL & External Locks (Events)” graph, but reported in seconds. If you want to look at your table locks based on how many seconds the locks were held, this graph is a good resource.


The InnoDB Metrics and the MySQL Performance Schema are some of the most in-depth and complicated parts of the MySQL domain, especially when there is no visualization to assist the interpretation. Manual tracing and investigation can therefore take considerable time and hard work. SCUMM dashboards offer a very efficient and feasible way to handle this and lower the extra load of routine DBA tasks.

In this blog, you learnt how to use the dashboards for InnoDB and Performance Schema to improve database performance. These dashboards can make you more efficient at analyzing performance.


GH-OST for MySQL Schema Change.

Schema changes are among the crucial tasks in MySQL, especially on huge tables, and they can cause locks.

What is gh-ost?

gh-ost is a triggerless online schema change tool for MySQL by GitHub Engineering. It produces a light workload on the master during schema changes. We need online schema change to alter a table without downtime (locking); gh-ost is an alternative to pt-online-schema-change, the most widely used tool for making such changes.

Why we have to use gh-ost?

pt-online-schema-change uses triggers (AFTER INSERT, AFTER UPDATE, AFTER DELETE) for migrating data from the existing table to the new table (shadow table), propagating live changes on the original table to the shadow table. It uses a synchronous approach: all changes are applied to the temporary table immediately.

fb-osc (Facebook's online schema change) uses an asynchronous approach: all changes are added to a change log table and later applied to the shadow table.

The major difference between these online schema change tools and gh-ost is that gh-ost is triggerless. Before looking at gh-ost, we need to know something about triggers and the disadvantages of trigger-based online schema change.


A trigger is a stored routine that is activated when a particular event occurs on a table. For example, when a row is inserted with an INSERT statement, the insert trigger fires for the newly inserted row. Triggers generally contain a set of queries. They run in the same transaction space as the query that manipulates the table, which ensures atomicity between the original table and the shadow table (new table).

The query and its triggers run concurrently and compete for resources, which may cause locks in production. Generally, triggers are used to record ongoing changes to the original table. pt-osc pauses execution when the server is under high load (--critical-load) or replication lag (--max-lag) is very high, but it never cancels the triggers, in order to maintain synchronization between the original table and the duplicate table.


gh-ost does not use triggers; it is triggerless.

How does gh-ost track live changes in the original table?

gh-ost uses an asynchronous approach similar to fb-osc, but it uses neither triggers nor a change log table. Instead of a new change table, it uses the binary logs. To use gh-ost for online schema migration, we must enable a few variables on the MySQL server.

1) log-bin=mysql-bin (gh-ost acts as a fake replica and pulls events from the binary logs)

2) binlog-format=row (gh-ost will not work if this variable is set to statement; we need to change it to row-based, or let gh-ost change it by using the option --switch-to-rbr)

3) log-slave-updates=on (if you are running gh-ost on a slave, this variable must be enabled so the slave writes binary logs)

These variables must be enabled because gh-ost acts as a fake replica and needs the binlogs to apply live changes to the table.

log-bin=mysql-bin
binlog-format=ROW
log-slave-updates=ON

How to install gh-ost ?


1) Download the release from its official site

[aakash@mydbopslabs12 ~]$ wget

2) Extract the file

[aakash@mydbopslabs12 ~]$ tar -xvf gh-ost-binary-linux-20180417090238.tar.gz

How it works ?

  1. Connect to the replica/master. Without the --execute option, gh-ost performs a no-op migration, similar to a dry run in pt-osc.
  2. Validate the alter statement and check privileges and the existence of the tables.
  3. Create a ghost table with the same structure as the original table.
  4. Apply the alter on the ghost table.
  5. Copy data from the original table to the ghost table.
  6. Apply live changes on the original table to the ghost table by reading DML events from the binlog.
  7. Swap the tables: ghost → original and original → old.
  8. Drop the older table.
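The copy-plus-binlog-replay core of steps 5 and 6 can be sketched in miniature. This is plain Python standing in for gh-ost's internals (nothing here is gh-ost's actual API): the table is copied in chunks while DML that arrives mid-copy is captured in a list playing the role of the binlog stream, then replayed on the ghost table before cut-over.

```python
# Toy model of asynchronous migration: chunked copy of the original table into
# the ghost table, with concurrent DML captured and replayed from a "binlog".

original = {i: {"id": i, "val": i * 10} for i in range(1, 9)}
ghost = {}
binlog = []   # events captured while the copy is running

def apply_event(event):
    op, row = event
    if op in ("insert", "update"):
        ghost[row["id"]] = dict(row)
    elif op == "delete":
        ghost.pop(row["id"], None)

# chunked copy (chunk-size=3)
for chunk_start in range(1, 9, 3):
    for i in range(chunk_start, min(chunk_start + 3, 9)):
        ghost[i] = dict(original[i])
    if chunk_start == 1:  # a concurrent write arrives mid-copy
        original[2]["val"] = 999
        binlog.append(("update", dict(original[2])))

for event in binlog:      # replay captured events on the ghost table
    apply_event(event)

assert ghost == original  # cut-over (the rename swap) would happen here
print("ghost table in sync:", len(ghost), "rows")
```

The replay step is why row-based binary logging is mandatory: each captured event must describe the affected row itself, not the statement that produced it.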

Ensure proper grants for the MySQL user:

mysql> grant ALTER, CREATE, DELETE, DROP, INDEX, INSERT, LOCK TABLES, SELECT, TRIGGER, UPDATE on *.* to 'ghost'@'';
Query OK, 0 rows affected (0.01 sec)

mysql> grant SUPER, REPLICATION SLAVE on *.* to 'ghost'@'';
Query OK, 0 rows affected (0.01 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.07 sec)

After setting these grants, we need to check binlog_format; it must be set to ROW.

[root@mydbopslabs11 vagrant]#  ./gh-ost --host=localhost --user=root --password='Aakash@007' --database=nlwiktionary_nousers --table=text --alter="ADD COLUMN new3 INT NOT NULL DEFAULT '0'" --chunk-size=2000 --max-load=Threads_connected=20

2018-05-27 11:51:50 FATAL You must be using ROW binlog format. I can switch it for you, provided --switch-to-rbr and that localhost:3306 doesn't have replicas

In the above command, we have given the user, password, database, and table to alter. We also set the chunk size for the background table copying process.

After setting binlog_format=ROW (or using the --switch-to-rbr option) we can proceed to the next step. In verbose mode, gh-ost gives detailed output. Without the --execute flag, it performs a dry run, checking all the preliminary validations such as user privileges, binary logs, and connections.

[root@mydbopslabs11 vagrant]# ./gh-ost --host=localhost --user=root --password='Secret' --database=osm --table=users --alter="ADD COLUMN new3 INT NOT NULL DEFAULT '0'" --chunk-size=2000 --max-load=Threads_connected=20 --verbose
2018-05-27 11:59:17 INFO starting gh-ost 1.0.45
2018-05-27 11:59:17 INFO Migrating `osm`.`users`
2018-05-27 11:59:17 INFO connection validated on localhost:3306
2018-05-27 11:59:17 INFO User has ALL privileges
2018-05-27 11:59:17 INFO binary logs validated on localhost:3306
2018-05-27 11:59:17 INFO Restarting replication on localhost:3306 to make sure binlog settings apply to replication thread
2018-05-27 11:59:17 INFO Inspector initiated on mydbopslabs11:3306, version 5.7.22-log
2018-05-27 11:59:17 INFO Table found. Engine=InnoDB
2018-05-27 11:59:17 INFO Estimated number of rows via EXPLAIN: 3941
2018-05-27 11:59:17 INFO Recursively searching for replication master
2018-05-27 11:59:17 INFO Master found to be mydbopslabs11:3306
2018-05-27 11:59:17 INFO Tearing down inspector
2018-05-27 11:59:17 FATAL It seems like this migration attempt to run directly on master. Preferably it would be executed on a replica (and this reduces load from the master). To proceed please provide --allow-on-master. Inspector config=localhost:3306, user=root, applier config=localhost:3306, user=root

If we run gh-ost on the master, we have to use the flag --allow-on-master to execute on the master itself. This feature is used on Aurora RDS too.

# Migrating `osm`.`users`; Ghost table is `osm`.`_users_gho`
# Migrating mydbopslabs11:3306; inspecting mydbopslabs11:3306; executing on mydbopslabs11
# Migration started at Sun May 27 12:37:54 +0000 2018
# chunk-size: 2000; max-lag-millis: 1500ms; dml-batch-size: 10; max-load: Threads_connected=20; critical-load: ; nice-ratio: 0.000000

From the above output, we can see which table we are going to alter (users) and which table is used as the temporary table (_users_gho). gh-ost also creates another table with the suffix _ghc, used to store migration logs and status. The maximum acceptable replication lag is 1.5s.

Copy: 3991/3991 100.0%; Applied: 0; Backlog: 0/1000; Time: 3s(total), 1s(copy); streamer: mydbopslabs11-bin.000003:637475; State: migrating; ETA: due
Copy: 3991/3991 100.0%; Applied: 0; Backlog: 0/1000; Time: 3s(total), 1s(copy); streamer: mydbopslabs11-bin.000003:638321; State: migrating; ETA: due
Copy: 3991/3991 100.0%; Applied: 0; Backlog: 0/1000; Time: 4s(total), 1s(copy); streamer: mydbopslabs11-bin.000003:642607; State: migrating; ETA: due

From this we can see how many rows have been copied, how many events have been applied from the binary logs, and the estimated time to complete the table copy.
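The ETA in those progress lines is essentially rows remaining divided by the observed copy rate. A back-of-the-envelope reproduction (gh-ost's real estimator is its own code; the numbers here are illustrative):

```python
# Rough arithmetic behind a progress line such as
#   "Copy: 2000/3991 50.1%; Time: 2s(total), 1s(copy); ETA: ..."

def eta_seconds(copied, total, copy_seconds):
    if copied == 0:
        return float("inf")
    rate = copied / copy_seconds          # rows copied per second so far
    return (total - copied) / rate

remaining = eta_seconds(copied=2000, total=3991, copy_seconds=1.0)
print(f"ETA: ~{remaining:.2f}s")
```

Once the copy reaches 100% and the backlog of binlog events is drained, the ETA reads "due" and gh-ost is ready for cut-over.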

Read binary logs from the slave → analyze tables on the slave → apply changes on the master

Lock the original table → after sync, rename the original table to _old → rename the ghost table to the original name → drop the old table

  • No triggers and no additional tables are created
  • Configuration can be changed at run time
  • No support for foreign keys
  • No support for tables having triggers

Image Courtesy: Photo by David Menidrey on Unsplash