Planet MySQL

Architecting reliable backup and recovery strategy

I have been managing multiple databases, mostly Microsoft SQL Server and MySQL, both on-premises and in the cloud. We have faced many challenging issues: records deleted from a user table, a corrupted backup file, an incompatible backup file, backup files that got deleted, backup storage running full, backups running for a long time, and so on. When you face these issues for the first time, it is surprising to see new kinds of issues every day, and if you are not good at documentation, repetitive issues will keep occurring and you keep fixing symptoms rather than root causes. If you are facing the same challenges, then you need to focus on your backup and recovery strategy.

A well-designed backup and recovery strategy maximizes data availability and minimizes data loss while meeting business requirements. In this post, we will discuss the following topics:
  1. Recovery Time Objective (RTO)
  2. Recovery Point Objective (RPO)
  3. Business Strategy
  4. Backup
  • Backup Type
  • Backup Frequency
  • Backup Device
  • Retention and Archival plans
  5. Restore
  • Backup Availability
  • Restoration Access
  • Documentation
  6. Testing

    Recovery Time Objective (RTO): Time taken to recover the data

    Recovery Time Objective defines how long an interruption to data access can last before the business ceases to be viable or capable of working successfully.

    Best Practice:

    A backup and restoration plan that involves quick recovery with a small data loss might be more acceptable to the business than a plan that reduces data loss but takes much longer to implement.

    Deciding the optimal RTO might involve the following:
    •  Identifying the correct backup media
    •  Identifying the authorized persons to perform restore
    •  Identifying documentation related to the restore

    The RTO for a stock exchange system might be 1 minute, which means that in case of any failure of the database system, the backup should be restored and the system brought back online within a minute.
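To make the RTO discussion concrete, here is a minimal sketch (not from the original post; the 30 GB backup size and 100 MB/s restore throughput are assumed illustrative numbers) of estimating whether a restore can fit inside a target RTO:

```python
def estimated_restore_seconds(backup_size_gb: float, restore_mb_per_sec: float) -> float:
    """Estimate wall-clock restore time from backup size and restore throughput."""
    return backup_size_gb * 1024 / restore_mb_per_sec

def meets_rto(backup_size_gb: float, restore_mb_per_sec: float, rto_seconds: float) -> bool:
    """True if the estimated restore time fits inside the RTO."""
    return estimated_restore_seconds(backup_size_gb, restore_mb_per_sec) <= rto_seconds

# A 30 GB backup restored at 100 MB/s takes about 307 seconds,
# far beyond a 60-second RTO, but comfortably inside a 10-minute one.
print(meets_rto(30, 100, 60))   # False
print(meets_rto(30, 100, 600))  # True
```

This kind of back-of-the-envelope arithmetic is often enough to show that a full-backup-only plan cannot meet an aggressive RTO.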

    Recovery Point Objective (RPO): The amount of data loss

    Recovery Point Objective defines how much data loss is acceptable to the business.

    Best Practice:

    The backup plan should be able to recover all committed data without any loss; it may be acceptable to lose the data that is in flight at the time of failure.


    The RPO for a banking system might be 1 second, which means that in case of any failure, the bank can lose at most 1 second's worth of data.

    Business Strategy:

    The backup and recovery plan should be in sync with business requirements. As a best practice, it should be communicated to business stakeholders, and the expectations of business users should be managed in line with the agreed strategy.

    Each database must be categorized based on its importance to the core functions of the organization, and the RPO and RTO for each database must be documented well in advance and communicated to all stakeholders.


    In certain businesses, the Sales database is more critical than the Marketing database and needs to be restored with higher priority in case of failure. The RPO and RTO for the Sales and Marketing databases will be defined accordingly.

    The business requirements will determine all aspects of the backup strategy, including:
    •          How frequently do backups need to occur?

    •          How much data is to be backed up each time?

    •          What type of media will the backups be held on?

    •          What are the retention and archival plans for the media?
    Backup Type:

    The backup and recovery plan should clearly define the backup type required for different scenarios. A full backup might be appropriate in certain cases, and a partial backup will be appropriate in others.

    For example, a Product database may have a huge number of records, and backup and restoration of a huge backup is time-consuming. When you have an RTO of 1 minute, it is not the right strategy to rely on a full backup that needs 1 hour of restoration time. Similarly, when the RPO is defined as 5 seconds, we cannot run a full backup every 5 seconds.

    The backup and recovery strategy should address these questions. It should choose a combination of different backup types to achieve the defined RPO and RTO metrics.

    Generally, three types of backups are used:
    •  Full backup.
    •  Differential backup. 
    •  Log backup.
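Combining these types means a restore is a chain, not a single file: the latest full backup before the target time, the latest differential taken after that full, and every log backup after the differential. Here is a minimal sketch of that selection logic (timestamps are illustrative hours, not from the post):

```python
def restore_chain(fulls, diffs, logs, target):
    """Pick the backups needed to recover to `target`.
    Each argument is a list of backup timestamps (e.g. hours since epoch)."""
    full = max(t for t in fulls if t <= target)                 # latest full before target
    after_full = [t for t in diffs if full < t <= target]       # diffs taken since that full
    diff = max(after_full) if after_full else None              # latest applicable differential
    start = diff if diff is not None else full
    chain_logs = [t for t in logs if start < t <= target]       # logs to replay after it
    return full, diff, chain_logs

# Full at hour 0, differentials at hours 24 and 48, hourly log backups; recover to hour 50.
full, diff, logs = restore_chain([0], [24, 48], list(range(51)), 50)
print(full, diff, logs)  # 0 48 [49, 50]
```

The shorter this chain, the shorter the restore and the easier the RTO is to meet.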
    Backup Frequency:

    In most places, backups are implemented with the following frequency:
    •  Full backup – every week
    •  Differential backup – every day
    •  Log backup – every hour

    Review the above schedule and ask whether it will satisfy the business requirement. There are no right or wrong answers: the above frequency might satisfy an RPO of 1 hour, and depending on the size of the database and where the backup is stored, the RTO might be achievable. There is no one-size-fits-all approach here; you have to understand the business requirement and define the RTO and RPO.

    When the business demands no data loss, a log backup every hour will not help: in case of any failure, with the above backup plan, you may lose up to an hour's worth of data. A well-designed backup and restore plan will address this issue, and it should be documented and shared with business stakeholders.
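The worst-case loss follows directly from the log backup interval: a failure just before the next log backup fires loses everything since the previous one. A tiny sketch of that check (intervals are illustrative):

```python
def worst_case_data_loss_minutes(log_backup_interval_minutes: float) -> float:
    """Worst case: failure happens just before the next log backup fires,
    so everything since the previous log backup is lost."""
    return log_backup_interval_minutes

def plan_meets_rpo(log_backup_interval_minutes: float, rpo_minutes: float) -> bool:
    """True if the schedule's worst-case loss fits inside the RPO."""
    return worst_case_data_loss_minutes(log_backup_interval_minutes) <= rpo_minutes

print(plan_meets_rpo(60, 60))  # True  -- hourly log backups satisfy a 1-hour RPO
print(plan_meets_rpo(60, 5))   # False -- but not a 5-minute RPO
```

For a stricter RPO, the log backup interval (or a continuous log-shipping mechanism) must shrink accordingly.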

    Backup Device:

    This is one of the important factors: choosing the wrong backup device will negatively impact the RTO and the business requirement. Generally, the backup device can be one, or a combination, of the following:
    •  Disk Files – (SAN, NAS, etc.)
    •  Cloud Storage (Azure Blob Storage, Amazon Cloud Storage, etc.)
    Best Practice:

    Make sure the backup device is easily accessible and secured. There are instances where a SAN administrator mistakenly deleted backup files while freeing disk space, and others where the DBA does not have access to the backup storage device and the person who has access is on vacation. In an enterprise setup, different things are usually managed by different persons; in a small or start-up setup, you might have access to everything. Plan your backup and recovery strategy accordingly: where to write the backup, and how to access it when there is a need.

    Also, there are compliance policies that demand that backups be secured, for example by using encryption.

    Archival and Retention Policy:

    The archival and retention policy should be defined considering legal and compliance requirements. Depending on your region, compliance requirements may impact how you store backups and how long you retain them.
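Once a retention window is agreed, pruning becomes a mechanical decision. A minimal sketch (the dates and 14-day window are illustrative assumptions, not from the post) of selecting which backup copies have aged out:

```python
from datetime import date, timedelta

def expired_backups(backup_dates, retention_days, today):
    """Return the backups that have aged past the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [d for d in backup_dates if d < cutoff]

backups = [date(2019, 2, 1), date(2019, 2, 15), date(2019, 2, 25)]
print(expired_backups(backups, 14, date(2019, 3, 1)))
# [datetime.date(2019, 2, 1)] -- only the copy older than 14 days is eligible for purge
```

In practice the same logic would drive archival to cheaper storage rather than outright deletion, if compliance requires longer retention.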


    Backup Availability:
    There are scenarios where a backup file gets corrupted or is mistakenly deleted from storage; if you have only one backup copy, this will lead to data loss. The backup plan should account for this and maintain more than one copy of the backup in accessible locations.
    Best Practice:
    Generally, it is good practice to have at least three copies of a backup: keep one copy on the local machine or an attached drive, another copy on SAN/NAS storage, and another copy in a remote location, such as the cloud or another geography.

    Keeping a local copy helps reduce the time needed to transfer the file to the local machine, which helps in achieving the RTO.
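Multiple copies only help if they are intact; a silently truncated or corrupted copy is discovered at the worst possible moment. A sketch (file names and contents are illustrative) of verifying that all copies of a backup carry the same SHA-256 checksum:

```python
import hashlib
import os
import tempfile
from pathlib import Path

def sha256_of(path) -> str:
    """Stream a file through SHA-256 (backups may be larger than memory)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def copies_match(paths) -> bool:
    """True if every backup copy has the same checksum, i.e. none is corrupted or truncated."""
    return len({sha256_of(p) for p in paths}) == 1

# Demo with throwaway files standing in for the local, NAS, and cloud copies:
d = tempfile.mkdtemp()
local, nas, cloud = (os.path.join(d, n) for n in ("local.bak", "nas.bak", "cloud.bak"))
for p in (local, nas):
    Path(p).write_bytes(b"backup-bytes")
Path(cloud).write_bytes(b"trunc")          # simulate a damaged remote copy
print(copies_match([local, nas]))          # True
print(copies_match([local, nas, cloud]))   # False
```

Running such a comparison on a schedule catches corruption while there is still a good copy to re-replicate from.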

    Restoration Access:

    In a large enterprise setup, not everyone may have access to restore or recover data. The required permissions include access to the backup storage, access to perform the restoration, and security access if encryption is in place. It is advisable to capture all these permissions in the backup and recovery strategy and share it with business and technical stakeholders.

    In case of emergency, everyone then knows their role and understands the sequence in which to execute their tasks. It removes confusion and keeps the team calm.


    Documentation:

    I have interviewed a lot of people for database administration roles, and I realized that documentation is the least important task in their day-to-day job. For a DBA, most tasks are repetitive, and we use a lot of scripts and automation. Even something we developed or wrote a few days back can be very difficult to understand at a later point in time. When you are in an emergency, searching on Google or using others' scripts will not help you; recovering the data looks simple, but it is not. It is mandatory to document each and every step and all business requirements in a clear manner and communicate them to all stakeholders.

    Also, it is important that both business and technical stakeholders agree on how quickly data can be retrieved and how much data can be lost, and that they sign off on the documentation.


    Testing:

    As John Ruskin said, “Quality is never an accident; it is always the result of intelligent effort.” It does not matter how perfectly you designed your backup and recovery strategy: unless you have tested it many times and documented all the challenges and resolutions, it is going to surprise you at the time of an emergency.

    As a best practice, restore backups to testing and staging environments frequently, at a defined interval, and to the production machine at longer intervals. There are instances where a backup copy works on one machine and fails on another machine with the same database engine and configuration. Make a habit of testing, testing, and testing again at a defined interval.

    A well-designed backup and recovery strategy will ensure data availability and minimize data loss. A documented strategy will act as a guiding light in times of emergency. It will remove confusion and dependency on individuals. A well-tested and orchestrated strategy ensures business continuity and guarantees your sleep. :)

    I hope this helps in planning your strategy. Please share your thoughts on this article and let me know if I need to edit or add content. Thank you for your time. Be proactive rather than reactive!

    Impact of “tmpdir” change in MySQL replication

    Recently we encountered a strange issue with replication and a temp directory (tmpdir) change while working for one major client.

    All the servers involved were running the Percona flavor of MySQL, version 5.6.38, hosted on Debian 8 (Jessie).

    The MySQL architecture setup is as follows: one master with 5 direct slaves under it.

    In this blog, we will take you through the issue we faced and how we ultimately fixed it.

    Client Request:

    There was a simple request from our client to add a column and an index to a 16 GB production table. Since the table had foreign keys, to avoid complications we decided to go with online DDL instead of pt-online-schema-change.

    When we started the alter, it failed due to insufficient space in tmpdir. By default, MySQL uses /tmp for temporary table creation, sorting, and other temporary operations. Since we had only 5.7 GB left on / on the master, the alter failed as below.

    mysql> alter table payment.transactions add column wallet_aggregate_id bigint(20) default null,add index idx_wallet_aggregate_id(wallet_aggregate_id),algorithm=inplace,lock=none;
    ERROR 1878 (HY000): Temporary file write failure.

    root@payment-mydbops:/home/waseem.mydbops# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/vda1       9.8G  3.6G  5.7G  39% /

    mysql> select @@tmpdir;
    +----------+
    | @@tmpdir |
    +----------+
    | /tmp     |
    +----------+
    1 row in set (0.00 sec)

    Change of MySQL tmpdir:

    We had to explain the situation to the stakeholders. With the approval of the client, we changed tmpdir from “/tmp” to the “/var/tmp/mysql” partition with a MySQL restart; the alter then completed successfully on the master.

    root@payment-mydbops:/home/waseem.mydbops# df -h /var/tmp/mysql
    Filesystem                    Size  Used Avail Use% Mounted on
    /dev/mapper/vgmysql-mysqltmp   25G   33M   25G   1% /var/tmp/mysql

    mysql> select @@tmpdir;
    +----------------+
    | @@tmpdir       |
    +----------------+
    | /var/tmp/mysql |
    +----------------+
    1 row in set (0.00 sec)

    mysql> alter table transactions add column aggregate_id bigint(20) default null,add index idx_aggregate_id(aggregate_id),algorithm=inplace,lock=none;
    Query OK, 0 rows affected (48 min 1.57 sec)
    Records: 0  Duplicates: 0  Warnings: 0

    Raise of Replication Issue:

    Replication went down on all 5 slaves connected to the master because of the alter, since all the slaves were using “/tmp” as their tmpdir, which had only 9.8 GB of space. Below is a snapshot of the replication error.

    mysql> show slave status\G
    *************************** 1. row ***************************
                  Master_Log_File: 118120-mysql-bin.000237
              Read_Master_Log_Pos: 171370534
                   Relay_Log_File: mysqld-relay-bin.000704
                    Relay_Log_Pos: 99663098
            Relay_Master_Log_File: 118120-mysql-bin.000237
                 Slave_IO_Running: Yes
                Slave_SQL_Running: No
                       Last_Errno: 1878
                       Last_Error: Error 'Temporary file write failure.' on query. Default database: ''. Query: 'alter table transactions add column aggregate_id bigint(20) default null,add index idx_aggregate_id(aggregate_id),algorithm=inplace,lock=none'
              Exec_Master_Log_Pos: 99662928
                  Relay_Log_Space: 171371048
            Seconds_Behind_Master: NULL
    1 row in set (0.00 sec)

    Since we had faced the same issue on the master, it was straightforward to change tmpdir to “/var/tmp/mysql” and restart MySQL to fix the replication issue too.
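In hindsight, a pre-flight free-space check of tmpdir on every node, master and slaves alike, would have flagged the problem before the DDL replicated. A minimal sketch (the 16 GB headroom figure is an assumption sized to the 16 GB table in this story; measure the real temp footprint for your own schema):

```python
import shutil

def tmpdir_has_room(tmpdir: str, needed_gb: float) -> bool:
    """Check free space in MySQL's tmpdir before launching a large online DDL."""
    free_gb = shutil.disk_usage(tmpdir).free / (1024 ** 3)
    return free_gb >= needed_gb

# Run this against each node's tmpdir before the ALTER; /tmp here stands in
# for whatever @@tmpdir reports on that host.
for host_tmpdir in ["/tmp"]:
    if not tmpdir_has_room(host_tmpdir, 16):
        print(f"{host_tmpdir}: not enough headroom -- repoint tmpdir before the ALTER")
```

An in-place ALTER that rebuilds indexes can need temporary space on the order of the table and index size, so checking only the master, as happened here, is not enough.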

    But the actual surprise was that the slave had dropped completely out of the replication cluster; the slave status was as below.

    mysql> show slave status\G
    Empty set (0.01 sec)

    Below is the error found in the error log:

    2019-02-20 18:04:29 547 [ERROR] Error in checking /var/lib/mysql/ repository info type of FILE.
    2019-02-20 18:04:29 547 [ERROR] Error creating master info: Error checking repositories.
    2019-02-20 18:04:29 547 [ERROR] Failed to create or recover replication info repository.
    2019-02-20 18:04:29 547 [Note] Check error log for additional messages. You will not be able to start replication until the issue is resolved and the server restarted.
    2019-02-20 18:04:29 547 [Note] Event Scheduler: Loaded 0 events

    Since we had the repository info stored in tables, as shown below, we initially ignored the error on the first line.

    mysql> show global variables like '%repository%';
    +---------------------------+-------+
    | Variable_name             | Value |
    +---------------------------+-------+
    | master_info_repository    | TABLE |
    | relay_log_info_repository | TABLE |
    +---------------------------+-------+

    Since we had the binlog coordinates handy, we decided to run the CHANGE MASTER command again to resume replication; it failed again with the below error about server-id.

    change master to master_host='',master_user='replication',master_password='dessert',master_log_file='118120-mysql-bin.000237',master_log_pos=99662928;
    ERROR 1794 (HY000): Slave is not configured or failed to initialize properly. You must at least set --server-id to enable either a master or a slave. Additional error messages can be found in the MySQL error log.

    This error made us check the config file for server-id, which was untouched.

    We went back to the error log again:

    2019-02-20 18:04:29 547 [ERROR] Error in checking /var/lib/mysql/ repository info type of FILE.
    2019-02-20 18:04:29 547 [ERROR] Error creating master info: Error checking repositories.
    2019-02-20 18:04:29 547 [ERROR] Failed to create or recover replication info repository.

    Now we checked the file under /var/lib/mysql/ even though we had our repository info in tables, and found the file to be empty. We decided to remove the file and restart MySQL, but the issue was still not fixed.

    Finally, we decided to change master_info_repository and relay_log_info_repository from “TABLE” to “FILE” format, removed the empty file from the datadir, and restarted MySQL. Below are the changes made on the slave.

    mysql> show global variables like '%repository%';
    +---------------------------+-------+
    | Variable_name             | Value |
    +---------------------------+-------+
    | master_info_repository    | FILE  |
    | relay_log_info_repository | FILE  |
    +---------------------------+-------+

    root@payments-mydbops:/home/waseem.mydbops# less /etc/mysql/conf.d/mysql.cnf | grep -i repository
    master_info_repository = FILE
    relay_log_info_repository = FILE

    Even though we had our “CHANGE MASTER TO” query ready for re-configuring, surprisingly there was no need for it: after the above change, the slave resumed without any manual intervention, as below.

    mysql> show slave status\G
    *************************** 1. row ***************************
                  Master_Log_File: 118120-mysql-bin.000237
              Read_Master_Log_Pos: 214316053
                   Relay_Log_File: mysqld-relay-bin.000702
                    Relay_Log_Pos: 290
            Relay_Master_Log_File: 118120-mysql-bin.000237
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
            Seconds_Behind_Master: 42840
    1 row in set (0.00 sec)

    Conclusion:

    The above issue was quite strange. We are still trying to reproduce it on other versions of MySQL, such as 5.6, 5.7, and 8.0.

    We resolved the issue, but we are still trying to understand more about it. We will keep everyone posted.

    For now, we hope this blog is helpful for someone solving a similar issue with replication.

    Thanks for your time and continued support!!

    Image courtesy: Photo by Christian Regg on Unsplash

    MariaDB MaxScale Masking Basics and Examples

    I wanted to take a moment to write up a post on MariaDB MaxScale’s masking basics and include some real-world examples.

    We have nice documentation on the subject, and Dipti wrote a nice blog post on it as well. I just wanted to provide my take on it, and hopefully build upon what is already there and offer some additional insights.

    To provide a high-level overview, the masking filter makes it possible to obfuscate the returned value of a particular column.

    Three quite common columns where this would be very beneficial are: Social Security Number (“SSN”), Date of Birth (“DOB”), and Credit Card Number (“CCNUM”).

    Using masking assumes you already have a MaxScale service up and running, for instance the read-write splitter.

    In this case, you would already have a configuration file similar to this (3 backend servers: 1 master (server1) and 2 slaves (server2 & server3), with readwritesplit (Read-Write-Service) and its listener (Read-Write-Listener) set up):

    [maxscale]
    threads=4
    log_info=1
    local_address=
    log_debug=1  # debug only

    [server1]
    type=server
    address=
    port=3306
    protocol=MySQLBackend

    [server2]
    type=server
    address=
    port=3344
    protocol=MySQLBackend

    [server3]
    type=server
    address=
    port=3340
    protocol=MySQLBackend

    [Read-Write-Service]
    type=service
    router=readwritesplit
    servers=server1,server2,server3
    user=root
    passwd=xxx
    max_slave_connections=100%
    enable_root_user=1

    [Read-Write-Listener]
    type=listener
    service=Read-Write-Service
    protocol=MySQLClient
    port=4006

    [MaxAdmin-Service]
    type=service
    router=cli
    enable_root_user=1

    In the examples from the aforementioned manual and blog post, you will see something like this for your “configuration” addition:

    [MyMasking]
    type=filter
    module=masking
    rules=...

    [MyService]
    type=service
    ...
    filters=MyMasking

    MyMasking is the name you will choose for your masking filter.

    MyService is a service you already have defined and running. In this example, it is [Read-Write-Service].

    Thus, I simply add the following line to [Read-Write-Service]:

    filters=MyMasking
    If you already have a filter defined for this service, say NamedServerFilter, then you can add a second filter like this (i.e., each filter is separated by a “|”):

    filters=NamedServerFilter | MyMasking

    And then add your [MyMasking] section/configuration:

    [MyMasking]
    type=filter
    module=masking
    warn_type_mismatch=always
    large_payload=abort
    rules=/etc/maxscale.modules.d/masking_rules.json

    In the above, the type is “filter”, and the module is “masking”. Both of those are self-explanatory.

    The “warn_type_mismatch” option instructs MaxScale to log a warning if a masking rule matches a column that is not of one of the allowed types. Possible values are “never” and “always” (with “never” being the default). However, a limitation of masking is that it can only be used for masking columns of the following types: BINARY, VARBINARY, CHAR, VARCHAR, BLOB, TINYBLOB, MEDIUMBLOB, LONGBLOB, TEXT, TINYTEXT, MEDIUMTEXT, LONGTEXT, ENUM, and SET. If the type of the column is something else (INTs, DATEs, etc.), then no masking will be performed. So you might want to be “warned” if this happens, hence why I chose “always”.

    The “large_payload” option specifies how the masking filter should treat payloads larger than 16MB. Possible values are “ignore” and “abort” (with “abort” being the default). If you choose ignore and the result set is larger than 16MB, no masking will be performed and the result set will be returned to the client. If abort, the client connection is closed.

    And “rules” defines the path and name of the masking_rules.json file, in which you define your rules: what you want filtered, which columns, from which tables, schemas, or database-wide, and options on how to handle the display, and so forth. It is very flexible, suffice it to say.

    Thus my updated config file becomes:

    [maxscale]
    threads=4
    log_info=1
    local_address=
    log_debug=1  # debug only

    [server1]
    type=server
    address=
    port=3306
    protocol=MySQLBackend

    [server2]
    type=server
    address=
    port=3344
    protocol=MySQLBackend

    [server3]
    type=server
    address=
    port=3340
    protocol=MySQLBackend

    [Read-Write-Service]
    type=service
    router=readwritesplit
    servers=server1,server2,server3
    user=root
    passwd=xxx
    max_slave_connections=100%
    enable_root_user=1
    filters=MyMasking

    [Read-Write-Listener]
    type=listener
    service=Read-Write-Service
    protocol=MySQLClient
    port=4006

    [MaxAdmin-Service]
    type=service
    router=cli
    enable_root_user=1

    [MyMasking]
    type=filter
    module=masking
    warn_type_mismatch=always
    large_payload=abort
    rules=/etc/maxscale.modules.d/masking_rules.json

    In MaxScale 2.3, there is also a “prevent_function_usage” option, which can be set to “true” or “false”. If true, all statements that contain functions referring to masked columns will be rejected; otherwise they are not. True is the default, so I'll omit this option so that this config can be used for all 2.x MaxScale setups.

    Now we need to create masking_rules.json (in /etc/maxscale.modules.d/), and we should be all set to start masking.

    chris@chris-linux-laptop-64:/etc/maxscale.modules.d$ cat masking_rules.json
    {
        "rules": [
            {
                "replace": {
                    "column": "SSN"
                },
                "with": {
                    "fill": "*"
                }
            }
        ]
    }

    This is the most basic rule: *any* column named “SSN” in *any* schema will be replaced with all “*”s.
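If you have many columns to cover, writing the JSON by hand gets tedious and typo-prone. A small sketch (not part of MaxScale itself; just a generator for its rule-file format as described above) that builds the rules document for a list of column names, including both case variants since matching is case-sensitive:

```python
import json

def masking_rules(columns, fill="*"):
    """Build a MaxScale masking-rules document: one replace/fill rule per column name."""
    return {"rules": [{"replace": {"column": c}, "with": {"fill": fill}} for c in columns]}

# Column matching is case-sensitive, so emit a rule per spelling you expect:
doc = masking_rules(["SSN", "ssn", "DOB", "CCNUM"])
print(json.dumps(doc, indent=4))  # paste into /etc/maxscale.modules.d/masking_rules.json
```

The output is exactly the shape of the hand-written file above, just with one rule object per column.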

    So, once you’ve made your config change, and created masking_rules.json, it’s time to restart MaxScale so that it reads/loads your new masking filter:

    sudo service maxscale restart

    Now for some testing:

    CREATE SCHEMA employees;
    USE employees;
    CREATE TABLE employees (name char(10), location char(10), SSN char(11), DOB char(10), CCNUM char(16));
    INSERT INTO employees VALUES ('chris', 'hanger18', '123-45-6789', '07/07/1947', '6011123456789012');

    Note that I made DOB a CHAR column so that masking would be applicable, as it is not for a DATE column.

    Thus with no masking, we see everything:

    SELECT * FROM employees.employees;
    +-------+----------+-------------+------------+------------------+
    | name  | location | SSN         | DOB        | CCNUM            |
    +-------+----------+-------------+------------+------------------+
    | chris | hanger18 | 123-45-6789 | 07/07/1947 | 6011123456789012 |
    +-------+----------+-------------+------------+------------------+

    Now, connect to the service listener, in this case [Read-Write-Listener] running on port 4006:

    mysql -uroot -pxxx -P4006 --protocol=tcp

    SELECT * FROM employees.employees;
    +-------+----------+-------------+------------+------------------+
    | name  | location | SSN         | DOB        | CCNUM            |
    +-------+----------+-------------+------------+------------------+
    | chris | hanger18 | *********** | 07/07/1947 | 6011123456789012 |
    +-------+----------+-------------+------------+------------------+

    So we successfully ***’ed out SSN. Now, to also handle DOB and CCNUM, edit the masking_rules.json file to:

    {
        "rules": [
            {
                "replace": {
                    "column": "SSN"
                },
                "with": {
                    "fill": "*"
                }
            },
            {
                "replace": {
                    "column": "DOB"
                },
                "with": {
                    "fill": "*"
                }
            },
            {
                "replace": {
                    "column": "CCNUM"
                },
                "with": {
                    "fill": "*"
                }
            }
        ]
    }

    For the time being, you can still use MaxAdmin to reload the file without having to restart MaxScale (though do note that maxadmin is deprecated in 2.3 and will be removed soon; I suspect all the functionality it provided will be available via maxctrl soon, if not already):

    sudo maxadmin
    MaxScale> call command masking reload MyMasking

    Assuming the last command completed without errors, you can now simply re-query (via port 4006). However, first exit the connection to port 4006 and then re-connect:

    select * from employees.employees;
    +-------+----------+-------------+------------+------------------+
    | name  | location | SSN         | DOB        | CCNUM            |
    +-------+----------+-------------+------------+------------------+
    | chris | hanger18 | *********** | ********** | **************** |
    +-------+----------+-------------+------------+------------------+

    Note: The column names are case-sensitive, so if you have columns like “SSN” and “ssn”, then you will need to add 2 entries to masking_rules.json.

    Here is a table that uses “ssn” instead of “SSN” (everything else is the same):

    CREATE TABLE employees2 (name char(10), location char(10), ssn char(11), DOB char(10), CCNUM char(16));
    INSERT INTO employees2 VALUES ('chris', 'hanger18', '123-45-6789', '07/07/1947', '6011123456789012');

    SELECT * FROM employees.employees2;
    +-------+----------+-------------+------------+------------------+
    | name  | location | ssn         | DOB        | CCNUM            |
    +-------+----------+-------------+------------+------------------+
    | chris | hanger18 | 123-45-6789 | ********** | **************** |
    +-------+----------+-------------+------------+------------------+

    As you can see, “ssn” is not masked, but DOB and CCNUM still are. So let’s add a section for “ssn” in masking_rules.json:

    {
        "rules": [
            {
                "replace": {
                    "column": "SSN"
                },
                "with": {
                    "fill": "*"
                }
            },
            {
                "replace": {
                    "column": "ssn"
                },
                "with": {
                    "fill": "*"
                }
            },
            {
                "replace": {
                    "column": "DOB"
                },
                "with": {
                    "fill": "*"
                }
            },
            {
                "replace": {
                    "column": "CCNUM"
                },
                "with": {
                    "fill": "*"
                }
            }
        ]
    }

    Then reload the file:

    sudo maxadmin
    MaxScale> call command masking reload MyMasking

    And then exit port 4006 and re-connect, and re-issue the query:

    SELECT * FROM employees.employees2;
    +-------+----------+-------------+------------+------------------+
    | name  | location | ssn         | DOB        | CCNUM            |
    +-------+----------+-------------+------------+------------------+
    | chris | hanger18 | *********** | ********** | **************** |
    +-------+----------+-------------+------------+------------------+

    There we have it.

    And again, you have many more options when it comes to your string replacements, matching, fill, values, obfuscation, pcre2 regex, and so forth. I’ll leave you to the manual page to investigate those options if you wish.

    All in all, I hope this is helpful for anyone wanting to get started using MaxScale’s masking filter.

    MariaDB 10.3.13 and MariaDB Connector/C 3.0.9 now available

    The MariaDB Foundation is pleased to announce the availability of MariaDB 10.3.13, the latest stable release in the MariaDB 10.3 series, as well as MariaDB Connector/C 3.0.9, the latest stable release in the MariaDB Connector/C series. See the release notes and changelogs for details. Download MariaDB 10.3.13 Release Notes Changelog What is MariaDB 10.3? MariaDB […]

    The post MariaDB 10.3.13 and MariaDB Connector/C 3.0.9 now available appeared first on

    MySQL 8 is not always faster than MySQL 5.7

    MySQL 8.0.15 performs worse in sysbench oltp_read_write than MySQL 5.7.25

    Initially I was testing group replication performance and was puzzled why MySQL 8.0.15 performs consistently worse than MySQL 5.7.25.

    It appears that a single server instance is affected by a performance degradation.

    My testing setup

    Hardware details:
    Bare metal server provided by, instance size: c2.medium.x86
    24 Physical Cores @ 2.2 GHz
    (1 X AMD EPYC 7401P)
    Memory: 64 GB of ECC RAM

    Storage : INTEL® SSD DC S4500, 480GB

    This is a server grade SATA SSD.


    sysbench oltp_read_write --report-interval=1 --time=1800 --threads=24 --tables=10 --table-size=10000000 --mysql-user=root --mysql-socket=/tmp/mysql.sock run

    In the following summary I used these combinations:

    • innodb_flush_log_at_trx_commit=0 or 1
    • Binlog: off or on
    • sync_binlog=1000 or sync_binlog=1

    The summary table; the numbers are transactions per second (tps – the more, the better):

    +-------------------------------------------+--------------+--------------+-------+
    | case                                      | MySQL 5.7.25 | MySQL 8.0.15 | ratio |
    +-------------------------------------------+--------------+--------------+-------+
    | trx_commit=0, binlog=off                  | 11402        | 9840(*)      | 1.16  |
    | trx_commit=1, binlog=off                  | 8375         | 7974         | 1.05  |
    | trx_commit=0, binlog=on, sync_binlog=1000 | 10862        | 8871         | 1.22  |
    | trx_commit=0, binlog=on, sync_binlog=1    | 7238         | 6459         | 1.12  |
    | trx_commit=1, binlog=on, sync_binlog=1    | 5970         | 5043         | 1.18  |
    +-------------------------------------------+--------------+--------------+-------+

    Summary: MySQL 8.0.15 is consistently worse than MySQL 5.7.25.

    In the worst case, with trx_commit=0 and sync_binlog=1000, it is worse by 22%, which is huge.

    I was looking to use these settings for group replication testing, but these settings, when used with MySQL 8.0.15, provide much worse results than I had with MySQL 5.7.25.

    (*) In the case of trx_commit=0, binlog=off, MySQL 5.7.25 performance is very stable and practically stays at the 11400 tps level, while MySQL 8.0.15 varies a lot, from 8758 tps to 10299 tps in 1-second resolution measurements.
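The ratio column above is simply the 5.7.25 tps divided by the 8.0.15 tps; a quick recomputation from the raw numbers in the table (the data is from the post; only the arithmetic is mine):

```python
# (MySQL 5.7.25 tps, MySQL 8.0.15 tps) per configuration, from the summary table
results = {
    "trx_commit=0, binlog=off":                  (11402, 9840),
    "trx_commit=1, binlog=off":                  (8375, 7974),
    "trx_commit=0, binlog=on, sync_binlog=1000": (10862, 8871),
    "trx_commit=0, binlog=on, sync_binlog=1":    (7238, 6459),
    "trx_commit=1, binlog=on, sync_binlog=1":    (5970, 5043),
}

for case, (tps57, tps80) in results.items():
    ratio = tps57 / tps80
    print(f"{case}: ratio {ratio:.2f} ({(ratio - 1) * 100:.0f}% slower in 8.0)")
```

This reproduces the 1.22 ratio (22% degradation) for the trx_commit=0, sync_binlog=1000 case called out above.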


    To clarify some comments: I used the latin1 charset in this benchmark for both MySQL 5.7 and MySQL 8.0. The my.cnf used is below.


    [mysqld]
    datadir=/mnt/data/mysql
    socket=/tmp/mysql.sock
    ssl=0
    #innodb-encrypt-tables=ON
    character_set_server=latin1
    collation_server=latin1_swedish_ci
    skip-character-set-client-handshake
    #skip-log-bin
    log-error=error.log
    log_bin=binlog
    relay_log=relay
    sync_binlog=1000
    binlog_format=ROW
    binlog_row_image=MINIMAL
    server-id=1
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0
    # Recommended in standard MySQL setup
    # general
    table_open_cache=200000
    table_open_cache_instances=64
    back_log=3500
    max_connections=4000
    # files
    innodb_file_per_table
    innodb_log_file_size=15G
    innodb_log_files_in_group=2
    innodb_open_files=4000
    # buffers
    innodb_buffer_pool_size=40G
    innodb_buffer_pool_instances=8
    innodb_log_buffer_size=64M
    # tune
    innodb_doublewrite=1
    innodb_thread_concurrency=0
    innodb_flush_log_at_trx_commit=0
    innodb_flush_method=O_DIRECT_NO_FSYNC
    innodb_max_dirty_pages_pct=90
    innodb_max_dirty_pages_pct_lwm=10
    innodb_lru_scan_depth=2048
    innodb_page_cleaners=4
    join_buffer_size=256K
    sort_buffer_size=256K
    innodb_use_native_aio=1
    innodb_stats_persistent=1
    #innodb_spin_wait_delay=96
    # perf special
    innodb_adaptive_flushing=1
    innodb_flush_neighbors=0
    innodb_read_io_threads=16
    innodb_write_io_threads=16
    innodb_io_capacity=1500
    innodb_io_capacity_max=2500
    innodb_purge_threads=4
    innodb_adaptive_hash_index=0
    max_prepared_stmt_count=1000000



    JSON specific window functions in MySQL 8.0

    In 8.0, we introduced window functions in the MySQL server. A detailed explanation of how window functions can be used and what is supported in 8.0 can be found here.

    Now, let’s see what is new in window functions in the latest version 8.0.…


    Hybrid OLTP/Analytics Database Workloads in Galera Cluster Using Asynchronous Slaves

    Using Galera cluster is a great way of building a highly available environment for MySQL or MariaDB. It is a shared-nothing cluster environment which can be scaled even beyond 12-15 nodes. Galera has some limitations, though. It shines in low-latency environments, and even though it can be used across a WAN, performance is limited by network latency. Galera performance can also be impacted if one of the nodes starts to behave incorrectly. For example, excessive load on one of the nodes may slow it down, resulting in slower handling of writes, which will impact all of the other nodes in the cluster. On the other hand, it is practically impossible to run a business without analyzing your data. Such analysis typically requires running heavy queries, which is quite different from an OLTP workload. In this blog post, we will discuss an easy way of running analytical queries on data stored in Galera Cluster for MySQL or MariaDB, in a way that does not impact the performance of the core cluster.

    How to run analytical queries on Galera Cluster?

    As we stated, running long analytical queries directly on a Galera cluster is doable, but perhaps not such a good idea. Depending on the hardware, it can be an acceptable solution (if you use strong hardware and do not run a multi-threaded analytical workload), but even if CPU utilization is not a problem, the fact that one of the nodes carries a mixed workload (OLTP and OLAP) will by itself pose performance challenges. OLAP queries will evict data required for your OLTP workload from the buffer pool, and this will slow down your OLTP queries. Luckily, there is a simple yet efficient way of separating the analytical workload from regular queries: an asynchronous replication slave.

    A replication slave is a very simple solution - all you need is just another host, provisioned with asynchronous replication configured from the Galera Cluster to that node. With asynchronous replication, the slave will not impact the rest of the cluster in any way. Whether it is heavily loaded or uses different (less powerful) hardware, it will just continue replicating from the core cluster. The worst-case scenario is that the replication slave starts lagging behind, but then it is up to you to implement multi-threaded replication or, eventually, to scale up the replication slave.

    Once the replication slave is up and running, you should run the heavier queries on it and offload the Galera cluster. This can be done in multiple ways, depending on your setup and environment. If you use ProxySQL, you can easily direct queries to the analytical slave based on the source host, user, schema or even the query itself. Otherwise it will be up to your application to send analytical queries to the correct host.

    Setting up a replication slave is not very complex, but it still can be tricky if you are not proficient with MySQL and tools like xtrabackup. The whole process consists of setting up the repository on a new server and installing the MySQL database. Then you have to provision that host using data from the Galera cluster. You can use xtrabackup for that, but other tools like mydumper/myloader or even mysqldump will work as well (as long as you execute them correctly). Once the data is there, you have to set up replication between a master Galera node and the replication slave. Finally, you have to reconfigure your proxy layer to include the new slave and route traffic towards it, or tweak how your application connects to the database in order to redirect some of the load to the replication slave.
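    As a rough sketch of those manual steps, the replication-side statements might look like the following. All hostnames, the account name, and the password are placeholder examples, and this assumes GTID-based replication on a MySQL master:

```sql
-- On the Galera node acting as master, create a replication user
-- (account name, host mask, and password are examples):
CREATE USER 'repl'@'192.168.0.%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.0.%';

-- On the new slave, after its datadir has been provisioned from a
-- backup of the Galera node, point it at the master and start replication:
CHANGE MASTER TO
    MASTER_HOST = 'galera-node-1',     -- example hostname
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_AUTO_POSITION = 1;          -- requires GTID enabled on both sides
START SLAVE;

-- Verify that both replication threads are running:
SHOW SLAVE STATUS\G
```

    With MASTER_AUTO_POSITION and GTID, re-pointing the slave at another Galera node later only requires repeating the CHANGE MASTER TO statement with a different MASTER_HOST.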

    What is important to keep in mind is that this setup is not resilient. If the “master” Galera node goes down, the replication link will be broken, and it will take manual action to slave the replica off another master node in the Galera cluster.

    This is not a big deal, especially if you use replication with GTID (Global Transaction ID), but you still have to detect that replication is broken and then take that manual action.

    How to set up the asynchronous slave to Galera Cluster using ClusterControl?

    Luckily, if you use ClusterControl, the whole process can be automated, and it requires just a handful of clicks. The initial state has already been set up using ClusterControl - a 3-node Galera cluster with 2 ProxySQL nodes and 2 Keepalived nodes for high availability of both the database and the proxy layer.

    Adding the replication slave is just a click away:

    Replication, obviously, requires binary logs to be enabled. If you do not have binlogs enabled on your Galera nodes, you can do that from ClusterControl as well. Please keep in mind that enabling binary logs will require a node restart to apply the configuration changes.

    Even if one node in the cluster has binary logs enabled (marked as “Master” on the screenshot above), it’s still good to enable binary log on at least one more node. ClusterControl can automatically failover the replication slave after it detects that the master Galera node crashed, but for that, another master node with binary logs enabled is required or it won’t have anything to fail over to.

    As we stated, enabling binary logs requires a restart. You can either perform it straight away, or just make the configuration changes and perform the restart at some other time.
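    For reference, the configuration behind this step is small. A minimal my.cnf fragment for a MySQL-based Galera node might look like the following - the values are illustrative, and note that MariaDB's GTID implementation uses different variables:

```ini
[mysqld]
server_id         = 101      # must be unique across the cluster and the slave
log_bin           = binlog
binlog_format     = ROW      # required by Galera anyway
log_slave_updates = ON
expire_logs_days  = 7        # bound binary log disk usage
# For GTID-based failover on MySQL:
gtid_mode                = ON
enforce_gtid_consistency = ON
```

    log_slave_updates matters here: without it, a node that becomes the new replication master after a failover would not have the slave's missing transactions in its own binary log.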

    After binlogs have been enabled on some of the Galera nodes, you can proceed with adding the replication slave. In the dialog you have to pick the master host and pass the hostname or IP address of the slave. If you have recent backups at hand (which you should), you can use one of them to provision the slave. Otherwise ClusterControl will provision it using xtrabackup - all the recent master data will be streamed to the slave, and then replication will be configured.

    After the job completes, a replication slave has been added to the cluster. As stated earlier, should the master die, another host in the Galera cluster will be picked as the master, and ClusterControl will automatically slave the replica off that node.

    As we use ProxySQL, we need to configure it. We’ll add a new server into ProxySQL.

    We created another hostgroup (30) where we put our asynchronous slave. We also increased “Max Replication Lag” to 50 seconds from the default 10. It is up to your business requirements how far the analytics slave can lag before it becomes a problem.


    After that, we have to configure a query rule that will match our OLAP traffic and route it to the OLAP hostgroup (30). On the screenshot above we filled in several fields - this is not mandatory. Typically you will need to use one, or two of them at most. The above screenshot serves as an example, so we can easily see that you can match queries using schema (if you have a separate schema with analytical data), hostname/IP (if OLAP queries are executed from some particular host), or user (if the application uses a particular user for analytical queries). You can also match queries directly, by either passing a full query or by marking them with SQL comments and letting ProxySQL route all queries containing an “OLAP_QUERY” string to our analytical hostgroup.
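    A sketch of what the comment-based variant could look like in the ProxySQL admin interface - the rule_id, hostgroup 30, and the “OLAP_QUERY” tag are the examples used above, so adjust them to your setup:

```sql
-- Route any query containing the marker string to the OLAP hostgroup (30):
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply)
VALUES (100, 1, 'OLAP_QUERY', 30, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

    The application would then tag its analytical statements, for example: SELECT /* OLAP_QUERY */ ... - such queries match the rule and land on the asynchronous slave, while everything else keeps hitting the Galera hostgroups.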

    As you can see, thanks to ClusterControl we were able to deploy a replication slave to Galera Cluster in just a couple of clicks. Some may argue that MySQL is not the most suitable database for analytical workloads, and we tend to agree. You can easily extend this setup using ClickHouse, by setting up replication from the asynchronous slave into the ClickHouse columnar datastore, for much better performance of analytical queries. We described this setup in one of our earlier blog posts.


    MySQL 8.0 allows unprivileged user access to its data if table mysql.user does not exist

    Yesterday my Percona colleague Ceri Williams found a nasty bug in MySQL 8.0. Its twin for Percona Server is reported at PS-5431.

    He was installing MySQL Server 8.0 with an unsupported option in his configuration file. Initialization therefore failed but, surprisingly, the subsequent restart was successful, and he was able to create, modify, and delete tables in his MySQL installation. In other words, he got full access to the database - even though he had not created any user account yet!

    This new instance of MySQL Server did not have privilege tables, particularly mysql.user, and silently started as if --skip-grant-tables had been specified.

    At first glance, starting a freshly initialized MySQL Server with an unsafe option should not cause any harm, because the datadir holds no data yet. But that is only what an experienced user expects! Newbies may miss the fact that their server is now accessible worldwide and start inserting data.

    This is bad by itself, but we experimented further and found that the same behavior may happen if someone removes the table mysql.user and then lets MySQL Server re-read its privilege tables, e.g., by performing a restart.

    To demonstrate this behavior I created a table which holds credit card information and inserted data into it:

    mysql> create table test.very_important_table(
        -> id int not null primary key,
        -> credit_card_num char(16),
        -> credit_card_owner varchar(256),
        -> credit_card_expire_month char(2),
        -> credit_card_expire_year char(2),
        -> credit_card_cvv char(3))
        -> engine=innodb;
    Query OK, 0 rows affected (3.80 sec)

    mysql> insert into test.very_important_table values(1, '1234123412341234', 'Sveta Smirnova', '02', '20', '123');
    Query OK, 1 row affected (0.04 sec)

    mysql> select current_user();
    | current_user() |
    | root@localhost |
    1 row in set (0.00 sec)

    After that, I removed the table mysql.user (which I did not expect to be allowed to do):

    mysql> drop table mysql.user;
    Query OK, 0 rows affected (0.08 sec)

    Then I restarted the server, logged in as user foo, and was able to access the data in the table with credit card information:

    $ mysql -ufoo test
    mysql> select current_user();
    | current_user()                    |
    | skip-grants user@skip-grants host |
    1 row in set (0.00 sec)

    mysql> select * from very_important_table;
    | id | credit_card_num  | credit_card_owner | credit_card_expire_month | credit_card_expire_year | credit_card_cvv |
    |  1 | 1234123412341234 | Sveta Smirnova    | 02                       | 20                      | 123             |
    1 row in set (0.00 sec)

    The bug report was initially considered a feature request, and it is still under discussion how serious the issue is. There are a few reasons why you might not take this flaw seriously.

    First, there is no harm when someone has access to an empty MySQL instance. Allowing connections to such an instance may help a DBA to solve the issue, for example by running the mysql_upgrade command. Though it does not restore the mysql.user table anyway:

    mysql> show tables from mysql like 'user';
    Empty set (0.00 sec)

    mysql> \q
    $ mysql_upgrade
    Checking if update is needed.
    Checking server version.
    Error occurred: Query against mysql.user table failed when checking the mysql.session.

    $ mysql

    mysql> show tables from mysql like 'user';
    Empty set (0.00 sec)

    Second, it is not expected that someone would drop the table mysql.user manually. Nor does it enable an SQL injection attack if you limit the access of your application user: to perform the drop, one has to have a privilege which allows modifying the mysql database, and it is good practice not to grant such privileges to application users.
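    As an illustration of that practice, an application account can be limited to its own schema - all names in this sketch are hypothetical examples:

```sql
-- Grant only the DML the application needs, on its own schema, and
-- nothing on the mysql system database:
CREATE USER 'app'@'10.0.0.%' IDENTIFIED BY 'app_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'app'@'10.0.0.%';
-- With no DROP privilege and no access to mysql.*, even an injected
-- statement executed as this user cannot drop mysql.user.
```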

    Third, while it is possible that the mysql tablespace may be corrupted at the file system level, it is very unlikely that such corruption would damage only the data belonging to the privilege tables. There is a much higher chance that MySQL Server won't be able to start at all after such corruption happens.

    Fourth, to perform such an attack at the file system level, one needs access to the file system - and with that access, one can do more significant harm than the behavior reported here.

    These are the reasons why this behavior may not be considered a bad security flaw.

    However, it is possible for new users of MySQL to make such a mistake and silently open access to their database for everyone.

    How to Install MySQL on Ubuntu 18.04

    In this tutorial, we will show you how to install and improve the security of MySQL on Ubuntu 18.04. MySQL is one of the most popular and widely-used open-source relational database management systems. It is used for building web applications such as WordPress, Drupal, Joomla, and Magento, and is an important part of the LAMP/LEMP […]

    Percona Monitoring and Management (PMM) 1.17.1 Is Now Available

    Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL®, MongoDB®, and PostgreSQL performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL®, MongoDB®, and PostgreSQL® servers to ensure that your data works as efficiently as possible.

    In this release, we are introducing support for detection of our upcoming PMM 2.0 release in order to avoid potential version conflicts in the future, as PMM 1.x will not be compatible with PMM 2.x.

    Another improvement in this release is that we have updated the tooltips for the MySQL Query Response Time dashboard, providing a description of what the graphs display along with links to related documentation resources. An example of the tooltips in action:

    The PMM 1.17.1 release provides fixes for the CVE-2018-16492 and CVE-2018-16487 vulnerabilities, related to Node.js modules. The authentication system used in PMM is not susceptible to the attacks described in these CVE reports, as PMM does not use client-side data objects to control user access.

    In release 1.17.1 we have included two improvements and fixed nine bugs.

    Improvements
    • PMM-1339: Improve tooltips for MySQL Query Response Time dashboard
    • PMM-3477: Add Ubuntu 18.10 support
    Fixed Bugs
    • PMM-3471: Fix global status metric names in mysqld_exporter for MySQL 8.0 compatibility
    • PMM-3400: Duplicate column in the Query Analytics dashboard Explain section
    • PMM-3353: postgres_exporter does not work with PostgreSQL 11
    • PMM-3188: Duplicate data on Amazon RDS / Aurora MySQL Metrics dashboard
    • PMM-2615: Fix wrong formatting in log which appears if pmm-qan-agent process fails to start
    • PMM-2592: MySQL Replication Dashboard shows error with multi-source replication
    • PMM-2327: Member State Uptime and Max Member Ping time charts on the MongoDB ReplSet dashboard return an error
    • PMM-955: Fix format of User Time and CPU Time Graphs on MySQL User Statistics dashboard
    • PMM-3522: CVE-2018-16492 and CVE-2018-16487

    Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

    Authentication and Authorization With Flask-Login

    Allowing users to log in to your app is one of the most common features you'll add to a web app you build. This article will cover how to add simple authentication to your Flask app. The main package we will use to accomplish this is Flask-Login.

    What We'll Be Building

    We're going to build some sign-up and login pages that allow users to log in and access protected pages that non-logged-in users can't see. We'll grab information from the user model and display it on our protected pages when the user logs in, to simulate what a profile would look like.

    We will cover the following in this article:

    • Use the Flask-Login library for session management
    • Use the built-in Flask utility for hashing passwords
    • Add protected pages to our app for logged in users only
    • Use Flask-SQLAlchemy to create a user model
    • Create sign up and login forms for our users to create accounts and login
    • Flash error messages back to users when something goes wrong
    • Use information from the user's account to display on the profile page
    Setting Up The Application

    Our app will use the Flask app factory pattern with blueprints. We'll have one blueprint that handles everything auth related, and we'll have another blueprint for our regular routes, which include the index and the protected profile page. In a real app, of course, you can break down the functionality in any way you like, but what I've proposed will work well for this tutorial.

    To start, we need to create the directories and files for our project.

    - project
      - templates
        - base.html     <!-- contains common layout and links -->
        - index.html    <!-- show the home page -->
        - login.html    <!-- show the login form -->
        - profile.html  <!-- show the profile page -->
        - signup.html   <!-- show the signup form -->
      - __init__.py     <!-- setup our app -->
      - auth.py         <!-- the auth routes for our app -->
      - main.py         <!-- the non-auth routes for our app -->
      - models.py       <!-- our user model -->

    You can create those files and we'll add them as we progress along.

    Install Packages

    There are three main packages we need for our project:

    • Flask
    • Flask-Login - to handle the user sessions after authentication
    • Flask-SQLAlchemy - to represent the user model and interface with our database

    We'll only be using SQLite for the database to avoid having to install any extra dependencies for the database. Here's what you need to run after creating your virtual environment to install the packages.

        pip install flask flask-sqlalchemy flask-login

    Main App File

    Let's start by creating the __init__.py file for our project. This will have the function to create our app, which will initialize the database and register our blueprints. At the moment this won't do much, but it will be needed for the rest of our app. All we need to do is initialize SQLAlchemy, set some configuration values, and register our blueprints here.

        # __init__.py
        from flask import Flask
        from flask_sqlalchemy import SQLAlchemy

        # init SQLAlchemy so we can use it later in our models
        db = SQLAlchemy()

        def create_app():
            app = Flask(__name__)

            app.config['SECRET_KEY'] = '9OLWxND4o83j4K4iuopO'
            app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///db.sqlite'

            db.init_app(app)

            # blueprint for auth routes in our app
            from .auth import auth as auth_blueprint
            app.register_blueprint(auth_blueprint)

            # blueprint for non-auth parts of app
            from .main import main as main_blueprint
            app.register_blueprint(main_blueprint)

            return app

    Route Scaffolding

    Now that we have the main app file, we can start adding in our routes.

    For our routes, we'll use two blueprints. For our main blueprint, we'll have a home page (/) and profile page (/profile) for after we log in. If the user tries to access the profile page without being logged in, they'll be sent to our login route.

    For our auth blueprint, we'll have routes to retrieve both the login page (/login) and signup page (/signup). We'll also have routes for handling the POST request from both of those two routes. Finally, we'll have a logout route (/logout) to logout an active user.

    Let's go ahead and add them even though they won't do much. Later we will update them so we can use them.

        # main.py
        from flask import Blueprint
        from . import db

        main = Blueprint('main', __name__)

        @main.route('/')
        def index():
            return 'Index'

        @main.route('/profile')
        def profile():
            return 'Profile'

        # auth.py
        from flask import Blueprint
        from . import db

        auth = Blueprint('auth', __name__)

        @auth.route('/login')
        def login():
            return 'Login'

        @auth.route('/signup')
        def signup():
            return 'Signup'

        @auth.route('/logout')
        def logout():
            return 'Logout'

    You can now set the FLASK_APP and FLASK_DEBUG values and run the project. You should be able to navigate to the five possible URLs and see the text returned.

        export FLASK_APP=project
        export FLASK_DEBUG=1
        flask run


    Let's go ahead and create the templates that are used in our app. This is the first step before we can implement the actual login functionality. Our app will use four templates:

    • index.html
    • profile.html
    • login.html
    • signup.html

    We'll also have a base template that will have code common to each of the pages. In this case, the base template will have navigation links and the general layout of the page. Let's create them now.

    templates/base.html

        <!-- templates/base.html -->
        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8">
            <meta http-equiv="X-UA-Compatible" content="IE=edge">
            <meta name="viewport" content="width=device-width, initial-scale=1">
            <title>Flask Auth Example</title>
            <link rel="stylesheet" href="" />
        </head>
        <body>
            <section class="hero is-primary is-fullheight">
                <div class="hero-head">
                    <nav class="navbar">
                        <div class="container">
                            <div id="navbarMenuHeroA" class="navbar-menu">
                                <div class="navbar-end">
                                    <a href="{{ url_for('main.index') }}" class="navbar-item">
                                        Home
                                    </a>
                                    <a href="{{ url_for('main.profile') }}" class="navbar-item">
                                        Profile
                                    </a>
                                    <a href="{{ url_for('auth.login') }}" class="navbar-item">
                                        Login
                                    </a>
                                    <a href="{{ url_for('auth.signup') }}" class="navbar-item">
                                        Sign Up
                                    </a>
                                    <a href="{{ url_for('auth.logout') }}" class="navbar-item">
                                        Logout
                                    </a>
                                </div>
                            </div>
                        </div>
                    </nav>
                </div>
                <div class="hero-body">
                    <div class="container has-text-centered">
                        {% block content %}
                        {% endblock %}
                    </div>
                </div>
            </section>
        </body>
        </html>

    templates/index.html

        <!-- templates/index.html -->
        {% extends "base.html" %}

        {% block content %}
        <h1 class="title">
            Flask Login Example
        </h1>
        <h2 class="subtitle">
            Easy authentication and authorization in Flask.
        </h2>
        {% endblock %}

    templates/login.html

        <!-- templates/login.html -->
        {% extends "base.html" %}

        {% block content %}
        <div class="column is-4 is-offset-4">
            <h3 class="title">Login</h3>
            <div class="box">
                <form method="POST" action="/login">
                    <div class="field">
                        <div class="control">
                            <input class="input is-large" type="email" name="email" placeholder="Your Email" autofocus="">
                        </div>
                    </div>
                    <div class="field">
                        <div class="control">
                            <input class="input is-large" type="password" name="password" placeholder="Your Password">
                        </div>
                    </div>
                    <div class="field">
                        <label class="checkbox">
                            <input type="checkbox"> Remember me
                        </label>
                    </div>
                    <button class="button is-block is-info is-large is-fullwidth">Login</button>
                </form>
            </div>
        </div>
        {% endblock %}

    templates/signup.html

        <!-- templates/signup.html -->
        {% extends "base.html" %}

        {% block content %}
        <div class="column is-4 is-offset-4">
            <h3 class="title">Sign Up</h3>
            <div class="box">
                <form method="POST" action="/signup">
                    <div class="field">
                        <div class="control">
                            <input class="input is-large" type="email" name="email" placeholder="Email" autofocus="">
                        </div>
                    </div>
                    <div class="field">
                        <div class="control">
                            <input class="input is-large" type="text" name="name" placeholder="Name" autofocus="">
                        </div>
                    </div>
                    <div class="field">
                        <div class="control">
                            <input class="input is-large" type="password" name="password" placeholder="Password">
                        </div>
                    </div>
                    <button class="button is-block is-info is-large is-fullwidth">Sign Up</button>
                </form>
            </div>
        </div>
        {% endblock %}

    templates/profile.html

        <!-- templates/profile.html -->
        {% extends "base.html" %}

        {% block content %}
        <h1 class="title">
            Welcome, Anthony!
        </h1>
        {% endblock %}

    Once you've added the templates, we can update the return statements in each of our routes to return the templates instead of the text.

        # main.py
        from flask import Blueprint, render_template
        ...

        @main.route('/')
        def index():
            return render_template('index.html')

        @main.route('/profile')
        def profile():
            return render_template('profile.html')

        # auth.py
        from flask import Blueprint, render_template
        ...

        @auth.route('/login')
        def login():
            return render_template('login.html')

        @auth.route('/signup')
        def signup():
            return render_template('signup.html')

    For example, here is what the signup page looks like if you navigate to /signup. You should be able to see the pages for /, /login, and /profile as well. We'll leave /logout alone for now, because it won't display a template when it's done.

    User Model

    Our user model represents what it means for our app to have a user. To keep it simple, we'll have fields for an email address, password, and name. Of course in your application, you may decide you want much more information to be stored per user. You can add things like birthday, profile picture, location, or any user preferences.

    Models created in Flask-SQLAlchemy are represented by classes which then translate to tables in a database. The attributes of those classes then turn into columns for those tables.

    Let's go ahead and create that user model.

        # models.py
        from . import db

        class User(db.Model):
            id = db.Column(db.Integer, primary_key=True) # primary keys are required by SQLAlchemy
            email = db.Column(db.String(100), unique=True)
            password = db.Column(db.String(100))
            name = db.Column(db.String(1000))

    Database Config

    Like I said before, we'll be using a SQLite database. We could create a SQLite database on our own, but let's have Flask-SQLAlchemy do it for us.

    We already have the path of the database specified in the __init__.py file, so we just need to tell Flask-SQLAlchemy to create the database for us in the Python REPL.

    If you stop your app and open up a Python REPL, we can create the database using the create_all method on the db object.

        from project import db, create_app
        db.create_all(app=create_app()) # pass the create_app result so Flask-SQLAlchemy gets the configuration

    You should now see a db.sqlite file in your project directory. This database will have our user table in it.

    Sign up Method

    Now that we have everything set up, we can finally get to writing the code for the authorization.

    For our sign up function, we're going to take the data the user types into the form and add it to our database. But before we add it, we need to make sure the user doesn't already exist in the database. If it doesn't, then we need to make sure we hash the password before placing it into the database, because we don't want our passwords stored in plaintext.

    Let's start by adding a second function to handle the POSTed form data. In this function, we will gather the data passed from the user first.

    Let's start by creating the function and adding a redirect at the bottom, because we know that once we add the user to the database, we will redirect to the login route.

        # auth.py
        from flask import Blueprint, render_template, redirect, url_for
        ...

        @auth.route('/signup', methods=['POST'])
        def signup_post():
            # code to validate and add user to database goes here
            return redirect(url_for('auth.login'))

    Now, let's add the rest of the code necessary for signing up a user.

    To start, we'll have to use the request object to get the form data. If you're not familiar with the request object, I wrote an article on it here.

        # auth.py
        from flask import Blueprint, render_template, redirect, url_for, request
        from werkzeug.security import generate_password_hash, check_password_hash
        from .models import User
        from . import db
        ...

        @auth.route('/signup', methods=['POST'])
        def signup_post():
            email = request.form.get('email')
            name = request.form.get('name')
            password = request.form.get('password')

            # if this returns a user, then the email already exists in database
            user = User.query.filter_by(email=email).first()

            if user: # if a user is found, we want to redirect back to signup page so user can try again
                return redirect(url_for('auth.signup'))

            # create a new user with the form data. Hash the password so the plaintext version isn't saved.
            new_user = User(email=email, name=name, password=generate_password_hash(password, method='sha256'))

            # add the new user to the database
            db.session.add(new_user)
            db.session.commit()

            return redirect(url_for('auth.login'))

    Test Sign Up Method

    Now that we have the sign up method done, we should be able to create a new user. Use the form to create a user.

    There are two ways you can verify if the sign up worked: you can use a database viewer to look at the row that was added to your table, or you can simply try signing up with the same email address again, and if you get an error, you know the first email was saved properly. So let's take that approach.

    We can add code to let the user know the email already exists and tell them to go to the login page. By calling the flash function, we will send a message to the next request, which in this case, is the redirect. The page we land on will then have access to that message in the template.

    First, we add the flash before we redirect back to our signup page.

        # auth.py
        from flask import Blueprint, render_template, redirect, url_for, request, flash
        ...

        @auth.route('/signup', methods=['POST'])
        def signup_post():
            ...
            if user: # if a user is found, we want to redirect back to signup page so user can try again
                flash('Email address already exists')
                return redirect(url_for('auth.signup'))

    To get the flashed message in the template, we can add this code above the form. This will display the message directly above the form.

    templates/signup.html

        <!-- templates/signup.html -->
        ...
        {% with messages = get_flashed_messages() %}
        {% if messages %}
        <div class="notification is-danger">
            {{ messages[0] }}. Go to <a href="{{ url_for('auth.login') }}">login page</a>.
        </div>
        {% endif %}
        {% endwith %}
        <form method="POST" action="/signup">
        ...

    Login Method

    The login method is similar to the signup function in that we will take the user information and do something with it. In this case, we will compare the email address entered to see if it's in the database. If so, we will test the password the user provided by hashing it and comparing it to the hashed password in the database. We know the user has entered the correct password when the two hashes match.

    Once the user has passed the password check, we know that they have the correct credentials and we can go ahead and log them in using Flask-Login. By calling login_user, Flask-Login will create a session for that user that will persist as the user stays logged in, which will allow the user to view protected pages.

    We can start with a new route for handling the POSTed data. We'll redirect to the profile page when the user successfully logs in.

        # auth.py
        ...

        @auth.route('/login', methods=['POST'])
        def login_post():
            # login code goes here
            return redirect(url_for('main.profile'))

    Now, we need to verify if the user has the correct credentials.

        # auth.py
        ...

        @auth.route('/login', methods=['POST'])
        def login_post():
            email = request.form.get('email')
            password = request.form.get('password')
            remember = True if request.form.get('remember') else False

            user = User.query.filter_by(email=email).first()

            # check if the user actually exists;
            # take the user-supplied password, hash it, and compare it to the hashed password in the database
            if not user or not check_password_hash(user.password, password):
                flash('Please check your login details and try again.')
                return redirect(url_for('auth.login')) # if the user doesn't exist or password is wrong, reload the page

            # if the above check passes, then we know the user has the right credentials
            return redirect(url_for('main.profile'))

    Let's add in the block in the template so the user can see the flashed message. Like the signup form, let's add the potential error message directly above the form.

    templates/login.html

    <!-- templates/login.html -->
    ...
    {% with messages = get_flashed_messages() %}
    {% if messages %}
    <div class="notification is-danger">
        {{ messages[0] }}
    </div>
    {% endif %}
    {% endwith %}
    <form method="POST" action="/login">

    So we now have the ability to tell a user they have logged in successfully, but there is nothing to actually log the user in anywhere. This is where we bring in Flask-Login.

    But first, we need a few things for Flask-Login to work.

    We start by adding something called the UserMixin to our User model. The UserMixin will add Flask-Login attributes to our model so that Flask-Login will be able to work with it.

    from flask_login import UserMixin
    from . import db

    class User(UserMixin, db.Model):
        id = db.Column(db.Integer, primary_key=True) # primary keys are required by SQLAlchemy
        email = db.Column(db.String(100), unique=True)
        password = db.Column(db.String(100))
        name = db.Column(db.String(1000))

    Then, we need to specify our user loader. A user loader tells Flask-Login how to find a specific user from the ID that is stored in their session cookie. We can add this in our create_app function along with basic init code for Flask-Login.

    # ...
    from flask_login import LoginManager

    def create_app():
        ...
        db.init_app(app)

        login_manager = LoginManager()
        login_manager.login_view = 'auth.login'
        login_manager.init_app(app)

        from .models import User

        @login_manager.user_loader
        def load_user(user_id):
            # since the user_id is just the primary key of our user table, use it in the query for the user
            return User.query.get(int(user_id))

    Finally, we can add the login_user function just before we redirect to the profile page to create the session.

    # ...
    from flask_login import login_user
    from .models import User

    ...

    @auth.route('/login', methods=['POST'])
    def login_post():
        ...
        # if the above check passes, then we know the user has the right credentials
        login_user(user, remember=remember)
        return redirect(url_for('main.profile'))

    With Flask-Login set up, we can finally use the /login route.

    When everything is successful, we will see the profile page.

    Protected Pages

    If your name isn't also Anthony, then you'll see that your name is wrong. What we want is the profile to display the name in the database. So first, we need to protect the page and then access the user's data to get the name.

    Protecting a page when using Flask-Login is very simple: we add the @login_required decorator between the route and the function. This will prevent a user who isn't logged in from seeing the route. If the user isn't logged in, they will be redirected to the login page, per the Flask-Login configuration.

    With routes that are decorated with the login_required decorator, we then have the ability to use the current_user object inside the function. This current_user represents the user from the database, and we can access all of the attributes of that user with dot notation. For example, current_user.name, current_user.email, and current_user.password will return the actual values stored in the database for the logged-in user.

    Let's use the name of the current user and send it to the template. We then will use that name and display its value.

    from flask import Blueprint, render_template
    from flask_login import login_required, current_user

    ...

    @main.route('/profile')
    @login_required
    def profile():
        return render_template('profile.html', name=current_user.name)

    templates/profile.html

    <!-- templates/profile.html -->
    ...
    <h1 class="title">
        Welcome, {{ name }}!
    </h1>

    Once we go to our profile page, we then see that the user's name appears.

    The final thing we can do is update our logout view. We can call the logout_user function in a route for logging out. We use the login_required decorator because it doesn't make sense to log out a user who isn't logged in to begin with.

    from flask_login import login_user, logout_user, login_required

    ...

    @auth.route('/logout')
    @login_required
    def logout():
        logout_user()
        return redirect(url_for('main.index'))

    After we log out and try viewing the profile page again, we see an error message appear. This is because Flask-Login flashes a message for us when the user isn't allowed to access a page.

    One last thing we can do is put if statements in the templates to display only the links relevant to the user. So before the user logs in, they will have the option to log in or sign up. After they have logged in, they can go to their profile or log out.

    templates/base.html

    <!-- templates/base.html -->
    ...
    <div class="navbar-end">
        <a href="{{ url_for('main.index') }}" class="navbar-item">
            Home
        </a>
        {% if current_user.is_authenticated %}
        <a href="{{ url_for('main.profile') }}" class="navbar-item">
            Profile
        </a>
        {% endif %}
        {% if not current_user.is_authenticated %}
        <a href="{{ url_for('auth.login') }}" class="navbar-item">
            Login
        </a>
        <a href="{{ url_for('auth.signup') }}" class="navbar-item">
            Sign Up
        </a>
        {% endif %}
        {% if current_user.is_authenticated %}
        <a href="{{ url_for('auth.logout') }}" class="navbar-item">
            Logout
        </a>
        {% endif %}
    </div>


    We've done it! We have used Flask-Login and Flask-SQLAlchemy to build a very basic login system for our app. We covered how to authenticate a user by first creating a user model and storing the user information for later use. Then we had to verify the user's password was correct by hashing the password from the form and comparing it to the one stored in the database. Finally, we added authorization to our app by using the @login_required decorator on a profile page so only logged-in users can see that page.

    What we created in this tutorial will be sufficient for smaller apps, but if you wish to have more functionality from the beginning, you may want to consider using either the Flask-User or Flask-Security libraries, which are both built on top of the Flask-Login library.

    ProxySQL Native Support for Percona XtraDB Cluster (PXC)

    ProxySQL in its versions up to 1.x did not natively support Percona XtraDB Cluster (PXC). Instead, it relied on the flexibility offered by the scheduler. This approach allowed users to implement their own preferred way to manage the ProxySQL behaviour in relation to the Galera events.

    From version 2.0 we can use native ProxySQL support for PXC. The mechanism to activate native support is very similar to the one already in place for group replication.

    In brief it is based on the table [runtime_]mysql_galera_hostgroups and the information needed is mostly the same:

    • writer_hostgroup: the hostgroup ID that refers to the WRITER
    • backup_writer_hostgroup: the hostgroup ID referring to the hostgroup that will contain the candidate servers
    • reader_hostgroup: the reader hostgroup ID, containing the list of servers that need to be taken into consideration
    • offline_hostgroup: the hostgroup ID that will eventually contain the writer that will be put OFFLINE
    • active: True[1]/False[0], whether this configuration needs to be used or not
    • max_writers: the MAX number of writers you want to have at the same time. In a sane setup this should always be 1, but if you want to have multiple writers, you can define it up to the number of nodes.
    • writer_is_also_reader: if true [1], the writer will NOT be removed from the reader HG
    • max_transactions_behind: the number of wsrep_local_recv_queue after which the node will be set OFFLINE. This must be set carefully, observing the node's behaviour.
    • comment: I suggest putting some meaningful notes here to identify what is what.
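    The last threshold can be sketched as a one-line rule (a toy illustration of the behaviour described above, not ProxySQL's actual code; the threshold value matches the configuration used later in this post):

```python
MAX_TRANSACTIONS_BEHIND = 16  # same value as in the mysql_galera_hostgroups insert used below

def node_is_serving(wsrep_local_recv_queue):
    # A node whose receive queue exceeds the threshold is moved to the offline hostgroup.
    return wsrep_local_recv_queue <= MAX_TRANSACTIONS_BEHIND

print(node_is_serving(0))   # True: healthy node keeps serving
print(node_is_serving(40))  # False: too far behind, candidate for the offline_hostgroup
```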

    Given the above, let us see what we need to do in order to have a working Galera native solution.
    I will have three servers: (Node1) (Node2) (Node3)

    As set of Hostgroup, I will have:

    Writer HG  -> 100
    Reader HG  -> 101
    BackupW HG -> 102
    offHG      -> 9101

    To set it up

    Servers first:

    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('',101,3306,1000);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('',101,3306,1000);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('',101,3306,1000);

    Then the galera settings:

    insert into mysql_galera_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup,
        offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind)
        values (100,102,101,9101,0,1,1,16);

    As usual if we want to have R/W split we need to define the rules for it:

    insert into mysql_query_rules (rule_id,proxy_port,schemaname,username,destination_hostgroup,active,retries,match_digest,apply)
        values(1040,6033,'windmills','app_test',100,1,3,'^SELECT.*FOR UPDATE',1);
    insert into mysql_query_rules (rule_id,proxy_port,schemaname,username,destination_hostgroup,active,retries,match_digest,apply)
        values(1041,6033,'windmills','app_test',101,1,3,'^SELECT.*@@',1);
    save mysql query rules to disk;
    load mysql query rules to runtime;

    Then there is another important variable: the server version. Please do yourself a favour and NEVER use the default.

    update global_variables set variable_value='5.7.0' where variable_name='mysql-server_version';
    LOAD MYSQL VARIABLES TO RUNTIME;
    SAVE MYSQL VARIABLES TO DISK;

    Finally activate the whole thing:

    save mysql servers to disk;
    load mysql servers to runtime;

    One thing to note before we go ahead: in the list of servers I had:

    1. Filled only the READER HG
    2. Used the same weight

    This is because of the election mechanism ProxySQL will use to identify the writer, and the (many) problems that may be attached to it.
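    A toy sketch of weight-based election (my illustration, not ProxySQL's actual algorithm) shows why equal weights matter: with identical weights the winner among the candidates is effectively out of your control, while a clearly higher weight makes the outcome predictable.

```python
def elect_writer(candidates):
    # Pick the highest-weight candidate; with equal weights the tie-break is
    # not in your hands, which is exactly the problem described above.
    return max(candidates, key=lambda node: node[1])

equal = [("node1", 1000), ("node2", 1000), ("node3", 1000)]
weighted = [("node1", 10000), ("node2", 100), ("node3", 100)]

print(elect_writer(weighted)[0])  # node1: the preferred writer wins deterministically
```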

    For now let us go ahead and see what happens when I load this information to runtime.

    Before running the above commands:

    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | weight | hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | MaxConnUsed | Queries | Queries_GTID_sync | Bytes_data_sent | Bytes_data_recv | Latency_us |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+


    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | weight | hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | MaxConnUsed | Queries | Queries_GTID_sync | Bytes_data_sent | Bytes_data_recv | Latency_us |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | 1000   | 100       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 501        |
    | 1000   | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 501        |
    | 1000   | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 546        |
    | 1000   | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 467        |
    | 1000   | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 546        |
    | 1000   | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 467        |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+

    mysql> select * from runtime_mysql_galera_hostgroups \G
    *************************** 1. row ***************************
           writer_hostgroup: 100
    backup_writer_hostgroup: 102
           reader_hostgroup: 101
          offline_hostgroup: 9101
                     active: 0  <----------- note this
                max_writers: 1
      writer_is_also_reader: 1
    max_transactions_behind: 16
                    comment: NULL
    1 row in set (0.01 sec)

    As we can see, ProxySQL has taken the nodes from my READER group and distributed them, adding one node to the writer hostgroup and two to the backup_writer hostgroup.

    But – there is a but – wasn't my rule set with active=0? Indeed it was, and I assume this is a bug (Issue #1902).

    The other thing we should note is that ProxySQL elected node 3 as the writer.
    As I said before, what should we do if we want to have a specific node as the preferred writer?

    We need to modify its weight. So, say we want to have node 1 as the writer; we will need something like this:

    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('',101,3306,10000);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('',101,3306,100);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('',101,3306,100);

    Doing that will give us:

    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | weight | hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | MaxConnUsed | Queries | Queries_GTID_sync | Bytes_data_sent | Bytes_data_recv | Latency_us |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | 10000  | 100       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 2209       |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 546        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 508        |
    | 10000  | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 2209       |
    | 100    | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 546        |
    | 100    | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 508        |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+

    As you may have noticed, since we set node 1's WEIGHT higher, this node will also become the most utilized for reads.
    We probably do not want that, so let us modify the reader weight.

    update mysql_servers set weight=10 where hostgroup_id=101 and hostname='';

    At this point, if we trigger a failover with set global wsrep_reject_queries=all; on node 1,
    ProxySQL will take action and elect another node as writer:

    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | weight | hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | MaxConnUsed | Queries | Queries_GTID_sync | Bytes_data_sent | Bytes_data_recv | Latency_us |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | 100    | 100       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 562        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 562        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 588        |
    | 100    | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 588        |
    | 10000  | 9101      |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 468        |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+

    Node 3 is the new writer, and node 1 is in the special OFFLINE group.
    Let's see now what happens if we put node 1 back.

    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | weight | hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | MaxConnUsed | Queries | Queries_GTID_sync | Bytes_data_sent | Bytes_data_recv | Latency_us |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | 10000  | 100       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 449        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 532        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 569        |
    | 10000  | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 449        |
    | 100    | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 532        |
    | 100    | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 569        |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+

    Oops: the reader has come back with the HIGHEST weight, and as such it will once more be the most used node. To fix it, we need to re-run the update as before.

    But is there a way to avoid this? In short, the answer is NO!
    This, in my opinion, is BAD and is worth a feature request, because it can really bring a node to its knees.

    Now this is not the only problem. There is another point that is probably worth discussing: the fact that ProxySQL currently does both FAILOVER and FAILBACK.

    Failover is obviously something we want to have, but failback is another discussion. The point is, once the failover is complete and the cluster has redistributed the incoming requests, doing a failback is an impacting operation, and it can be a disruptive one too.

    If all nodes are treated as equal, there is no real way to prevent it, while if YOU set a node to be the main writer, something can be done. Let us see what and how.
    Say we have:

    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('',101,3306,1000);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('',101,3306,100);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('',101,3306,100);

    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | weight | hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | MaxConnUsed | Queries | Queries_GTID_sync | Bytes_data_sent | Bytes_data_recv | Latency_us |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | 1000   | 100       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 470        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 558        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 613        |
    | 10     | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 470        |
    | 100    | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 558        |
    | 100    | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 613        |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+

    Let us put the node down
    set global wsrep_reject_queries=all;

    And check:

    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | weight | hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | MaxConnUsed | Queries | Queries_GTID_sync | Bytes_data_sent | Bytes_data_recv | Latency_us |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | 100    | 100       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 519        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 519        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 506        |
    | 100    | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 506        |
    | 1000   | 9101      |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 527        |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+

    We can now manipulate the weight in the special OFFLINE group and see what happens:

    update mysql_servers set weight=10 where hostgroup_id=9101 and hostname='';

    Then I put the node up again:
    set global wsrep_reject_queries=none;

    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | weight | hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | MaxConnUsed | Queries | Queries_GTID_sync | Bytes_data_sent | Bytes_data_recv | Latency_us |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+
    | 100    | 100       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 537        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 537        |
    | 100    | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 573        |
    | 10     | 101       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 458        |
    | 100    | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 573        |
    | 10     | 102       |          | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           | 0       | 0                 | 0               | 0               | 458        |
    +--------+-----------+----------+----------+--------+----------+----------+--------+---------+-------------+---------+-------------------+-----------------+-----------------+------------+

    That’s it, the node is back but with no service interruption.

    At this point we can decide whether to make this node a reader like the others, or wait and plan a proper time of day to put it back as the writer, while in the meantime it takes a bit of load to warm its buffer pool.

    The other important point is: what is ProxySQL currently checking on Galera? From reading the code, ProxySQL will trap the following:

    • read_only
    • wsrep_local_recv_queue
    • wsrep_desync
    • wsrep_reject_queries
    • wsrep_sst_donor_rejects_queries
    • primary_partition

    Plus the standard sanity checks on the node.

    Finally, to monitor the whole situation, we can use this:

    mysql> select * from mysql_server_galera_log order by time_start_us desc limit 10;
    +----------+------+------------------+-----------------+-------------------+-----------+------------------------+-------------------+--------------+----------------------+---------------------------------+-------+
    | hostname | port | time_start_us    | success_time_us | primary_partition | read_only | wsrep_local_recv_queue | wsrep_local_state | wsrep_desync | wsrep_reject_queries | wsrep_sst_donor_rejects_queries | error |
    +----------+------+------------------+-----------------+-------------------+-----------+------------------------+-------------------+--------------+----------------------+---------------------------------+-------+
    |          | 3306 | 1549982591661779 | 2884            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
    |          | 3306 | 1549982591659644 | 2778            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
    |          | 3306 | 1549982591658728 | 2794            | YES               | NO        | 0                      | 4                 | NO           | YES                  | NO                              | NULL  |
    |          | 3306 | 1549982586669233 | 2827            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
    |          | 3306 | 1549982586663458 | 5100            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
    |          | 3306 | 1549982586658973 | 4132            | YES               | NO        | 0                      | 4                 | NO           | YES                  | NO                              | NULL  |
    |          | 3306 | 1549982581665317 | 3084            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
    |          | 3306 | 1549982581661261 | 3129            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
    |          | 3306 | 1549982581658242 | 2786            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
    |          | 3306 | 1549982576661349 | 2982            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
    +----------+------+------------------+-----------------+-------------------+-----------+------------------------+-------------------+--------------+----------------------+---------------------------------+-------+

    As you can see above, the log table keeps track of what has changed. In this case, it reports that node 1 has wsrep_reject_queries activated, and it will continue to log this until we set wsrep_reject_queries=none.


    ProxySQL's Galera native integration is a useful feature for managing any Galera implementation, no matter whether it's Percona PXC, MariaDB Cluster or MySQL/Galera.

    The generic approach is obviously a good thing; still, it may miss some specific extensions, like the pxc_cluster_view table we have in PXC's performance_schema.

    I’ve already raised my objection about failover/failback, and I am here again to remind you: whenever you do a controlled failover, REMEMBER to change the weight to prevent an immediate failback.

    This is obviously not possible in the case of a real failover, and so, for instance, a simple temporary eviction will cause two downtimes instead of only one. Some environments are fine with that; others are not.

    Personally, I think there should be a FLAG in the configuration so that we can decide whether failback should be executed or not.


    Shinguz: MySQL Enterprise Backup Support Matrix

    MySQL Enterprise Backup (MEB) is somewhat limited in its support of older MySQL versions. So you should consider the following release matrix:

    MEB / MySQL   Supported   5.5   5.6   5.7   8.0
    3.11.x        NO           x     x
    3.12.x        YES          x     x
    4.0.x         NO                       x
    4.1.x         YES                      x
    8.0.x         YES                            8.0.x*

    * MySQL Enterprise Backup 8.0.15 only supports MySQL 8.0.15. For earlier versions of MySQL 8.0, use the MySQL Enterprise Backup version with the same version number as the server.

    MySQL Enterprise Backup is available for download from the My Oracle Support (MOS) website. This release will be available on Oracle eDelivery (OSDC) after the next upload cycle. MySQL Enterprise Backup is a commercial extension to the MySQL family of products.

    As an Open Source alternative Percona XtraBackup for MySQL databases is available.

    Compatibility with MySQL Versions: 3.11, 3.12, 4.0, 4.1, 8.0.

    MySQL Enterprise Backup User's Guide: 3.11, 3.12, 4.0, 4.1, 8.0.


    How Network Bandwidth Affects MySQL Performance

    Network is a major part of a database infrastructure. However, performance benchmarks are often done on a local machine, where the client and server are co-located – I am guilty of this myself. This is done to simplify the setup and to exclude one more variable (the networking part), but with this we also miss looking at how the network affects performance.

    The network is even more important for clustering products like Percona XtraDB Cluster and MySQL Group Replication. Also, we are working on our Percona XtraDB Cluster Operator for Kubernetes and OpenShift, where network performance is critical for overall performance.

    In this post, I will look into networking setups. These are simple and trivial, but are a building block towards understanding networking effects for more complex setups.


    I will use two bare-metal servers, connected via a dedicated 10Gb network. I will emulate a 1Gb network by changing the network interface speed with the following command:

    ethtool -s eth1 speed 1000 duplex full autoneg off

    I will run a simple benchmark:

    sysbench oltp_read_only --mysql-ssl=on --mysql-host= --tables=20 --table-size=10000000 \
        --mysql-user=sbtest --mysql-password=sbtest --threads=$i --time=300 \
        --report-interval=1 --rand-type=pareto

    This is run with the number of threads varied from 1 to 2048. All data fits into memory – innodb_buffer_pool_size is big enough – so the workload is CPU-intensive in memory: there is no IO overhead.

    Operating System: Ubuntu 16.04

    Benchmark N1. Network bandwidth

    In the first experiment I will compare 1Gb network vs 10Gb network.

    threads | 1Gb network | 10Gb network
    1       | 326.13      | 394.4
    4       | 1143.36     | 1544.73
    16      | 2400.19     | 5647.73
    32      | 2665.61     | 10256.11
    64      | 2838.47     | 15762.59
    96      | 2865.22     | 17626.77
    128     | 2867.46     | 18525.91
    256     | 2867.47     | 18529.4
    512     | 2867.27     | 17901.67
    1024    | 2865.4      | 16953.76
    2048    | 2761.78     | 16393.84


    Obviously the 1Gb network performance is a bottleneck here, and we can improve our results significantly if we move to the 10Gb network.

    To see that the 1Gb network is the bottleneck, we can check the network traffic chart in PMM:

    We can see we achieved 116MiB/sec (or 928Mb/sec)  in throughput, which is very close to the network bandwidth.
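    A quick back-of-the-envelope check of those numbers (my arithmetic, not from the post):

```python
# 116 MiB/sec of payload, expressed in (binary) megabits.
mib_per_sec = 116
mbit_per_sec = mib_per_sec * 8               # 928, the figure quoted above

# A 1Gb/s link expressed in the same binary megabit units.
link_capacity = 1000 * 1e6 / (1024 * 1024)   # ~953.7

utilization = mbit_per_sec / link_capacity
print(round(utilization, 2))                 # ~0.97: the link is nearly saturated
```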

    But what can we do if our network infrastructure is limited to 1Gb?

    Benchmark N2. Protocol compression

    There is a feature in the MySQL protocol whereby you can enable compression for the network exchange between client and server:

    --mysql-compression=on  for sysbench.

    Let’s see how it will affect our results.

    threads | 1Gb network | 1Gb with compression
    1       | 326.13      | 198.33
    4       | 1143.36     | 771.59
    16      | 2400.19     | 2714
    32      | 2665.61     | 3939.73
    64      | 2838.47     | 4454.87
    96      | 2865.22     | 4770.83
    128     | 2867.46     | 5030.78
    256     | 2867.47     | 5134.57
    512     | 2867.27     | 5133.94
    1024    | 2865.4      | 5129.24
    2048    | 2761.78     | 5100.46


    Here is an interesting result. When we use all available network bandwidth, the protocol compression actually helps to improve the result.

    threads | 10Gb network | 10Gb with compression
    1       | 394.4        | 216.25
    4       | 1544.73      | 857.93
    16      | 5647.73      | 3202.2
    32      | 10256.11     | 5855.03
    64      | 15762.59     | 8973.23
    96      | 17626.77     | 9682.44
    128     | 18525.91     | 10006.91
    256     | 18529.4      | 9899.97
    512     | 17901.67     | 9612.34
    1024    | 16953.76     | 9270.27
    2048    | 16393.84     | 9123.84


    But this is not the case with the 10Gb network. The CPU resources needed for compression/decompression are a limiting factor, and with compression the throughput only reaches about half of what we get without compression.
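    The trade-off is easy to reproduce outside MySQL; this sketch uses Python's zlib on a repetitive payload (illustrative only, not the MySQL protocol's actual compression path):

```python
import zlib

# A highly repetitive payload, loosely mimicking rows from a benchmark table.
payload = b"sbtest-row-" * 100_000

compressed = zlib.compress(payload)

# Far fewer bytes to push through the network...
print(len(compressed) < len(payload) // 10)  # True
# ...but every byte saved costs CPU cycles, which is what caps
# throughput once the network itself is no longer the bottleneck.
```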

    Now let’s talk about protocol encryption, and how using SSL affects our results.

    Benchmark N3. Network encryption

    threads | 1Gb network | 1Gb SSL
    1       | 326.13      | 295.19
    4       | 1143.36     | 1070
    16      | 2400.19     | 2351.81
    32      | 2665.61     | 2630.53
    64      | 2838.47     | 2822.34
    96      | 2865.22     | 2837.04
    128     | 2867.46     | 2837.21
    256     | 2867.47     | 2837.12
    512     | 2867.27     | 2836.28
    1024    | 2865.4      | 1830.11
    2048    | 2761.78     | 1019.23

    threads | 10Gb network | 10Gb SSL
    1       | 394.4        | 359.8
    4       | 1544.73      | 1417.93
    16      | 5647.73      | 5235.1
    32      | 10256.11     | 9131.34
    64      | 15762.59     | 8248.6
    96      | 17626.77     | 7801.6
    128     | 18525.91     | 7107.31
    256     | 18529.4      | 4726.5
    512     | 17901.67     | 3067.55
    1024    | 16953.76     | 1812.83
    2048    | 16393.84     | 1013.22


    For the 1Gb network, SSL encryption shows some penalty – about 10% for a single thread – but otherwise we hit the bandwidth limit again. We also see a scalability hit at a high number of threads, which is more visible in the 10Gb network case.

    With 10Gb, the SSL protocol does not scale after 32 threads. Actually, it appears to be a scalability problem in OpenSSL 1.0, which MySQL currently uses.

    In our experiments, we saw that OpenSSL 1.1.1 provides much better scalability, but to achieve this you need a special build of MySQL compiled from source and linked against OpenSSL 1.1.1. I don’t show those results here, as we do not have production binaries.

    1. Network performance and utilization will affect the general application throughput.
    2. Check if you are hitting network bandwidth limits.
    3. Protocol compression can improve the results if you are limited by network bandwidth, but it can also make things worse if you are not.
    4. SSL encryption has some penalty (~10%) with a low number of threads, but it does not scale for high-concurrency workloads.

    MySQL Connector/Python 8.0.14+: Changed Expression Handling in mysqlx

    The X DevAPI allows you to work with JSON documents and SQL tables at the same time. Furthermore, the CRUD style API is more intuitive than SQL statements for some programmers. Either way, the X DevAPI allows you to mix JSON documents, SQL tables, CRUD methods, and SQL statements to give you the best of all worlds. In MySQL Connector/Python, the X DevAPI is implemented in the mysqlx module.

    This blog will look at how MySQL Connector/Python handles expressions, and how, in version 8.0.14 and later, you need to use the mysqlx.expr() method to explicitly define expressions.


    The changed way to work with expressions does not apply when defining fields. In that case, you can still specify the expression inline.

    Expression Handling

    One original feature of the X DevAPI in MySQL Connector/Python was that expressions were automatically handled when you inlined them into statement definitions. For example, you could increase the population like:

    result = countryinfo.modify("Name = :country") \
                        .set("demographics.Population",
                             "CAST(FLOOR(demographics.Population * 1.10) AS unsigned)") \
                        .bind("country", country_name) \
                        .execute()

    Notice the expression passed to the set() method.

    While this was nice, it also caused some problems such as Bug 92416 where regular data could end up being interpreted as an expression by mistake. For example:

    schema = db.create_schema("db1")
    mycol = schema.create_collection("mycol")
    mycol.add({"A": "(@)"}).execute()

    In this example (@) would be interpreted as an expression even though it should be taken as a string.
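    To see why inline detection is fragile, consider a toy heuristic. This is purely illustrative and is NOT the connector's actual parser; it only shows that no string-level rule can reliably separate literals from expressions:

    ```python
    # Toy illustration (NOT the connector's real parser) of why inline
    # expression detection is ambiguous: any literal containing
    # operator-like characters can look like an expression.

    def looks_like_expression(value: str) -> bool:
        """Naive guess: treat strings with operator-ish characters as expressions."""
        return any(ch in value for ch in "()@*+")

    # Intended as a literal string, but flagged as an "expression":
    print(looks_like_expression("(@)"))
    # A genuine expression is flagged the same way:
    print(looks_like_expression("CAST(FLOOR(x * 1.10) AS unsigned)"))
    ```

    Since the two cases are indistinguishable at the string level, the API now requires the caller to mark expressions explicitly with mysqlx.expr().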

    The solution has been to require the mysqlx.expr() method to be used to define all expressions. This then allows MySQL to interpret all strings as literals. While it does remove the shortcut method, it also removes any ambiguities and thus makes the code safer.


    The mysqlx.expr() method is also available in MySQL Connector/Python 8.0.13 and earlier as well as other connectors. If you have not yet upgraded to the latest version, it is recommended to start using mysqlx.expr() now to avoid problems when upgrading.

    Let’s look at an example to better understand how the change works.


    As an example, consider an application that uses the world_x database and updates the population of a country with 10%. This can be done using the following expression:

    CAST(FLOOR(demographics.Population * 1.10) AS unsigned)

    At first, the application will use the inline method to specify the expression, then we will look at changing this to work in MySQL Connector/Python 8.0.14 and later.

    Inline Expression

    The source code for updating the population using an inline expression can be seen in the following sample program:

    import mysqlx
    import mysql.connector

    connect_args = {
        "host": "",
        "port": 33060,
        "user": "pyuser",
        "password": "Py@pp4Demo",
    };

    print("Using inline expression.")
    print("MySQL Connector/Python {0}".format(mysql.connector.__version__))

    db = mysqlx.get_session(**connect_args)
    schema = db.get_schema("world_x")
    countryinfo = schema.get_collection("countryinfo")
    country_name = "Australia"

    db.start_transaction()

    # Get the population of the country
    population = countryinfo.find("Name = :country") \
                            .fields("demographics.Population AS Population") \
                            .bind("country", country_name)

    before = population.execute()
    print("Before ...: {0}".format(before.fetch_one()['Population']))

    # Update the population
    result = countryinfo.modify("Name = :country") \
                        .set("demographics.Population",
                             "CAST(FLOOR(demographics.Population * 1.10) AS unsigned)") \
                        .bind("country", country_name) \
                        .execute()

    after = population.execute()
    print("After ....: {0}".format(after.fetch_one()['Population']))

    # Leave the data in the same state as before the changes
    db.rollback()
    db.close()

    The population is updated in the modify() … set() statement near the end of the program. The population is fetched both before and after the update and printed together with the MySQL Connector/Python version. At the end, the transaction is rolled back, so it is possible to execute the program several times while getting the same output.


    The mysql.connector module is only included in order to print the MySQL Connector/Python version.

    The output using MySQL Connector/Python 8.0.13 is:

    Using inline expression.
    MySQL Connector/Python 8.0.13
    Before ...: 18886000
    After ....: 20774600

    This is as expected. However, in MySQL Connector/Python 8.0.14 and later, the result is quite different:

    Using inline expression.
    MySQL Connector/Python 8.0.15
    Before ...: 18886000
    After ....: CAST(FLOOR(demographics.Population * 1.10) AS unsigned)

    Now the expression is taken as a literal string – oops.


    This also highlights that you must be careful when working with a schemaless data model. The database will not stop you from putting wrong data types into your documents.

    Let’s look at how this can be fixed.

    Explicit Expressions

    The solution to the issue we just saw is to use explicit expressions, via the mysqlx.expr() method. This returns an expression object that you can use in your statements. The same example as before – but using an explicit expression – is:

    import mysqlx
    import mysql.connector

    connect_args = {
        "host": "",
        "port": 33060,
        "user": "pyuser",
        "password": "Py@pp4Demo",
    };

    print("Using explicit expression.")
    print("MySQL Connector/Python {0}".format(mysql.connector.__version__))

    db = mysqlx.get_session(**connect_args)
    schema = db.get_schema("world_x")
    countryinfo = schema.get_collection("countryinfo")
    country_name = "Australia"

    db.start_transaction()

    # Get the population of the country
    population = countryinfo.find("Name = :country") \
                            .fields("demographics.Population AS Population") \
                            .bind("country", country_name)

    before = population.execute()
    print("Before ...: {0}".format(before.fetch_one()['Population']))

    # Update the population
    expr = mysqlx.expr("CAST(FLOOR(demographics.Population * 1.10) AS unsigned)")
    result = countryinfo.modify("Name = :country") \
                        .set("demographics.Population", expr) \
                        .bind("country", country_name) \
                        .execute()

    after = population.execute()
    print("After ....: {0}".format(after.fetch_one()['Population']))

    # Leave the data in the same state as before the changes
    db.rollback()
    db.close()

    The only changes are the definition of the expression with mysqlx.expr() and passing the resulting object to the set() method.


    The expression object can be re-used if you need the same expression in several statements.

    Now, MySQL Connector/Python 8.0.13 and 8.0.15 update the population to the same value. First 8.0.13:

    Using explicit expression.
    MySQL Connector/Python 8.0.13
    Before ...: 18886000
    After ....: 20774600

    Then 8.0.15:

    Using explicit expression.
    MySQL Connector/Python 8.0.15
    Before ...: 18886000
    After ....: 20774600

    Further Reading

    If this blog has caught your interest in MySQL Connector/Python, whether you are looking at using the traditional Python Database API specification (PEP 249) or the new X DevAPI, then I have written MySQL Connector/Python Revealed, published by Apress.
    The book is available from Apress (print and DRM free ePub+PDF), Amazon (print and Kindle), Barnes & Noble (print), and others.


    Percona Server for MySQL 5.7.25-28 Is Now Available

    Percona is glad to announce the release of Percona Server 5.7.25-28 on February 18, 2019. Downloads are available here and from the Percona Software Repositories.

    This release is based on MySQL 5.7.25 and includes all the bug fixes in it. Percona Server 5.7.25-28 is now the current GA (Generally Available) release in the 5.7 series.

    All software developed by Percona is open-source and free.

    In this release, Percona Server introduces the variable binlog_skip_flush_commands. This variable controls whether or not FLUSH commands are written to the binary log. Setting this variable to ON can help avoid problems in replication. For more information, refer to our documentation.


    If you’re currently using Percona Server 5.7, Percona recommends upgrading to this version of 5.7 prior to upgrading to Percona Server 8.0.

    Bugs fixed
    • FLUSH commands written to the binary log could cause errors in case of replication. Bug fixed #1827 (upstream #88720).
    • Running LOCK TABLES FOR BACKUP followed by STOP SLAVE SQL_THREAD could block replication, preventing it from being restarted normally. Bug fixed #4758.
    • The ACCESS_DENIED field of the information_schema.user_statistics table was not updated correctly. Bug fixed #3956.
    • MySQL could report that the maximum number of connections was exceeded with too many connections being in the CLOSE_WAIT state. Bug fixed #4716 (upstream #92108).
    • Wrong query results could be received in semi-join subqueries with materialization-scan that allowed inner tables of different semi-join nests to interleave. Bug fixed #4907 (upstream bug #92809).
    • In some cases, the server using the MyRocks storage engine could crash when TTL (Time to Live) was defined on a table. Bug fixed #4911.
    • Running a SELECT statement with the ORDER BY and LIMIT clauses could result in less than optimal performance. Bug fixed #4949 (upstream #92850).
    • A typo was fixed: trottling was replaced with throttling. Bug fixed #240. Thanks to Michael Coburn for the patch.
    • MyRocks could crash while running START TRANSACTION WITH CONSISTENT SNAPSHOT if other transactions were in specific states. Bug fixed #4705.
    • In some cases, mysqld could crash when inserting data into a database whose name contained special characters (CVE-2018-20324). Bug fixed #5158.
    • MyRocks incorrectly processed transactions in which multiple statements had to be rolled back. Bug fixed #5219.
    • In some cases, the MyRocks storage engine could crash without triggering crash recovery. Bug fixed #5366.
    • When bootstrapped with undo or redo log encryption enabled on very fast storage, the server could fail to start. Bug fixed #4958.

    Other bugs fixed: #2455, #4791, #4855, #4996, #5268.

    This release also contains fixes for the following CVE issues: CVE-2019-2534, CVE-2019-2529, CVE-2019-2482, CVE-2019-2434.

    Find the release notes for Percona Server for MySQL 5.7.25-28 in our online documentation. Report bugs in the Jira bug tracker.


    How to Migrate from Oracle to MySQL / Percona Server

    Migrating from Oracle to MySQL/Percona Server is not a trivial task, although it is getting easier, especially with the arrival of MySQL 8.0 and Percona's announcement of Percona Server for MySQL 8.0 GA. Aside from planning your migration from Oracle to Percona Server, you must ensure that you understand why Percona Server is the right choice and what functionality you expect from it.

    This blog will focus on migrating from Oracle to Percona Server as the specific target database of choice. There is a page on the Oracle website, SQL Developer Supplementary Information for MySQL Migrations, which can be used as a reference for the planned migration. This blog will not cover the overall migration process, as it is a long one, but it will hopefully provide enough background information to serve as a guide for your migration process.

    Since Percona Server is a fork of MySQL, almost all features of MySQL are present in Percona Server, so any reference to MySQL here applies to Percona Server as well. We previously blogged about migrating Oracle Database to PostgreSQL. I’ll reiterate the reasons why one would consider migrating from Oracle to an open-source RDBMS such as PostgreSQL or Percona Server/MySQL/MariaDB:

    1. Cost: As you may know, Oracle license costs are very high, and there are additional costs for some features such as partitioning and high availability. So overall it's very expensive.
    2. Flexible open-source licensing and easy availability from public cloud providers like AWS.
    3. The ability to benefit from open-source add-ons to improve performance.
    Planning and Development Strategy

    Migration from Oracle to Percona Server 8.0 can be a pain, since there are a lot of key factors that need to be considered and addressed. For example, Oracle can run on a Windows Server machine, but Percona Server does not support Windows: although you can compile it for Windows, Percona itself does not offer any support for it. You must also identify your database architecture requirements, since Percona Server is not designed for OLAP (Online Analytical Processing) or data-warehousing applications; Percona Server/MySQL is a perfect fit for OLTP (Online Transaction Processing).

    Identify the key aspects of your database architecture. For example, if your current Oracle architecture implements MAA (Maximum Availability Architecture) with Data Guard + Oracle RAC (Real Application Clusters), you should determine its equivalent in Percona Server. There is no straight answer for this within MySQL/Percona Server. However, you can choose from synchronous replication (Percona XtraDB Cluster, still currently on version 5.7.x), asynchronous replication, or Group Replication. Then, there are multiple alternatives that you can implement for your own high-availability solution, for example (to name a few) the Corosync/Pacemaker/DRBD/Linux stack, MHA (Master High Availability), the Keepalived/HAProxy/ProxySQL stack, or plainly relying on ClusterControl, which supports Keepalived, HAProxy, ProxySQL, Garbd, and MaxScale for your high-availability solutions.

    On the other side, a question you also have to consider as part of the plan is: "How will Percona provide support, who will help us when Percona Server itself encounters a bug, and how urgently will we get help when we need it?" Another thing to consider is budget, if the purpose of migrating from an enterprise database to an open-source RDBMS is cost-cutting.

    There are different options, from migration planning to the things you need to do as part of your development strategy. Such options include engaging with experts in the MySQL/Percona Server field, and that includes us here at Severalnines. There are lots of MySQL consulting firms that can help you through this, since migration from Oracle to MySQL requires a lot of expertise and know-how in the MySQL Server area. This should not be limited to the database itself but should cover expertise in scalability, redundancy, backups, high availability, security, monitoring/observability, recovery, and engagement with mission-critical systems. Overall, the consultants should have insight into your architecture without exposing confidential data.

    Assessment or Preliminary Check

    Backing up your data, including configuration or setup files, kernel tunings, and automation scripts, should not be overlooked. It's an obvious task, but before you migrate, always secure everything first, especially when moving to a different platform.

    You must also assess whether your applications follow up-to-date software engineering conventions and are platform agnostic. These practices work to your benefit, especially when moving to a different database platform such as Percona Server for MySQL.

    Take note that the operating system requirements of Percona Server can be a show-stopper: if your application and database run on a Windows Server and the application is Windows-dependent, this could be a lot of work. Always remember that Percona Server runs on a different platform; perfection might not be guaranteed, but you can get close enough.

    Lastly, make sure that the target hardware works well with Percona Server's requirements, or at least that it is bug-free (see here). You may consider stress testing with Percona Server first, before confidently moving off your Oracle Database.

    What You Should Know

    It is worth noting that in Percona Server / MySQL you can create multiple databases within a single instance, whereas Oracle does not come with that same functionality.

    In MySQL, a schema is physically synonymous with a database. You can substitute the keyword SCHEMA for DATABASE in MySQL SQL syntax, for example using CREATE SCHEMA instead of CREATE DATABASE. Oracle, on the other hand, makes a distinction: a schema represents only a part of a database, namely the tables and other objects owned by a single user, and normally there is a one-to-one relationship between the instance and the database.

    For example, in a replication setup equivalent in Oracle (e.g. Real Application Clusters or RAC), you have your multiple instances accessing a single database. This lets you start Oracle on multiple servers, but all accessing the same data. However, in MySQL, you can allow access to multiple databases from your multiple instances and can even filter out which databases/schema you can replicate to a MySQL node.

    Referencing one of our previous blogs, the same principle applies when converting your database with the tools available on the internet.

    There is no such tool that can 100% convert Oracle database into Percona Server / MySQL; some of it will be manual work.

    Check out the following sections for things you must be aware of when it comes to migration and verifying logical SQL results.

    Data Type Mapping

    MySQL / Percona Server has a number of data types that are almost the same as Oracle's, but the set is not as rich as Oracle's. However, since version 5.7.8, MySQL supports a native JSON data type.

    Below is its data-type equivalent representation (tabular representation is taken from here):

     #  Oracle                         Description                                       MySQL
     1  BFILE                          Pointer to binary file, <= 4G                     VARCHAR(255)
     2  BINARY_FLOAT                   32-bit floating-point number                      FLOAT
     3  BINARY_DOUBLE                  64-bit floating-point number                      DOUBLE
     4  BLOB                           Binary large object, <= 4G                        LONGBLOB
     5  CHAR(n), CHARACTER(n)          Fixed-length string, 1 <= n <= 255                CHAR(n), CHARACTER(n)
     6  CHAR(n), CHARACTER(n)          Fixed-length string, 256 <= n <= 2000             VARCHAR(n)
     7  CLOB                           Character large object, <= 4G                     LONGTEXT
     8  DATE                           Date and time                                     DATETIME
     9  DECIMAL(p,s), DEC(p,s)         Fixed-point number                                DECIMAL(p,s), DEC(p,s)
    10  DOUBLE PRECISION               Floating-point number                             DOUBLE PRECISION
    11  FLOAT(p)                       Floating-point number                             DOUBLE
    12  INTEGER, INT                   38-digit integer                                  INT, DECIMAL(38)
    13  INTERVAL YEAR(p) TO MONTH      Date interval                                     VARCHAR(30)
    14  INTERVAL DAY(p) TO SECOND(s)   Day and time interval                             VARCHAR(30)
    15  LONG                           Character data, <= 2G                             LONGTEXT
    16  LONG RAW                       Binary data, <= 2G                                LONGBLOB
    17  NCHAR(n)                       Fixed-length UTF-8 string, 1 <= n <= 255          NCHAR(n)
    18  NCHAR(n)                       Fixed-length UTF-8 string, 256 <= n <= 2000       NVARCHAR(n)
    19  NCHAR VARYING(n)               Varying-length UTF-8 string, 1 <= n <= 4000       NCHAR VARYING(n)
    20  NCLOB                          Variable-length Unicode string, <= 4G             NVARCHAR(max)
    21  NUMBER(p,0), NUMBER(p)         8-bit integer, 1 <= p < 3                         TINYINT (0 to 255)
                                       16-bit integer, 3 <= p < 5                        SMALLINT
                                       32-bit integer, 5 <= p < 9                        INT
                                       64-bit integer, 9 <= p < 19                       BIGINT
                                       Fixed-point number, 19 <= p <= 38                 DECIMAL(p)
    22  NUMBER(p,s)                    Fixed-point number, s > 0                         DECIMAL(p,s)
    23  NUMBER, NUMBER(*)              Floating-point number                             DOUBLE
    24  NUMERIC(p,s)                   Fixed-point number                                NUMERIC(p,s)
    25  NVARCHAR2(n)                   Variable-length UTF-8 string, 1 <= n <= 4000      NVARCHAR(n)
    26  RAW(n)                         Variable-length binary string, 1 <= n <= 255      BINARY(n)
    27  RAW(n)                         Variable-length binary string, 256 <= n <= 2000   VARBINARY(n)
    28  REAL                           Floating-point number                             DOUBLE
    29  ROWID                          Physical row address                              CHAR(10)
    30  SMALLINT                       38-digit integer                                  DECIMAL(38)
    31  TIMESTAMP(p)                   Date and time with fraction                       DATETIME(p)
    32  TIMESTAMP(p) WITH TIME ZONE    Date and time with fraction and time zone         DATETIME(p)
    33  UROWID(n)                      Logical row addresses, 1 <= n <= 4000             VARCHAR(n)
    34  VARCHAR(n)                     Variable-length string, 1 <= n <= 4000            VARCHAR(n)
    35  VARCHAR2(n)                    Variable-length string, 1 <= n <= 4000            VARCHAR(n)
    36  XMLTYPE                        XML data                                          LONGTEXT
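    Row 21 of the table above encodes a precision-based mapping from Oracle's NUMBER(p) to MySQL integer types. That rule can be captured as a small helper for migration planning scripts (a sketch based on the table only, not an official migration tool):

    ```python
    # Suggest a MySQL type for Oracle NUMBER(p) with scale 0, following
    # the precision ranges in row 21 of the mapping table above.

    def mysql_int_type_for_number(p: int) -> str:
        """Map Oracle NUMBER(p), scale 0, to a MySQL integer/decimal type."""
        if 1 <= p < 3:
            return "TINYINT"
        if 3 <= p < 5:
            return "SMALLINT"
        if 5 <= p < 9:
            return "INT"
        if 9 <= p < 19:
            return "BIGINT"
        if 19 <= p <= 38:
            return f"DECIMAL({p})"
        raise ValueError("Oracle NUMBER precision must be between 1 and 38")

    print(mysql_int_type_for_number(10))   # NUMBER(10) fits in BIGINT
    ```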

    Data type attributes and options:

    Oracle: BYTE and CHAR column size semantics
    MySQL: size is always in characters

    Transactions

    Percona Server uses XtraDB (an enhanced version of InnoDB) as its primary storage engine for handling transactional data; although various storage engines can be an alternative choice for handling transactions such as TokuDB (deprecated) and MyRocks storage engines.

    Whilst there are advantages and benefits to using or exploring MyRocks, XtraDB is the more robust, de facto storage engine that Percona Server uses, and it is enabled by default, so we'll use it as the basis for migration with regard to transactions.

    By default, Percona Server / MySQL has the autocommit variable set to ON, which means that you have to handle transactional statements explicitly to take advantage of ROLLBACK for discarding changes, or to use SAVEPOINT.

    It's basically the same concept that Oracle uses in terms of commit, rollbacks and savepoints.

    For explicit transactions, this means that you have to use the START TRANSACTION/BEGIN; <SQL STATEMENTS>; COMMIT; syntax.

    Otherwise, if you disable autocommit, you have to explicitly COMMIT every statement that changes your data.
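    The explicit-transaction flow described above can be demonstrated self-contained with Python's built-in sqlite3 module (used here only so the example runs without a server; the same BEGIN/COMMIT/ROLLBACK pattern applies with a MySQL connector against Percona Server):

    ```python
    # Demonstrate explicit transaction handling: a change made inside a
    # transaction disappears after ROLLBACK. sqlite3 stands in for a
    # MySQL connection purely so this runs self-contained.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES (1, 100)")
    conn.commit()

    # Change data inside a transaction, then roll it back.
    conn.execute("UPDATE accounts SET balance = 0 WHERE id = 1")
    conn.rollback()

    balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
    print(balance)  # the rollback restored the committed value
    conn.close()
    ```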

    Dual Table

    For compatibility with Oracle, MySQL provides a dummy table named DUAL.

    This matches Oracle's usage of DUAL, so any existing statements in your application that use DUAL might require no changes upon migration to Percona Server.

    In Oracle, the FROM clause is mandatory for every SELECT statement, so the DUAL table is used for SELECT statements where no table name is required.

    In MySQL, the FROM clause is not mandatory, so the DUAL table is not necessary. However, DUAL does not work exactly the same as it does in Oracle; for simple SELECTs in Percona Server, though, it is fine.

    See the following example below:

    In Oracle,

    SQL> DESC DUAL;
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     DUMMY                                              VARCHAR2(1)

    SQL> SELECT CURRENT_TIMESTAMP FROM DUAL;

    CURRENT_TIMESTAMP
    ---------------------------------------------------------------------------
    16-FEB-19 AM +08:00

    But in MySQL:

    mysql> DESC DUAL;
    ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
    corresponds to your MySQL server version for the right syntax to use near 'DUAL' at line 1

    mysql> SELECT CURRENT_TIMESTAMP FROM DUAL;
    +---------------------+
    | CURRENT_TIMESTAMP   |
    +---------------------+
    | 2019-02-15 20:20:28 |
    +---------------------+
    1 row in set (0.00 sec)

    Note: the DESC DUAL syntax does not work in MySQL and the results as well differ as CURRENT_TIMESTAMP (uses TIMESTAMP data type) in MySQL does not include the timezone.


    SYSDATE

    Oracle's SYSDATE function has a near equivalent in MySQL.

    MySQL's SYSDATE() returns both the date and the time, and it is a function that requires parentheses (with no arguments). To demonstrate this, here is SYSDATE in Oracle and in MySQL.

    In Oracle, plain SYSDATE just returns the date of the day without the time. To get both the time and the date, use TO_CHAR to convert the datetime into the desired format; in MySQL, you might not need that, as it returns both.

    See example below.

    In Oracle:

    SQL> SELECT TO_CHAR (SYSDATE, 'MM-DD-YYYY HH24:MI:SS') "NOW" FROM DUAL;

    NOW
    -------------------
    02-16-2019 04:39:00

    SQL> SELECT SYSDATE FROM DUAL;

    SYSDATE
    ---------
    16-FEB-19

    But in MySQL:

    mysql> SELECT SYSDATE() FROM DUAL;
    +---------------------+
    | SYSDATE()           |
    +---------------------+
    | 2019-02-15 20:37:36 |
    +---------------------+
    1 row in set (0.00 sec)

    If you want to format the date, MySQL has a DATE_FORMAT() function.

    You can check the MySQL Date and Time documentation for more info.

    TO_DATE

    Oracle's TO_DATE equivalent in MySQL is the STR_TO_DATE() function.

    It’s almost identical, except that Oracle's TO_DATE returns the DATE data type, while MySQL's STR_TO_DATE() returns the DATETIME data type.


    SQL> SELECT TO_DATE ('20190218121212','yyyymmddhh24miss') as "NOW" FROM DUAL;

    NOW
    ---------
    18-FEB-19


    mysql> SELECT STR_TO_DATE('2019-02-18 12:12:12','%Y-%m-%d %H:%i:%s') as "NOW" FROM DUAL;
    +---------------------+
    | NOW                 |
    +---------------------+
    | 2019-02-18 12:12:12 |
    +---------------------+
    1 row in set (0.00 sec)

    SYNONYM

    In MySQL, there is no support for, nor any equivalent of, Oracle's SYNONYM.

    A possible alternative is to use a VIEW.

    Although a SYNONYM can also be used to create an alias for a remote table, in MySQL you can instead take advantage of the FEDERATED storage engine. For example:


    CREATE TABLE hr_employees (
        id INT(20) NOT NULL AUTO_INCREMENT,
        name VARCHAR(32) NOT NULL DEFAULT '',
        other INT(20) NOT NULL DEFAULT '0',
        PRIMARY KEY (id),
        INDEX name (name),
        INDEX other_key (other)
    )
    ENGINE=FEDERATED
    DEFAULT CHARSET=utf8mb4
    CONNECTION='mysql://fed_user@remote_host:9306/federated/test_table';

    Or you can simplify the process with the CREATE SERVER syntax, so that creating a table acting as your SYNONYM for accessing a remote table is easier. See the documentation for more information.

    Behaviour of Empty String and NULL

    Take note that in Percona Server / MySQL an empty string is not NULL, whereas Oracle treats empty strings as null values.
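    This difference is easy to check locally. Python's built-in sqlite3 happens to agree with MySQL on this point (an empty string is a value, not NULL), so a self-contained check looks like:

    ```python
    # Evaluate the same CASE expression used in the examples below.
    # sqlite3 matches MySQL's behavior here: '' is NOT NULL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    empty_is_null = conn.execute(
        "SELECT CASE WHEN '' IS NULL THEN 'Yes' ELSE 'No' END"
    ).fetchone()[0]
    print(empty_is_null)  # an empty string is a value, not NULL
    conn.close()
    ```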

    In Oracle:

    SQL> SELECT CASE WHEN '' IS NULL THEN 'Yes' ELSE 'No' END AS "Null Eval" FROM dual;

    Nul
    ---
    Yes

    In MySQL:

    mysql> SELECT CASE WHEN '' IS NULL THEN 'Yes' ELSE 'No' END AS "Null Eval" FROM dual;
    +-----------+
    | Null Eval |
    +-----------+
    | No        |
    +-----------+
    1 row in set (0.00 sec)

    Sequences

    In MySQL, there is no exact equivalent of Oracle's SEQUENCE.

    Although there are some posts that simulate this functionality, you can retrieve the most recently generated key with LAST_INSERT_ID(), as long as your table's clustered index (the PRIMARY KEY) is defined with AUTO_INCREMENT.
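    The LAST_INSERT_ID() pattern can be sketched self-contained with Python's sqlite3, where cursor.lastrowid plays the same role (with a MySQL connector you would insert into a table whose PRIMARY KEY is AUTO_INCREMENT and then read LAST_INSERT_ID()):

    ```python
    # Auto-assigned keys retrieved after each insert. In SQLite an
    # INTEGER PRIMARY KEY auto-assigns ids, analogous to AUTO_INCREMENT;
    # cursor.lastrowid is the sqlite3 counterpart of LAST_INSERT_ID().
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE seq_demo (id INTEGER PRIMARY KEY, label TEXT)")

    cur = conn.execute("INSERT INTO seq_demo (label) VALUES ('first')")
    first_id = cur.lastrowid
    cur = conn.execute("INSERT INTO seq_demo (label) VALUES ('second')")
    second_id = cur.lastrowid

    print(first_id, second_id)  # ids are assigned sequentially
    conn.close()
    ```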

    Character String Functions

    MySQL / Percona Server has a good handful of string functions, but not as many helpful built-in functions as Oracle.

    It would be too long to discuss it here one-by-one, but you can check the documentation from MySQL and compare this against Oracle's string functions.

    DML Statements

    INSERT, UPDATE, and DELETE statements from Oracle work much the same in MySQL.

    However, Oracle's INSERT ALL/INSERT FIRST is not supported in MySQL.

    Instead, you need to state your MySQL INSERT statements one by one.


    In Oracle:

    SQL> INSERT ALL
           INTO CUSTOMERS (customer_id, customer_name, city) VALUES (1000, 'Jase Alagaban', 'Davao City')
           INTO CUSTOMERS (customer_id, customer_name, city) VALUES (2000, 'Maximus Aleksandre Namuag', 'Davao City')
         SELECT * FROM dual;

    2 rows created.

    But in MySQL, you have to run the insert one at a time:

    mysql> INSERT INTO CUSTOMERS (customer_id, customer_name, city) VALUES (1000, 'Jase Alagaban', 'Davao City');
    Query OK, 1 row affected (0.02 sec)

    mysql> INSERT INTO CUSTOMERS (customer_id, customer_name, city) VALUES (2000, 'Maximus Aleksandre Namuag', 'Davao City');
    Query OK, 1 row affected (0.00 sec)

    Nothing in MySQL compares to INSERT ALL/INSERT FIRST's conditional form either, where you can add WHEN conditions to the syntax; there is no equivalent option in MySQL / Percona Server.

    Hence, your alternative solution for this is to use stored procedures.
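    That said, for plain unconditional multi-row inserts, MySQL's multi-row VALUES syntax covers the most common INSERT ALL use case in a single statement. A self-contained sketch (run with Python's sqlite3; the SQL shape is the same in MySQL / Percona Server):

    ```python
    # One statement inserting several rows via multi-row VALUES,
    # the common replacement for unconditional INSERT ALL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE CUSTOMERS (customer_id INTEGER, customer_name TEXT, city TEXT)"
    )
    conn.execute(
        "INSERT INTO CUSTOMERS (customer_id, customer_name, city) "
        "VALUES (1000, 'Jase Alagaban', 'Davao City'), "
        "(2000, 'Maximus Aleksandre Namuag', 'Davao City')"
    )
    count = conn.execute("SELECT COUNT(*) FROM CUSTOMERS").fetchone()[0]
    print(count)  # both rows inserted by a single statement
    conn.close()
    ```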

    Outer Joins "+" Symbol

    Oracle's + operator for left and right outer joins is not supported in MySQL, where + is only an arithmetic operator.

    Hence, if you have the + join operator in your existing Oracle SQL statements, you need to replace it with LEFT JOIN or RIGHT JOIN.

    You might want to check the official documentation for "Outer Join Simplification" of MySQL.


    Hierarchical Queries

    Oracle uses START WITH..CONNECT BY for hierarchical queries.

    Starting with MySQL / Percona Server 8.0, there is support for generating hierarchical data results, using models such as the adjacency list or nested set model. This is done with Common Table Expressions (CTEs) in MySQL.

    Similar to PostgreSQL, MySQL uses the WITH RECURSIVE syntax for hierarchical queries, so translate your CONNECT BY statements into WITH RECURSIVE statements.

    See below for how it differs between Oracle and MySQL / Percona Server.
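    Before the side-by-side comparison, here is a runnable sketch of this kind of recursive CTE, using Python's sqlite3 (which also supports WITH RECURSIVE); the category table layout and sample rows are invented for illustration:

    ```python
    # Build a small parent/child category tree and walk it with a
    # recursive CTE, producing "parent > child" paths.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE category (id INTEGER PRIMARY KEY, title TEXT, parent_id INTEGER)"
    )
    conn.executemany(
        "INSERT INTO category VALUES (?, ?, ?)",
        [(1, "Books", None), (2, "Databases", 1), (3, "MySQL", 2)],
    )

    rows = conn.execute("""
        WITH RECURSIVE category_path (id, title, path) AS (
            SELECT id, title, title FROM category WHERE parent_id IS NULL
            UNION ALL
            SELECT, c.title, cp.path || ' > ' || c.title
              FROM category_path AS cp
              JOIN category AS c ON = c.parent_id
        )
        SELECT path FROM category_path ORDER BY path
    """).fetchall()
    for (path,) in rows:
        print(path)
    conn.close()
    ```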

    In Oracle:

    SELECT, cp.title, CONCAT(c2.title, ' > ' || cp.title) as path
      FROM category cp INNER JOIN category c2
        ON cp.parent_id =
     WHERE cp.parent_id IS NOT NULL
     START WITH >= 1
    CONNECT BY NOCYCLE PRIOR;

    And in MySQL:

    WITH RECURSIVE category_path (id, title, path) AS
    (
      SELECT id, title, title as path
        FROM category
       WHERE parent_id IS NULL
      UNION ALL
      SELECT, c.title, CONCAT(cp.path, ' > ', c.title)
        FROM category_path AS cp JOIN category AS c
          ON = c.parent_id
    )
    SELECT * FROM category_path ORDER BY path;

    PL/SQL in MySQL / Percona?

    MySQL / Percona Server has a different approach to Oracle's PL/SQL.

    MySQL uses stored procedures and stored functions, which are similar to PL/SQL and use BEGIN..END syntax.

    Oracle's PL/SQL is compiled before execution, when it is loaded into the server, while MySQL routines are compiled when invoked and stored in a cache.

    You may want to checkout this documentation as a reference guide on converting your PL/SQL to MySQL.

    Migration Tools

    I did some research for any tools that could be a de facto standard for migration but I couldn’t find a good answer.

    Though, I did find sqlines and it looks simple but promising.

    While I didn’t deep-dive into it, the website offers a handful of insights, which could help you on migrating from Oracle to MySQL/Percona Server. There are also paid tools such as this and this.

    I've also searched through GitHub but found nothing more appealing as a solution to the problem. Hence, if you're aiming to migrate from Oracle into Amazon, they have the AWS Schema Conversion Tool, which supports migration from Oracle to MySQL.

    Overall, the reason why migration is not an easy thing to do is mainly because Oracle RDBMS is such a beast with lots of features that Percona Server / MySQL or MariaDB RDBMS still do not have.

    Anyhow, if you find or know of any tools that you find helpful and beneficial for migrating from Oracle to MySQL / Percona Server, please leave a comment on this blog!


    Testing

    As part of your migration plan, testing is a vital task that plays a very important role in, and affects, your migration decisions.

    dbdeployer (a port of MySQL Sandbox) is a very helpful tool that you can take advantage of. It makes it easy to try and test different approaches, and it saves you time compared with setting up the whole stack, if your purpose is to try and test the RDBMS platform first.

    For testing your SQL stored routines (functions or procedures), triggers, and events, I suggest you use tools such as mytap or Google's Unit Testing Framework.

    Percona also offers a number of tools available for download on their website. Check out Percona Toolkit here. You can cherry-pick the tools according to your needs, especially for testing and production-usage tasks.


    Overall, the things that you need to keep in mind as guidelines when testing your MySQL Server are:

    • After your installation, you need to consider doing some tuning. Check out this Percona blog for help.
    • Do some benchmarking and stress-load testing of your configuration setup on your current node. Check out mysqlslap and sysbench, which can help you with this. Also check out our blog "How to Benchmark Performance of MySQL & MariaDB using SysBench".
    • Check your DDLs to ensure they are correctly defined, including data types, constraints, clustered and secondary indexes, and partitions, if you have any.
    • Check your DML, especially that the syntax is correct and that data is saved as expected.
    • Check your stored routines, events, and triggers to ensure they run and return the expected results.
    • Verify that your queries are performant. I suggest you take advantage of open-source tools or try our ClusterControl product. It offers monitoring/observability, especially of your MySQL / Percona Server. You can use ClusterControl to monitor your queries and their query plans to make sure they are performant.
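As a rough, self-contained illustration of the timing side of such checks, here is a minimal Python harness. It uses the stdlib sqlite3 module as a stand-in database so the sketch runs anywhere; for real measurements you would point it at your MySQL test node through a connector, and for serious load testing use mysqlslap or sysbench as mentioned above.

```python
import sqlite3
import time

def time_query(conn, query, iterations=1000):
    """Run `query` repeatedly and return (total_seconds, seconds_per_query)."""
    start = time.perf_counter()
    for _ in range(iterations):
        conn.execute(query).fetchall()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed / iterations

# Stand-in schema and data; against MySQL you would connect to your test node instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [("row%d" % i,) for i in range(1000)])

total, per_query = time_query(conn, "SELECT v FROM t WHERE id = 500")
print("total: %.4fs, per query: %.6fs" % (total, per_query))
```

Comparing per-query latency before and after a schema or configuration change gives you a quick signal on whether the change helped, before committing to a full sysbench run.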

    Deprecation of TLSv1.0 2019-02-28

    Ahead of the PCI move to deprecate the use of ‘early TLS’, we’ve previously taken steps to disable TLSv1.0.

    Unfortunately at that time we encountered some issues which led us to rollback these changes. This was to allow users of operating systems that did not – yet – support TLSv1.1 or higher to download Percona packages over TLSv1.0.

Since then, we have been tracking our usage statistics for older operating systems that don't support TLSv1.1 or higher. We now receive very few legitimate requests for these downloads.

    Consequently,  we are ending support for TLSv1.0 on all Percona web properties.

While the packages will still be available for download, we are unlikely to update them, as the OSes are end-of-life (e.g. RHEL5). Also, in future you will need to download these packages from a client that supports TLSv1.1 or greater.

For example, EL5 will not receive an updated version of OpenSSL to support TLSv1.1 or greater. PCI has called for the deprecation of ‘early TLS’ standards, so you should upgrade any EL5 installations to EL6 or greater as soon as possible. As noted in this support policy update by Red Hat, EL5 stopped receiving support under Extended Update Support (EUS) in March 2015.
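To check whether a given client's SSL library can still speak TLSv1.1 or higher, here is a quick Python sketch; the stdlib ssl module reports the capabilities of the OpenSSL build it is linked against.

```python
import ssl

# The OpenSSL version this Python build is linked against.
print(ssl.OPENSSL_VERSION)

# Whether the linked OpenSSL supports TLSv1.1 and TLSv1.2;
# at least one must be True to keep downloading packages after the change.
print("TLSv1.1 supported:", ssl.HAS_TLSv1_1)
print("TLSv1.2 supported:", ssl.HAS_TLSv1_2)
```

On an EL5-era OpenSSL build, both flags would report False, which is exactly the situation that calls for upgrading the OS.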

To continue to receive updates for your OS and for any Percona products that you use, you need to update to more recent versions of CentOS, Scientific Linux, or Red Hat Enterprise Linux.

    Photo by Kevin Noble on Unsplash

    7 JavaScript Playgrounds to Use in 2019

The importance of online code editing platforms cannot be overemphasized. As much as we love our local IDEs, one too many times we find ourselves needing to quickly share and/or collaborate with a friend or colleague on our local projects.

In cases like this, online playgrounds give us that added functionality to write and run code as well as share snippets and have colleagues test it out and give us feedback. In this post, we'll be looking at the top 7 online code editors for JavaScript, in no particular order.


CodePen

    CodePen is an online social development environment for front-end developers. It comes fully equipped with all the features you'll need to build, test, share, collaborate and even deploy your websites.

Plans and Features

    CodePen plans are flexible enough to accommodate whatever needs you might have. First, there are individual and team plans. Under the individual plans, you have a range of other plans that have been carefully structured to offer you a wide range of choices depending on what features are more important to you and on how much you can afford.

The free plan has all you need to perform usual operations like creating pens, sharing snippets, running tests and so much more. If you're developing for experimental purposes or just building out your project, the free plan has you covered.

    The PRO plan, however, adds a little bit to what you can do with CodePen. With CodePen PRO, you get unlimited privacy, asset hosting space, live view, embedded themes and so much more. On the PRO plan, you won't miss your local IDE. If you have specific needs for these features and you've got a few dollars to spare, then these plans are totally worth the price.

You can visit the pricing page to have a closer look at what each plan offers to help you decide. That's not all, however: as we mentioned earlier, there's also a team plan for people working in teams. The price of the team plan depends on the number of members in your team, with a standard price of $12/month/member.


CodeSandbox

    CodeSandbox is an online editor that allows developers to focus on writing code while it handles all the necessary background processes and configurations. Unlike CodePen, CodeSandbox focuses more on building and sharing code demos that contain back-end components. The editor is optimized to analyze npm dependencies, show custom error messages, and also make projects searchable by npm dependency.

CodeSandbox offers dedicated sandboxes to help developers quickly get started on developing with their favorite tools to build web applications. With CodeSandbox, all you need to do is open the browser, select your preferred development tool and start writing code. It abstracts all the background tasks and configurations so that you only need to worry about writing your code. With CodeSandbox, we can do all of the following and more:

GitHub integration

    With CodeSandbox 2.0, you can commit, create repositories, and open pull requests right from within CodeSandbox.

Host static files

    CodeSandbox hosts all the files in the public directory for you. Previously this functionality didn't exist on CodeSandbox, but now, you can add static images, web workers, etc. As a result, you can imitate all the functionalities of your local development server, as you now have full access to the index.html file.

Real-time collaboration

    With CodeSandbox, you can create a project and open a Live Session to generate a live URL you can share with colleagues. Through the URL, all your colleagues can work together on the project at the same time. This may become difficult to manage when the number grows and you can't track what each person is doing, so you can switch to a Classroom Mode where you specify who can edit the sandbox and who can only view it.

Visual Studio Code integration

    CodeSandbox has a feature called VSCode in browser. It gives you access to VSCode features like keybindings, user snippets, breadcrumbs, etc. All you need to do is copy your settings files directly from VSCode to CodeSandbox, and you will have control of all the files on your sandbox. These are only a handful of the features you'll get with CodeSandbox; there are more, like working in teams, sandbox containers, etc. I invite you to check out their docs page for more.

    Back-end support

It is worth noting that CodeSandbox is one of the few online playgrounds with support for back-end languages like Node.js. What's more, it has npm support, so you can install any npm package you require in seconds.


StackBlitz

    StackBlitz is a VSCode-powered online playground for web developers. It offers developers the ability to create projects in just one click, and its collaboration features make it possible to share a created project with colleagues through a unique project URL.

    Seamless project setup

    Like the other playgrounds we've already mentioned, StackBlitz automatically takes care of all the background processes involved in setting up projects, like installing dependencies, compiling, and bundling. This way it abstracts the setup process so you can focus on what you do best: writing code. StackBlitz is renowned for its ability to handle Angular and React projects; setting up a new one takes just the click of a button.

VSCode similarity

    Because VSCode powers StackBlitz, it comes packed with all the aesthetic features we love in VSCode, giving you the look and feel of your local IDE.

TypeScript support

    StackBlitz has TypeScript support and TypeScript auto-completion, a feature that is not found in other online editors like Plunker.

NPM support

    With StackBlitz, you can import any npm package into your project, just like you would in VSCode. Better still, you can copy snippets from documentation pages and blogs into the editor and it'll automatically detect the missing packages and prompt you to install them.

    Image credit to Eric Simons.

Offline support

    Thanks to StackBlitz's in-browser development server, you can continue writing and editing code even when offline. This gives you the superpower to build offline while leveraging the power of the online platform, killing two birds with one stone. We can't possibly go over all the perks, but you're welcome to see them for yourself on the website.

JSFiddle

JSFiddle is an online playground for creating, testing and showcasing collaborative HTML, CSS and JavaScript code snippets, known as 'fiddles'. It is one of the earliest playgrounds and laid the foundation for the creation of other modern-day playgrounds. It may seem a bit basic compared to modern playgrounds like CodePen; however, it still performs the basic playground operations of testing, sharing, and collaboration. As an added advantage, JSFiddle can also perform complex Ajax simulations. JSFiddle lets you quickly get started by providing unique boilerplate templates for diverse technologies, so generating boilerplate code for React, Vue, or whichever technology you want takes just about the click of a button.

As much as JSFiddle behaves like other playgrounds, it has certain features that make it unique in its own way. Unlike most others, with JSFiddle you can customize, add, and use other JavaScript tools and frameworks easily. With TogetherJS support, JSFiddle offers an interactive user interface that lets users collaborate and share fiddles with colleagues. Moreover, it has inbuilt Git support. You can also use JSFiddle for any of the following:

    • Bug reporting (test-case) for Github Issues
    • Presenting code answers on Stack Overflow
    • Live code collaboration
    • Code snippets hosting
JSBin

JSBin is a live playground for HTML, CSS and JavaScript, plus a range of preprocessors like Jade, Markdown and much more. It provides specific editor panels with an intuitive user interface for user interactivity. With JSBin, users can create bins, share, collaborate and test code.

JSBin is mostly focused on the ability to share code. It shares not just the code but also the complete output result of the code in the bin. Its real-time collaboration features let colleagues view and see changes in real time as you type in the editor panels.

Though JSBin opens with default HTML code on start, there are many libraries available for you. You can add as many libraries as your project requires to build your desired product.

With JSBin, every created bin has a unique sharing URL that other users can join to collaborate on, fork, and/or edit the bin, while your original bin remains intact. Moreover, with JSBin you can perform the following operations and more:

    • Export your bins as a gist
    • Live-reload bins in the editor and full preview
    • Download bins
    • Save snapshots of bins
    • Templatize bins
    • Archive bins
    • etc.

JSBin also has a PRO plan that adds more functionality to what you can do, giving you an additional layer of features to extend your bins. With JSBin PRO, you get:

    • SSL embeds
    • Custom embed CSS and editor settings
    • Sandbox mode - does not save the bin
    • Private bins
    • Dropbox
    • Vanity URLs - blog post
    • Asset hosting

Scrimba

    Scrimba is an entirely different kind of online playground from what we've seen so far. It is built around interactive coding screencasts: it offers you the ability to pause the video, edit the instructor's code, and see the result in the editor.

In-video interaction

    This provides an extra layer of interactivity on top of the usual playground functionality. Scrimba lets you take an instructor's demo and build it up into whatever use case you desire. In the playground mode, you can interact with the instructor's code however you please: you can edit, copy, paste, and basically perform any interactive operation you desire.

Cross-platform

    Scrimba is completely cross-platform. It adapts to any screen size and arranges content in both portrait and landscape mode to match your screen's resolution requirements; this way, it constantly serves you the best visual output.

Console and dependencies

    Scrimba has a built-in console that both the user and the instructor can access to run commands and debug code. You can log messages just the same way you would in any other editor. It also lets you install external dependencies into your project. What's better, the Scrimba team has designed the console in a way that lets you see the console output without opening the console panel.

Analytics

    Scrimba has a sidebar notes feature that tracks all the changes you make in the editor. This is a handy feature that lets you keep track of all your changes and refer back to them when the need arises.


Liveweave

    Liveweave is an online playground for HTML, CSS, and JavaScript, aimed at web designers and developers. Like the other editors we've come across, Liveweave gives users the ability to create, test, and share code with colleagues. It opens into distinct editor panels for HTML, CSS, and JavaScript, plus an output panel. One remarkable feature of Liveweave is its simplicity: a newbie could easily open it up, create a small app, and run it without reading any prior documentation.

Moreover, Liveweave supports a wide range of third-party libraries like Bootstrap, jQuery, Three.js, etc. This helps developers build more robust applications without library-support hindrances.

Liveweave is free to use and comes packed with over 20 handy JavaScript libraries. With Liveweave, you can do all of the following and more:

    • Toggle live preview in the editor panel
    • Toggle light and dark mode for better viewing
    • Use the built-in ruler for measurements and aesthetics
    • Collaborate with colleagues easily with the TeamUp feature
    • Download weaves easily with a button click
    • Work with SVG support
    • etc.

In this post we have discussed seven top online JavaScript editors you can use in 2019. Each playground has its unique abilities, so it's difficult to choose a "best" one among them. We can, however, choose the one whose features best solve our needs. Disclaimer: the seven online playgrounds discussed here are entirely my choice, based on my own experience with them. There are other amazing online playgrounds like Plunker, CSS Deck, Dabblet, etc. If you have other playgrounds in mind, please do share them with us in the comments below. Hopefully, this post will help you pick the most suitable one for your needs.