Planet MySQL

Invisible Indexes – MySQL 8.0

MySQL 8.0 has a rich set of features. One of the features that interests DBAs most is the invisible index.

What is an index in MySQL ?

  • Indexes are used to find rows with specific column values quickly. Without an index, MySQL must begin with the first row and then read through the entire table to find the relevant rows.


What is an invisible index ?

  • An invisible index is an index that has been marked as unavailable for use by the optimizer.
  • In MySQL 5.7 and earlier versions, indexes are always visible by default.
  • To control the visibility of a new index, use the VISIBLE or INVISIBLE keyword as part of the index definition.
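For example, the visibility keyword can be given directly in the table or index definition. A minimal sketch (the table t1 and the index names here are made up for illustration):

```sql
-- Hypothetical table: i_idx is visible (the default), j_idx is invisible
CREATE TABLE t1 (
  i INT,
  j INT,
  INDEX i_idx (i) VISIBLE,
  INDEX j_idx (j) INVISIBLE
);

-- The same keywords also work with CREATE INDEX
CREATE INDEX k_idx ON t1 (j) INVISIBLE;
```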

How to add an invisible index on an existing table ?

Syntax :

ALTER TABLE table_name ALTER INDEX index_name { VISIBLE | INVISIBLE };

Example :

mysql> alter table mydbops_test alter index idx_age invisible;
Query OK, 0 rows affected (2.43 sec)
Records: 0  Duplicates: 0  Warnings: 0

Table Structure :

mysql> show create table mydbops_test\G
*************************** 1. row ***************************
       Table: mydbops_test
Create Table: CREATE TABLE `mydbops_test` (
  `name` varchar(50) DEFAULT NULL,
  `age` int(10) DEFAULT NULL,
  `address` varchar(50) DEFAULT NULL,
  KEY `idx_name` (`name`),
  KEY `idx_age` (`age`) /*!80000 INVISIBLE */
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
1 row in set (0.00 sec)

Where can it be used ?

Invisible indexes can be used in the below cases.

Case 1 : Dropping an unused index

→ Indexes are great when the optimizer can find an optimal execution plan that uses them.

→ Heavy indexing on a table hurts the performance of modifications (INSERT, UPDATE queries), and too many indexes can also hurt read performance, because the optimizer needs to evaluate each of them during plan selection.

→ MySQL 5.7 introduced the sys schema, which makes it easy to find unused indexes.

→ The main drawback here is that it is difficult to judge cases with redundant indexes, where you expect that dropping one index will simply result in one of the other candidates being selected.

mysql> select * from sys.schema_unused_indexes where object_name='City';
+---------------+-------------+----------------+
| object_schema | object_name | index_name     |
+---------------+-------------+----------------+
| world         | City        | CountryCode    |
| world         | City        | idx_District   |
| world         | City        | idx_combo      |
| world         | City        | idx_Population |
+---------------+-------------+----------------+

Example Query :

SELECT District, SUM(Population) FROM City WHERE District LIKE '%York%' AND Population BETWEEN 670000 AND 700000 GROUP BY District;

Explain Plan :

+----+-------------+-------+------------+-------+------------------------+--------------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type  | possible_keys          | key          | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-------+------------+-------+------------------------+--------------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | City  | NULL       | index | idx_District,idx_combo | idx_District | 20      | NULL |   40 |    11.23 | Using where |
+----+-------------+-------+------------+-------+------------------------+--------------+---------+------+------+----------+-------------+

→ The query is using the idx_District index, yet idx_District still shows up as an unused index.

+---------------+-------------+--------------+
| object_schema | object_name | index_name   |
+---------------+-------------+--------------+
| world         | City        | CountryCode  |
| world         | City        | idx_District |
+---------------+-------------+--------------+

→ If this index were dropped, the query's performance would degrade.

→ In this case we can use an invisible index. When an index is invisible, the optimizer cannot use it.

→ The index is still actively maintained and kept up to date as data is modified, but no queries are permitted to use it.

→ So rather than dropping idx_District outright, I mark the index as unavailable for use by the optimizer and observe the impact first.
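If query performance does regress while the index is hidden, restoring it is instantaneous, since the index was maintained the whole time; otherwise the index can be dropped for good. A sketch of both outcomes:

```sql
-- Queries slowed down? Make the index available to the optimizer again
ALTER TABLE City ALTER INDEX idx_District VISIBLE;

-- No regression observed? Drop the index permanently
ALTER TABLE City DROP INDEX idx_District;
```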

Case 2 : Evaluating the optimizer behaviour

→ You have many indexes, but you are not sure which ones are unused. You can change an index to invisible and watch for any performance degradation. If it affects query performance, you can change it back immediately.

→ In some cases, only one query can use a specific index (a FULLTEXT index, for example); there, an invisible index can be a great solution for testing the effect of removing it.

Example :

In this case I am not sure which index will be used by the application queries.

Example Query :

select a.ID,a.Name,a.Population,b.Language,b.IsOfficial from City as a,CountryLanguage as b where b.CountryCode=a.ID and b.Percentage=0.9 and b.Language='English';

Table Structure :

mysql> show create table CountryLanguage\G
*************************** 1. row ***************************
       Table: CountryLanguage
Create Table: CREATE TABLE `CountryLanguage` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `CountryCode` char(3) NOT NULL DEFAULT '',
  `Language` char(30) NOT NULL DEFAULT '',
  `IsOfficial` enum('T','F') NOT NULL DEFAULT 'F',
  `Percentage` float(4,1) NOT NULL DEFAULT '0.0',
  PRIMARY KEY (`ID`),
  KEY `CountryCode` (`CountryCode`),
  KEY `idx_Percentage` (`Percentage`),
  KEY `idx_Language` (`Language`),
  CONSTRAINT `countryLanguage_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `Country` (`code`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

+----+-------------+-------+------------+--------+----------------------------------+--------------+---------+---------------------+------+----------+-------------+
| id | select_type | table | partitions | type   | possible_keys                    | key          | key_len | ref                 | rows | filtered | Extra       |
+----+-------------+-------+------------+--------+----------------------------------+--------------+---------+---------------------+------+----------+-------------+
|  1 | SIMPLE      | b     | NULL       | ref    | PRIMARY,CountryCode,idx_Language | idx_Language | 30      | const               | 6201 |    10.00 | Using where |
|  1 | SIMPLE      | a     | NULL       | eq_ref | PRIMARY                          | PRIMARY      | 4       | world.b.CountryCode |    1 |   100.00 | Using where |
+----+-------------+-------+------------+--------+----------------------------------+--------------+---------+---------------------+------+----------+-------------+

Then I make the idx_Language index invisible; after disabling it, the optimizer uses the idx_Percentage index and the number of rows scanned is also reduced.

mysql> alter table CountryLanguage alter index idx_Language invisible;
Query OK, 0 rows affected (0.15 sec)
Records: 0  Duplicates: 0  Warnings: 0

Explain Plan :

mysql> explain select a.ID,a.Name,a.Population,b.Language,b.IsOfficial from City as a,CountryLanguage as b where b.CountryCode=a.ID and Percentage=0.9 and Language='English';

+----+-------------+-------+------------+--------+-------------------------------------------------+----------------+---------+---------------------+------+----------+-----------------------+
| id | select_type | table | partitions | type   | possible_keys                                   | key            | key_len | ref                 | rows | filtered | Extra                 |
+----+-------------+-------+------------+--------+-------------------------------------------------+----------------+---------+---------------------+------+----------+-----------------------+
|  1 | SIMPLE      | b     | NULL       | ref    | PRIMARY,CountryCode,idx_Percentage,idx_Language | idx_Percentage | 4       | const               |   10 |     6.10 | Using index condition |
|  1 | SIMPLE      | a     | NULL       | eq_ref | PRIMARY                                         | PRIMARY        | 4       | world.b.CountryCode |    1 |   100.00 | Using where           |
+----+-------------+-------+------------+--------+-------------------------------------------------+----------------+---------+---------------------+------+----------+-----------------------+
2 rows in set, 0 warnings (0.00 sec)

Using Force Index :

→ In this case I have hidden the combo index and then tried to force the optimizer to use it.

Example :

mysql> explain SELECT District, SUM(Population) FROM City force index (idx_combo) WHERE District like '%York%' and Population BETWEEN 670000 AND 700000 GROUP BY District;
ERROR 1176 (42000): Key 'idx_combo' doesn't exist in table 'City'

It throws an error, because the index is not visible to the optimizer.

Note :

  • The primary key index can't be made invisible.
  • If a table has no primary key but does have a UNIQUE key on NOT NULL columns, the first such UNIQUE key is treated as an implicit primary key, so it can't be made invisible either.
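Attempting to hide such an index is rejected by the server. On a table with an explicit primary key (hypothetical table t1), the failure looks roughly like this (the exact error text may vary between 8.0 builds):

```sql
mysql> ALTER TABLE t1 ALTER INDEX `PRIMARY` INVISIBLE;
ERROR 3522 (HY000): A primary key index cannot be invisible
```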

→ MySQL 8.0.3 implemented a way for selected sessions to activate invisible indexes and measure their effect.
→ To make invisible indexes visible at the session level, we have to set the optimizer_switch flag use_invisible_indexes.

Example :

mysql> set session optimizer_switch = 'use_invisible_indexes=on';
Query OK, 0 rows affected (4.28 sec)

Find invisible indexes using information_schema :

→ Whether an index is visible or invisible is available from the IS_VISIBLE column of the information_schema.statistics table, or you can use SHOW INDEXES.

mysql> select table_schema,table_name,index_name,is_visible from information_schema.statistics where table_name='City';
+--------------+------------+----------------+------------+
| TABLE_SCHEMA | TABLE_NAME | INDEX_NAME     | IS_VISIBLE |
+--------------+------------+----------------+------------+
| world        | City       | CountryCode    | YES        |
| world        | City       | idx_combo      | NO         |
| world        | City       | idx_combo      | NO         |
| world        | City       | idx_District   | YES        |
| world        | City       | idx_Population | YES        |
| world        | City       | PRIMARY        | YES        |
+--------------+------------+----------------+------------+

Conclusion :

→ Invisible indexes are a great new feature in MySQL 8.0 that will be very helpful for MySQL users. In MySQL 5.7 we had to use the performance schema to find unused indexes and then remove them from the table.

→ In MySQL 8.0, invisible indexes make it much easier to check query performance and index usage before an index is dropped.

Credits : Photo by Robert McLay on Unsplash

MySQL Performance : Testing 8.0 with less blood..

This is just a short reminder about what to keep in mind when you're preparing some MySQL 8.0 performance testing (or any other 8.0 evaluation) and want to do it "with less blood" ;-))

So far, here is the list :

  • 8.0 is using UTF8 by default, so if you're expecting to compare apples-to-apples, configure it with "latin1" as it was before to compare to 5.7/5.6/etc. (or configure them all to UTF8 if your target is to compare UTF8)..
  • binlog is enabled by default, so mind to switch it OFF if it's not in your target..
  • SSL is ON by default (switch it OFF if not your target)
  • auto UNDO truncate is ON by default (if you prefer to avoid periodic spikes in background flushing activity due to UNDO auto-truncate, just switch this feature OFF (you'll still be able to invoke the same truncate manually whenever you need it))
  • there is a new default authentication plugin (and if you want to see your old apps still working with 8.0, just initialize your MySQL instance with the old plugin and keep it in your config file (NOTE: you may still switch plugins via ALTER))
  • InnoDB doublewrite fix is still NOT part of the published code, so unless your target is to really show how this missed fix is impacting your workload, switch it OFF (but still mind to bombard Sunny with complaining messages about how this fix is important for you ;-))
And now all these points in action :

MySQL 8.0 Server initialization :

$ ./bin/mysqld --user=mysql --datadir=./data --initialize-insecure \
    --default-authentication-plugin=mysql_native_password

to add to your my.conf file before you start your MySQL instance :

[mysqld]
...
ssl=0
default_authentication_plugin=mysql_native_password
skip_log_bin=1
character_set_server=latin1
collation_server=latin1_swedish_ci
skip-character-set-client-handshake
innodb_undo_log_truncate=off
innodb_doublewrite=0

And once started, you'll see your MySQL 8.0 instance running "similar to before", allowing you to evaluate it with the same tools you were using until now. After what, please, take your time to learn what "all these differences" are meaning, why they were introduced, and how properly to use them..

PURGE BINARY LOGS with a relative time

Sometimes you want to reduce disk usage on certain servers by adjusting the time that binary logs are kept.  Also, some installations of MySQL and MariaDB have suffered from a very-hard-to-catch bug where the binary logs end up not getting automatically expired (basically, the expire_logs_days option doesn’t always work effectively).

A workaround can be scripted, but typically the script would specify the exact datetime to which the logs need to be kept.  The reference manual and examples all do this too, quite explicitly, noting:

The datetime expression is in the format ‘YYYY-MM-DD hh:mm:ss’.

However, the actual command syntax is phrased as follows:

PURGE { BINARY | MASTER } LOGS { TO 'log_name' | BEFORE datetime_expr }

and that indicates much more flexibility in the parser: “datetime_expr” means that you can put in an arbitrary temporal expression!

So let’s test that, with a functional equivalent of expire_logs_days=14:
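A statement along these lines (with an arbitrary temporal expression in place of a literal datetime) does the trick:

```sql
mysql> PURGE BINARY LOGS BEFORE (NOW() - INTERVAL 14 DAY);
```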


And yep, that works (and the extra parentheses around the expression are not required, I just did that to show clearly what the expression is).

Now, I’m not the first person to use this construct, there are several posts online from recent years that use an expression with PURGE BINARY LOGS.  I’m not sure whether allowing datetime_expr is a modification that was made in the parser at some point, or whether it was always possible.  Fact is, the reference manual text (MariaDB as well as MySQL) only provide examples with an absolute ISO datetime: ‘YYYY-MM-DD HH:MM:SS’.

Check Out the Percona Live 2018 Live Stream!

Announcing the Percona Live 2018 live stream.

This year at Percona Live Open Source Database Conference 2018 we are live streaming the Keynote Talks on Day 1 and 2.

Percona is streaming the keynotes on Tuesday, April 24, 2018, and Wednesday, April 25, 2018 beginning at 9 AM PDT (both days). The keynote speakers include people from VividCortex, Upwork, Oracle, Netflix and many more. The keynote panels feature a cloud discussion and a cool technologies showcase.

Use the live stream link if you don’t want to miss a keynote, but can’t be at the main stage. The link for the live stream is:

The list of keynote talks and speakers for each day is:

Day 1

Day 2

The post Check Out the Percona Live 2018 Live Stream! appeared first on Percona Database Performance Blog.

MySQL Enterprise Edition Database Firewall – Control and Monitor SQL Statement Executions

As of MySQL 5.6.24, MySQL Enterprise Edition includes MySQL Enterprise Firewall, an application-level firewall (it runs within the mysql database process) that enables database administrators to permit or deny SQL statement execution based on matching against whitelists of accepted statement patterns. This helps harden MySQL Server against attacks such as SQL injection or attempts to exploit applications by using them outside of their legitimate query workload characteristics.

Each MySQL account registered with the firewall has its own whitelist of statement patterns (a tokenized representation of a SQL statement), enabling protection to be tailored per account. For a given account, the firewall can operate in recording or protecting mode, for training in the accepted statement patterns or protection against unacceptable statements. The diagram illustrates how the firewall processes incoming statements in each mode.

MySQL Enterprise Firewall Operation


If you do not have a MySQL Enterprise Edition license, you may download a trial version of the software via Oracle eDelivery. The MySQL Firewall is included in the MySQL Product Pack, specifically for MySQL Database 5.6.24 or higher.

MySQL Enterprise Firewall has these components:

  • A server-side plugin named MYSQL_FIREWALL that examines SQL statements before they execute and, based on its in-memory cache, renders a decision whether to execute or reject each statement.
  • Server-side plugins named MYSQL_FIREWALL_USERS and MYSQL_FIREWALL_WHITELIST implement INFORMATION_SCHEMA tables that provide views into the firewall data cache.
  • System tables named firewall_users and firewall_whitelist in the mysql database provide persistent storage of firewall data.
  • A stored procedure named sp_set_firewall_mode() registers MySQL accounts with the firewall, establishes their operational mode, and manages transfer of firewall data between the cache and the underlying system tables.
  • A set of user-defined functions provides an SQL-level API for synchronizing the cache with the underlying system tables.
  • System variables enable firewall configuration and status variables provide runtime operational information.


Installing the Firewall

Installing the firewall is fairly easy. After you install MySQL version 5.6.24 or greater, you simply execute an SQL script that is located in the $MYSQL_HOME/share directory. There are two versions of the script, one for Linux and one for Windows (the firewall isn’t supported on the Mac yet).

The scripts are named win_install_firewall.sql for Windows and linux_install_firewall.sql for linux. You may execute this script from the command line or via MySQL Workbench. For the command line, be sure you are in the directory where the script is located.

shell> mysql -u root -p mysql < win_install_firewall.sql
Enter password: (enter root password here)

The script creates the firewall tables, functions and stored procedures, and installs the necessary plugins. The script contains the following:

# Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved.
# Install firewall tables
USE mysql;
CREATE TABLE IF NOT EXISTS mysql.firewall_whitelist(
  USERHOST VARCHAR(80) NOT NULL,
  RULE text NOT NULL) engine= MyISAM;
CREATE TABLE IF NOT EXISTS mysql.firewall_users(
  USERHOST VARCHAR(80) PRIMARY KEY,
  MODE ENUM ('OFF', 'RECORDING', 'PROTECTING', 'RESET') DEFAULT 'OFF') engine= MyISAM;
INSTALL PLUGIN mysql_firewall SONAME 'firewall.dll';
INSTALL PLUGIN mysql_firewall_whitelist SONAME 'firewall.dll';
INSTALL PLUGIN mysql_firewall_users SONAME 'firewall.dll';
CREATE FUNCTION set_firewall_mode RETURNS STRING SONAME 'firewall.dll';
CREATE FUNCTION normalize_statement RETURNS STRING SONAME 'firewall.dll';
CREATE AGGREGATE FUNCTION read_firewall_whitelist RETURNS STRING SONAME 'firewall.dll';
CREATE AGGREGATE FUNCTION read_firewall_users RETURNS STRING SONAME 'firewall.dll';
delimiter //
CREATE PROCEDURE sp_set_firewall_mode (IN arg_userhost VARCHAR(80), IN arg_mode varchar(12))
BEGIN
  IF arg_mode = "RECORDING" THEN
    SELECT read_firewall_whitelist(arg_userhost,FW.rule)
      FROM mysql.firewall_whitelist FW WHERE FW.userhost=arg_userhost;
  END IF;
  SELECT set_firewall_mode(arg_userhost, arg_mode);
  IF arg_mode = "RESET" THEN
    SET arg_mode = "OFF";
  END IF;
  INSERT IGNORE INTO mysql.firewall_users VALUES (arg_userhost, arg_mode);
  UPDATE mysql.firewall_users SET mode=arg_mode WHERE userhost = arg_userhost;
  IF arg_mode = "PROTECTING" OR arg_mode = "OFF" THEN
    DELETE FROM mysql.firewall_whitelist WHERE USERHOST = arg_userhost;
    INSERT INTO mysql.firewall_whitelist
      SELECT USERHOST,RULE FROM INFORMATION_SCHEMA.mysql_firewall_whitelist
       WHERE USERHOST=arg_userhost;
  END IF;
END //
delimiter ;

After you run the script, the firewall should be enabled. You may verify it by running this statement:

mysql> SHOW GLOBAL VARIABLES LIKE 'mysql_firewall%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| mysql_firewall_max_query_size | 4096  |
| mysql_firewall_mode           | ON    |
| mysql_firewall_trace          | OFF   |
+-------------------------------+-------+

Testing the Firewall

To test the firewall, you may use a current mysql user, but we are going to create a test user for this example – webuser@localhost. (The user probably doesn’t need all privileges, but for this example we will grant everything to this user)

CREATE USER 'webuser'@'localhost' IDENTIFIED BY 'Yobuddy!';
GRANT ALL PRIVILEGES ON *.* TO 'webuser'@'localhost' WITH GRANT OPTION;

OPTIONAL: For our test, we will be using the sakila schema provided by MySQL. You may download the sakila database schema (requires MySQL 5.0 or later). If you don’t want to use the sakila database, you may use your own existing database or create a new database.

After downloading the sakila schema, you will have two files, named sakila-schema.sql and sakila-data.sql. Execute the sakila-schema.sql first, and then sakila-data.sql to populate the database with data. If you are using the command line, simply do the following: (substitute UserName for a mysql user name)

# mysql -uUserName -p < sakila-schema.sql
# mysql -uUserName -p < sakila-data.sql

After creating the sakila schema and importing the data, we now set the firewall to record those queries which we want to allow:

mysql> CALL `mysql`.`sp_set_firewall_mode`("webuser@localhost","RECORDING");
+-----------------------------------------------+
| read_firewall_whitelist(arg_userhost,FW.rule) |
+-----------------------------------------------+
| Imported users: 0  Imported rules: 0          |
+-----------------------------------------------+
1 row in set (0.14 sec)

+-------------------------------------------+
| set_firewall_mode(arg_userhost, arg_mode) |
+-------------------------------------------+
| OK                                        |
+-------------------------------------------+
1 row in set (0.22 sec)

Query OK, 5 rows affected (0.28 sec)

We can check to see the firewall mode via this statement, to be sure we are in the recording mode:

mysql> SELECT * FROM MYSQL.FIREWALL_USERS;
+-------------------+-----------+
| USERHOST          | MODE      |
+-------------------+-----------+
| webuser@localhost | RECORDING |
+-------------------+-----------+
1 row in set (0.02 sec)

Now that we have recording turned on, let’s run a few queries:

mysql> use sakila
Database changed
mysql> show tables;
+----------------------------+
| Tables_in_sakila           |
+----------------------------+
| actor                      |
| actor_info                 |
| address                    |
| category                   |
| city                       |
| country                    |
| customer                   |
| customer_list              |
| film                       |
| film_actor                 |
| film_category              |
| film_list                  |
| film_text                  |
| inventory                  |
| language                   |
| nicer_but_slower_film_list |
| payment                    |
| rental                     |
| sales_by_film_category     |
| sales_by_store             |
| staff                      |
| staff_list                 |
| store                      |
+----------------------------+
23 rows in set (0.00 sec)

mysql> select * from actor limit 2;
+----------+------------+-----------+---------------------+
| actor_id | first_name | last_name | last_update         |
+----------+------------+-----------+---------------------+
|        1 | PENELOPE   | GUINESS   | 2006-02-15 04:34:33 |
|        2 | NICK       | WAHLBERG  | 2006-02-15 04:34:33 |
+----------+------------+-----------+---------------------+
2 rows in set (0.13 sec)

mysql> select first_name, last_name from actor where first_name like 'T%';
+------------+-----------+
| first_name | last_name |
+------------+-----------+
| TIM        | HACKMAN   |
| TOM        | MCKELLEN  |
| TOM        | MIRANDA   |
| THORA      | TEMPLE    |
+------------+-----------+
4 rows in set (0.00 sec)

We turn off the recording by turning on the protection mode:

mysql> CALL `mysql`.`sp_set_firewall_mode`("webuser@localhost","PROTECTING");
+-------------------------------------------+
| set_firewall_mode(arg_userhost, arg_mode) |
+-------------------------------------------+
| OK                                        |
+-------------------------------------------+
1 row in set (0.00 sec)

We can check to see the firewall mode via this statement:

mysql> SELECT * FROM MYSQL.FIREWALL_USERS;
+-------------------+------------+
| USERHOST          | MODE       |
+-------------------+------------+
| webuser@localhost | PROTECTING |
+-------------------+------------+
1 row in set (0.02 sec)

And we can look at our whitelist of statements:

mysql> SELECT * FROM MYSQL.FIREWALL_WHITELIST;
+-------------------+------------------------------------------------------------------+
| USERHOST          | RULE                                                             |
+-------------------+------------------------------------------------------------------+
| webuser@localhost | SELECT * FROM actor LIMIT ?                                      |
| webuser@localhost | SELECT SCHEMA ( )                                                |
| webuser@localhost | SELECT first_name , last_name FROM actor WHERE first_name LIKE ? |
| webuser@localhost | SHOW TABLES                                                      |
+-------------------+------------------------------------------------------------------+
4 rows in set (0.00 sec)

The firewall is now protecting against non-whitelisted queries. We can execute a couple of the queries we previously ran, which should be allowed by the firewall.

mysql> show tables;
+----------------------------+
| Tables_in_sakila           |
+----------------------------+
| actor                      |
| actor_info                 |
| address                    |
| category                   |
| city                       |
| country                    |
| customer                   |
| customer_list              |
| film                       |
| film_actor                 |
| film_category              |
| film_list                  |
| film_text                  |
| inventory                  |
| language                   |
| nicer_but_slower_film_list |
| payment                    |
| rental                     |
| sales_by_film_category     |
| sales_by_store             |
| staff                      |
| staff_list                 |
| store                      |
+----------------------------+
23 rows in set (0.01 sec)

Now we run two new queries, which should be blocked by the firewall.

mysql> select * from rental;
ERROR 1045 (42000): Firewall prevents statement
mysql> select * from staff;
ERROR 1045 (42000): Firewall prevents statement

The server will write an error message to the log for each statement that is rejected. Example:

2015-03-21T22:59:05.371772Z 14 [Note] Plugin MYSQL_FIREWALL reported: 'ACCESS DENIED for webuser@localhost. Reason: No match in whitelist. Statement: select * from rental '

You can use these log messages in your efforts to identify the source of attacks.

To see how much firewall activity you have, you may look at the status variables:

mysql> SHOW GLOBAL STATUS LIKE 'Firewall%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Firewall_access_denied  | 42    |
| Firewall_access_granted | 55    |
| Firewall_cached_entries | 78    |
+-------------------------+-------+

The variables indicate the number of statements rejected, accepted, and added to the cache, respectively.

The MySQL Enterprise Firewall reference is found in the MySQL documentation.


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] and on LinkedIn. Tony is the author of Twenty Forty-Four: The League of Patriots 

Visit for more information.

MySQL Document Store CRUD Quick Start

This post serves as a quick start guide on how to use the MySQL Document Store with the official Node.js connector. It mainly highlights how easy it is to leverage CRUD-style operations and getting up and running with the X DevAPI.

Before you jump in

Make sure you have Node.js 7.6.0 (or higher) and MySQL 8.0.11 (or higher) installed on your machine.

Setting up your Node.js project

First, using the command line, let’s start by creating a directory for our sample application.

$ mkdir myproject
$ cd myproject

Create a package.json manifest file using the following command:

$ npm init -y

Next, install the MySQL Node.js Connector from npm:

$ npm install --save-exact @mysql/xdevapi

Starting the MySQL Server

For additional tips on how to install and secure your MySQL server, check the documentation.

  1. Download (if you have not yet) MySQL 8.0
  2. Install and set up a basic (and insecure) server process.

$ mysqld --initialize-insecure
$ mysqld

We are using --initialize-insecure here for the sake of simplicity; however, make sure you use --initialize (and set up users and privileges accordingly) in production environments and/or “real-world” scenarios.

Connecting to MySQL

Create a new app.js file and write the following code for establishing a connection to the MySQL server using the myproject database/schema (which will be created by default).

const mysqlx = require('@mysql/xdevapi');
const assert = require('assert');

// Database/Schema Name
const dbName = 'myproject';

// Connection URL
const url = `mysqlx://root@localhost:33060/${dbName}`;

const main = async function () {
  try {
    // Establish the server connection
    const session = await mysqlx.getSession(url);
    console.log('Successful server connection');

    const db = session.getSchema(dbName);

    session.close();
  } catch (err) {
    console.error(err.stack);
    process.exit(1);
  }
};

main();


The getSession method returns a JavaScript Promise which resolves with a Session object containing the connection details, or fails with a Node.js Error if the connection could not be established.

Run the app using the following command:

$ node app.js

You should be able to see a Successful server connection message printed in the console.

Inserting documents

Use the following code to create (or re-use if it exists) a documents collection and add three new documents to it.

const insertDocuments = async function (db) {
  // Create or re-use the collection
  const collection = await db.createCollection('documents', { ReuseExistingObject: true });
  // Add some documents
  const result = await collection.add([{ a: 1 }, { a: 2 }, { a: 3 }]).execute();

  assert.equal(3, result.getAffectedItemsCount());
  console.log('Inserted 3 documents into the collection');

  return result;
};


The add method returns a Promise which resolves with a Result object containing details such as:
  • the number of items affected by the operation
  • a list of document _ids auto-generated by the server
  • any server-side warnings resulting from the operation
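As a rough sketch of how those details can be read off the resolved object — getAffectedItemsCount is used above, while getGeneratedIds and getWarnings are the connector's accessors for the other two items (the mock object below is an assumption standing in for a live server result):

```javascript
// Summarize the interesting details of a Result-like object.
function summarizeResult(result) {
  return {
    affected: result.getAffectedItemsCount(), // number of items affected
    ids: result.getGeneratedIds(),            // auto-generated document _ids
    warnings: result.getWarnings()            // server-side warnings, if any
  };
}

// Mock standing in for the object resolved by collection.add(...).execute()
const mockResult = {
  getAffectedItemsCount: () => 3,
  getGeneratedIds: () => ['0001', '0002', '0003'],
  getWarnings: () => []
};

console.log(summarizeResult(mockResult)); // affected: 3, three ids, no warnings
```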

Add an insertDocuments call in the existing main function like the following:

const main = async function () {
  try {
    // Establish the server connection
    const session = await mysqlx.getSession(url);
    console.log('Successful server connection');

    const db = session.getSchema(dbName);

    await insertDocuments(db);

    session.close();
  } catch (err) {
    console.error(err.stack);
    process.exit(1);
  }
};

Running the app should yield the following output:

Successful server connection Inserted 3 documents into the collection

Retrieving all documents

You can use the following code to retrieve all the existing documents in the collection.

const findDocuments = async function (db) {
  const docs = [];
  // Get the local collection instance
  const collection = db.getCollection('documents');
  // Get all documents from the collection
  await collection.find().execute(function (doc) {
    docs.push(doc);
  });

  console.log('Found the following documents');
  console.log(docs);

  return docs;
};

To collect and process records from the result set, you should provide a callback to the execute method.

Use the findDocuments method in the main function.

const main = async function () {
  try {
    // Establish the server connection
    const session = await mysqlx.getSession(url);
    console.log('Successful server connection');

    const db = session.getSchema(dbName);

    await insertDocuments(db);
    await findDocuments(db);

    session.close();
  } catch (err) {
    console.error(err.stack);
    process.exit(1);
  }
};

Finding documents with a given criteria

You can narrow down the results by specifying a query criteria like the following:

const findDocuments = async function (db) {
  const docs = [];
  // Get the local collection instance
  const collection = db.getCollection('documents');
  // Find documents based on a specific criteria
  await collection.find('a = 3').execute(function (doc) {
    docs.push(doc);
  });

  console.log('Found the following documents');
  console.log(docs);

  return docs;
};

The result set will now include just the documents containing 'a': 3.

Updating documents

To modify an existing document, i.e. updating specific properties or adding new ones, you can also provide a criteria (or true to modify all documents) for the operation.

const updateDocument = async function (db) {
  // Get the local collection instance
  const collection = db.getCollection('documents');
  // Update the document with a equal to 2: set b equal to 1
  const result = await collection.modify('a = 2').set('b', 1).execute();

  assert.equal(1, result.getAffectedItemsCount());
  console.log('Updated the document with the field a equal to 2');

  return result;
};

Update the main function to use the updateDocument method like the following:

const main = async function () {
  try {
    // Establish the server connection
    const session = await mysqlx.getSession(url);
    console.log('Successful server connection');
    const db = session.getSchema(dbName);
    await insertDocuments(db);
    await updateDocument(db);
    session.close();
  } catch (err) {
    console.error(err.stack);
    process.exit(1);
  }
};

Removing documents

Removing documents from a collection can also be done based on a given criteria (or true to remove all). The following code removes the document where the field a is greater than 2.

const removeDocument = async function (db) {
  // Get the local collection instance
  const collection = db.getCollection('documents');
  // Delete documents where a is greater than 2
  const result = await collection.remove('a > 2').execute();
  assert.equal(1, result.getAffectedItemsCount());
  console.log('Removed the document with the field a greater than 2');
  return result;
};

To test this behavior, call the removeDocument method in the main function.

const main = async function () {
  try {
    // Establish the server connection
    const session = await mysqlx.getSession(url);
    console.log('Successful server connection');
    const db = session.getSchema(dbName);
    await insertDocuments(db);
    await updateDocument(db);
    await removeDocument(db);
    session.close();
  } catch (err) {
    console.error(err.stack);
    process.exit(1);
  }
};

Creating secondary indexes

You can create a secondary index for any property of a document in a collection using the following code:

const createIndex = async function (db) {
  // Get the local collection instance
  const collection = db.getCollection('documents');
  // Create the index
  await collection.createIndex('idx_a', { fields: [{ field: '$.a', type: 'INT' }] });
  console.log('Created an INT index idx_a for the field a');
};

Just like before, update the main function to use the createIndex method.

const main = async function () {
  try {
    // Establish the server connection
    const session = await mysqlx.getSession(url);
    console.log('Successful server connection');
    const db = session.getSchema(dbName);
    await insertDocuments(db);
    await createIndex(db);
    session.close();
  } catch (err) {
    console.error(err.stack);
    process.exit(1);
  }
};

Wrapping up

This is just an overview of some of the features and API methods for tapping into the MySQL document store using the Node.js connector and the X DevAPI. For additional details, check the following links:

Upgrading to MySQL 8.0 : Default Authentication Plugin Considerations

If you are upgrading your server to MySQL 8.0 and observe that your application is experiencing errors related to the caching_sha2_password plugin, it is likely because your clients/connectors do not (yet) support the caching_sha2_password plugin. To resolve this issue, you may consider using mysql_native_password as the default authentication plugin for the MySQL 8.0 server.…

MariaDB 5.5.60 now available

The MariaDB Foundation is pleased to announce the immediate availability of MariaDB 5.5.60. This is a stable (GA) release. See the release notes and changelog for details. Download MariaDB 5.5.60 Release Notes Changelog What is MariaDB 5.5? MariaDB APT and YUM Repository Configuration Generator Contributors to MariaDB 5.5.60 Alexander Barkov (MariaDB Corporation) Alexey Botchkov (MariaDB […]


MySQL High availability with HAProxy, Consul and Orchestrator


In this post we will explore one approach to MySQL high availability with HAProxy, Consul and Orchestrator.
Let’s briefly go over each piece of the puzzle first:
– HAProxy is usually installed on the application servers or an intermediate connection layer, and is in charge of connecting the application to the appropriate backend (reader or writer). The most common deployment I’ve seen is to have separate ports for writes (which are routed to the master) and reads (which are load balanced over a pool of slaves).
– Orchestrator’s role is to monitor the topology and perform auto recovery as needed.
The key piece here is how we can make HAProxy aware that a topology change has happened, and the answer lies within Consul (and Consul templates).
– Consul is meant to be told the identity of the new master by Orchestrator. By leveraging Consul templates, we can then in turn propagate that information to HAProxy.

Proof of concept

For this POC, I installed 3 test servers which run both MySQL and Consul: mysql1, mysql2 and mysql3. On server mysql3 I also installed HAProxy, Orchestrator and Consul-template.

Installing Consul

1. Install Consul on mysql1, mysql2 and mysql3

$ sudo yum -y install unzip
$ sudo useradd consul
$ sudo mkdir -p /opt/consul
$ sudo touch /var/log/consul.log
$ cd /opt/consul
$ sudo wget
$ sudo unzip
$ sudo ln -s /opt/consul/consul /usr/local/bin/consul
$ sudo chown consul:consul -R /opt/consul* /var/log/consul.log

2. Bootstrap the Consul cluster from one node. I’ve picked mysql3 here:

$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql3",
  "server": true,
  "ui": true,
  "bootstrap": true,
  "client_addr": "",
  "advertise_addr": ""
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'

3. Start Consul on mysql1 and have it join the cluster

$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql1",
  "server": true,
  "ui": true,
  "bootstrap": false,
  "client_addr": "",
  "advertise_addr": ""
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join

4. Start Consul on mysql2 and have it join the cluster

$ sudo vi /etc/consul.conf.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "mysql2",
  "server": true,
  "ui": true,
  "bootstrap": false,
  "client_addr": "",
  "advertise_addr": ""
}
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d > /var/log/consul.log &'
$ consul join

At this point we have a working 3 node consul cluster. We can test writing k/v pairs to it and retrieving them back:

$ consul kv put foo bar
Success! Data written to: foo
$ consul kv get foo
bar

Configuring Orchestrator to write to Consul

Luckily, Orchestrator has built-in support for Consul so there is little assembly required there. The only caveat is we need to have Orchestrator populate the values in Consul manually by calling orchestrator-client the first time. This is because Orchestrator will only write the values each time there is a master change.

1. Configure Orchestrator to write to Consul on each master change. Add the following lines to Orchestrator config.

$ vi /etc/orchestrator.conf.json
"KVClusterMasterPrefix": "mysql/master",
"ConsulAddress": "",

2. Restart Orchestrator

$ service orchestrator restart

3. Populate the current master value manually

$ orchestrator-client -c submit-masters-to-kv-stores

4. Check the stored values from command line

$ consul kv get mysql/master/testcluster
mysql1:3306

Using Consul template to manage HAProxy

Since we have HAProxy running on mysql3, we need to install Consul template on that host to manage the HAProxy config. The idea is to have Consul template dynamically render the HAProxy config file from a template, and reload HAProxy when there are changes to its configuration.
For HAProxy, I am setting up two different pools here: the master is reachable through HAProxy via port 3307, while the slaves are accessible over 3308.

1. Install Consul template on mysql3

$ mkdir /opt/consul-template
$ cd /opt/consul-template
$ sudo wget
$ sudo unzip
$ sudo ln -s /opt/consul-template/consul-template /usr/local/bin/consul-template

2. Create a template for HAProxy config file

$ vi /opt/consul-template/templates/haproxy.ctmpl
global
    log local0
    log local1 notice
    maxconn 4096
    chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend writer-front
    bind *:3307
    mode tcp
    default_backend writer-back

frontend stats-front
    bind *:80
    mode http
    default_backend stats-back

frontend reader-front
    bind *:3308
    mode tcp
    default_backend reader-back

backend writer-back
    mode tcp
    option httpchk
    server master {{key "mysql/master/testcluster"}} check port 9200 inter 12000 rise 3 fall 3

backend stats-back
    mode http
    balance roundrobin
    stats uri /haproxy/stats
    stats auth user:pass

backend reader-back
    mode tcp
    balance leastconn
    option httpchk
    server slave1 check port 9200 inter 12000 rise 3 fall 3
    server slave2 check port 9200 inter 12000 rise 3 fall 3
    server master check port 9200 inter 12000 rise 3 fall 3

3. Create consul template config file

$ vi /opt/consul-template/config/consul-template.cfg
consul {
  auth {
    enabled = false
  }
  address = ""
  retry {
    enabled = true
    attempts = 12
    backoff = "250ms"
    max_backoff = "1m"
  }
  ssl {
    enabled = false
  }
}
reload_signal = "SIGHUP"
kill_signal = "SIGINT"
max_stale = "10m"
log_level = "info"
wait {
  min = "5s"
  max = "10s"
}
template {
  source = "/opt/consul-template/templates/haproxy.ctmpl"
  destination = "/etc/haproxy/haproxy.cfg"
  command = "sudo service haproxy reload || true"
  command_timeout = "60s"
  perms = 0600
  backup = true
  wait = "2s:6s"
}

4. Give sudo permissions to consul-template so it can reload haproxy

$ sudo vi /etc/sudoers
consul ALL=(root) NOPASSWD:/usr/bin/lsof, ...,/sbin/service haproxy reload

5. Start consul template

$ nohup /usr/local/bin/consul-template -config=/opt/consul-template/config/consul-template.cfg > /var/log/consul-template/consul-template.log 2>&1 &

And that is all the pieces we need. The next step is doing a master change (e.g. via Orchestrator GUI) and seeing the effects:

[root@mysql3 config]$ tail -f /var/log/consul-template/consul-template.log
2018/04/17 12:56:25.863912 [INFO] (runner) rendered "/opt/consul-template/templates/haproxy.ctmpl" => "/etc/haproxy/haproxy.cfg"
2018/04/17 12:56:25.864024 [INFO] (runner) executing command "sudo service haproxy reload || true" from "/opt/consul-template/templates/haproxy.ctmpl" => "/etc/haproxy/haproxy.cfg"
2018/04/17 12:56:25.864078 [INFO] (child) spawning: sudo service haproxy reload
Redirecting to /bin/systemctl reload haproxy.service

What happened? Orchestrator updated the K/V in Consul, Consul template detected the change and updated the HAProxy config file in turn, then reloaded HAProxy.
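A minimal JavaScript sketch of the rendering step Consul template performs (the function name and regex are ours, not consul-template's actual implementation): placeholders of the form {{key "path"}} are replaced with the current KV values, and a changed render triggers the reload command.

```javascript
// Replace {{key "some/path"}} placeholders with values from a KV map.
function renderConfig(template, kv) {
  return template.replace(/\{\{key "([^"]+)"\}\}/g, (_, k) => kv[k]);
}

const tmpl = 'server master {{key "mysql/master/testcluster"}} check';
console.log(renderConfig(tmpl, { 'mysql/master/testcluster': 'mysql1:3306' }));
// server master mysql1:3306 check

// After a master change, Orchestrator writes the new value to Consul,
// so the next render produces a different config (and a reload).
console.log(renderConfig(tmpl, { 'mysql/master/testcluster': 'mysql2:3306' }));
// server master mysql2:3306 check
```
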


HAProxy is still being widely used as a proxy/load balancer in front of MySQL, so it’s nice to be able to combine it with Orchestrator and Consul to put together a high availability solution.

While this is a viable alternative, for a new deployment I usually recommend going with ProxySQL instead. For one, you have the benefit of graceful switchover without returning any errors to the application. The setup is also a bit easier, as there are fewer moving parts with ProxySQL Cluster (one could get rid of Consul). Finally, having a SQL-aware proxy opens up more interesting possibilities like r/w splitting and query mirroring.



NOWAIT, WAIT and SKIP LOCKED are syntaxes added in MySQL 8.0 and MariaDB 10.3. The idea came from AliSQL (MySQL fork by Alibaba). It was revisited in MySQL, and I am not aware if MariaDB used the original implementation.

While MySQL and MariaDB syntaxes are similar, there are important differences and the compatibility is only apparent. This article discusses these differences.


WAIT

This syntax is only available in MariaDB. It means that, if a row or table that we want to read is write-locked, we can wait up to the specified number of seconds. If the lock is not released after the timeout occurs, the query will fail.


NOWAIT

If a table or row we need to read is write-locked, the query will not be queued; instead, it will fail immediately.


  • MariaDB supports this syntax for some DDL statements (ALTER TABLE and its shortcuts), LOCK TABLES, and SELECT. MySQL only supports it for SELECT.
  • MySQL only supports this syntax in combination with FOR UPDATE or FOR SHARE. In order to introduce an incompatibility, they sacrificed support of this feature for SELECTs in SERIALIZABLE mode, which have an implicit LOCK IN SHARE MODE clause. Fortunately this is an edge case, but it is another case where Oracle marketing strategies affect users in a bad way.
  • MySQL implements FOR UPDATE OF and FOR SHARE OF. This is interesting, and not only for the NOWAIT feature, because it allows us to JOIN multiple tables without locking them all. Thanks, Oracle engineers.
  • MySQL and MariaDB report different error codes and messages.
    MySQL says: ERROR 3572 (HY000): Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set
    MariaDB says: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

SKIP LOCKED

This is only in MySQL. It excludes locked rows from the result set, making our queries fast. The documentation warns us that the result set will be inconsistent. This is implicit in the feature, but it is worth emphasizing. However, consistency is not always so important, and skipping locked rows seems to me a great way to solve some performance problems.
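A toy model of the SKIP LOCKED semantics, using plain objects rather than InnoDB locks (names are ours, purely for illustration): a reader returns only the rows that are not currently locked by another session, instead of waiting for them.

```javascript
// Rows with a "locked" flag standing in for rows locked by another session.
const rows = [
  { id: 1, locked: true },
  { id: 2, locked: false },
  { id: 3, locked: false },
];

// SKIP LOCKED: locked rows are silently excluded, not waited on.
const visible = rows.filter(r => !r.locked).map(r => r.id);
console.log(visible); // [ 2, 3 ]
```
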

My conclusions

MariaDB's implementation of NOWAIT is clearly more complete. They have the WAIT syntax to set a timeout; they implemented the syntax in more SQL statements; and the syntax is compatible with the implicit LOCK IN SHARE MODE.

Despite this, IMHO, MySQL wins. They have SKIP LOCKED, which is very interesting. Also the above mentioned FOR UPDATE OF syntax is a nice feature.

Once again, Oracle spent some resources to add incompatibilities. This does not affect the quality of MySQL, but it still damages the community, which should be able to use both MySQL and MariaDB in the same environments, with the same tools and the same libraries. But the distance between these DBMSs is growing constantly.



MySQL InnoDB Cluster – What’s new in the 8.0 GA Release

The MySQL Development Team is proud to announce the second GA release of InnoDB cluster!

We listened carefully to the feedback we got from the previous major release (1.0 GA) and the last release candidate (8.0.4 RC) and incorporated many of the suggested changes.…

MySQL InnoDB Cluster 8.0 GA is Available Now!

The MySQL Development Team is proud and happy to announce the General Availability release of InnoDB Cluster 8.0! This is an extremely exciting release of our integrated, native, full stack High Availability (HA) solution for MySQL.

InnoDB Cluster provides an out-of-the-box, easy-to-use, built-in HA and scaling solution for MySQL by tightly integrating the following GA components:

  • MySQL 8.0.11+ Servers with Group Replication, to provide the data replication mechanism within InnoDB clusters, ensuring fault tolerance, automated failover, and elasticity.

MyRocks, malloc and fragmentation -- a strong case for jemalloc

While trying to reproduce a MyRocks performance problem I ran a test using a 4gb block cache and tried both jemalloc and glibc malloc. The test server uses Ubuntu 16.04 which has glibc 2.23 today. The table below lists the VSZ and RSS values for the mysqld process after a test table has been loaded. RSS with glibc malloc is 2.6x larger than with jemalloc. MyRocks and RocksDB are much harder on an allocator than InnoDB and this test shows the value of jemalloc.

VSZ(gb)  RSS(gb)  malloc
7.9      4.8      jemalloc-3.6.0
13.6     12.4     glibc-2.23
I am not sure that it is possible to use a large RocksDB block cache with glibc malloc, where large means that it gets about 80% of RAM.

I previously shared results for MySQL and for MongoDB. There have been improvements over the past few years to make glibc malloc perform better on many-core servers. I don't know whether that work also made it better at avoiding fragmentation.


Let's start by combining one new MariaDB feature -- VALUES (5) which is the standard equivalent of the old non-standard "SELECT 5" -- with another new feature in MariaDB 10.3.5, the FOR ... END FOR statement.

mariadb> BEGIN NOT ATOMIC
      > DECLARE v INT DEFAULT 0;
      > FOR a IN (VALUES (7)) DO SET v = v + 1; END FOR;
      > SELECT v;
      > END;
Error 2013 (HY000) Lost connection to MySQL server during query

Oops. However, the MariaDB folks now know about this, it's bug MDEV-15940, and they know about the other bugs that I'll mention in this post, so there's no problem provided you wait for the next version.

FOR ... END FOR works prettily so I decided to describe it. The official documentation isn't out yet so this could change.

FOR x IN (subquery) DO ... END FOR

BEGIN NOT ATOMIC
  DECLARE v INT DEFAULT 0;
  FOR a IN (SELECT 5) DO SET v = v + 1; END FOR;
  SELECT v;
END;

The SELECT will return '1' because the SET statement happened once, because SELECT 5 returns 1 row. We must put (SELECT 5) inside parentheses because there must be a way to find where it ends -- the word DO is not reserved and therefore is useless as a marker for the parser.

FOR x IN cursor DO ... END FOR

BEGIN NOT ATOMIC
  DECLARE v2, v3 INT DEFAULT 0;
  DECLARE cn CURSOR FOR SELECT s1 FROM t;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET v3 = 1;
  FOR cn_record IN cn DO SET v2 = v2 + cn_record.s1; END FOR;
  SELECT v2, v3;
END;

FOR cn_record IN cn means "loop with cursor cn, which was declared earlier". And cn_record according to MariaDB is a row variable that can be used within the loop, so cn_record.s1 is the value of s1 within a given row of table t.

Given that table t has 3 rows containing (1),(2),(3), and given that the obvious intent is that v2 will end up with the sum, you might think the SELECT will return v2 = 6, eh? Well, you're thinking right, it does. This is a way to go through a cursor, with each iteration doing an automatic FETCH.
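The implicit-fetch behavior can be sketched with a JavaScript analogue of the loop (an array stands in for the cursor's result set; names mirror the example above):

```javascript
// Stand-in for table t's rows, each exposing column s1.
const t = [{ s1: 1 }, { s1: 2 }, { s1: 3 }];

// Each iteration does an implicit fetch; cn_record is the current row.
let v2 = 0;
for (const cn_record of t) {
  v2 += cn_record.s1;
}
console.log(v2); // 6
```
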

And, since there's a CONTINUE HANDLER that says we'll set v3 = 1 when there are no more rows to fetch, you might think the SELECT will return v3 = 1, eh? Well, you're thinking reasonably (I think), but it doesn't. The handler isn't brought into play, the NOT FOUND condition is automatically cleared when the loop ends.

Summary: this kind of FOR does automatic cursor-open and cursor-fetch, but not automatic cursor-close. To me that looked like a bug, the way I interpret the standard document, "CLOSE cn;" is supposed to happen. And MariaDB agrees, it's bug MDEV-15941.

FOR x IN low-number .. high-number DO ... END FOR


SET @v = 0;
CREATE PROCEDURE p()
BEGIN
  FOR a IN 1 .. 3 DO SET @v = @v + a; END FOR;
  SELECT @v;
END;

MariaDB 10.3 comes with a boatload of Oracle/PostgreSQL syntax, often known as "PL/SQL" (their deviation from the standard "SQL/PSM" syntax which MySQL/MariaDB have followed in the past). But all the PL/SQL stuff is supplied if and only if you say

SET @@sql_mode='oracle';

The FOR ... END FOR statement is an exception, it works in the default mode too. That's deliberate. And it's equivalent to

SET a = 1;
WHILE a <= 3 DO
  SET @v = @v + a;
  SET a = a + 1;
END WHILE;

So the SELECT in the above example displays '6'.

The ".." is a new operator, vaguely similar to the "range operators" in languages like Ruby and Perl. But SQL already has a range operator, it's called BETWEEN. And now MariaDB won't be able to accept the SQL Server syntax where ".." is for qualifiers ("a..b" means "schema a + default table + column b").

I encountered a problem during the test, with:

FOR a IN (SELECT 1) .. 3 DO SET @v = @v + a; END FOR;

Technically this should be legal because "low-number" can be any expression, including a subquery. In this case, though, subqueries mean something else, so it is not legal. This is now bug MDEV-15944.

I enthusiastically recommend: don't use this, use WHILE.

Differences from the standard

My first example looked like this:

FOR a IN (SELECT 5) DO SET v = v + 1; END FOR;

In standard SQL it would have looked like this:

FOR (VALUES (5)) DO SET v = v + 1; END FOR;

... The standard doesn't require mentioning "a" when there is no need to refer to "a" inside the loop. The fact that the standard would use "VALUES (5)" is, as we've seen, something that MariaDB will soon support too.

My second example looked like this:

FOR cn_record IN cn DO SET v2 = v2 + cn_record.s1; END FOR;

In standard SQL it would have looked like this:

FOR cn_record AS cn DO SET v2 = v2 + s1; END FOR;

So it's a difference between "IN" and "AS", and between "cn_record.s1" and "s1". However, I could have said "cn_record.s1" in standard SQL too, it's optional when there's no ambiguity.

There are serious matters that underlie that innocent-looking difference with "cn_record.", about which I now digress.

Shadowing, a Digression

In the following, should "SELECT v FROM t;" display 1, or 2, or 3?

CREATE TABLE t (v INT);
INSERT INTO t VALUES (3);
BEGIN NOT ATOMIC
  a: BEGIN
    DECLARE v INT DEFAULT 1;
    b: BEGIN
      DECLARE v INT DEFAULT 2;
      SELECT v FROM t;
    END;
  END;
END;
Answer: in MariaDB it's 2. Inner scope beats outer scope, and variable declaration beats table definition. This is an old MySQL flaw that MariaDB has inherited (Bug #5967 Stored procedure declared variable used instead of column). The workaround, if you actually want "column v of table t", is to use a qualifier, as in "SELECT t.v FROM t;".

But what if you wanted to refer to the first declared variable? That would be easy too, in standard SQL you would qualify with the label, as in "SELECT a.v FROM t;". Unfortunately -- another old MySQL flaw that MariaDB has inherited -- you can't use [label.]variable_name for qualification.

As a result of these two flaws, we could have trouble in our FOR ... END FOR loop if we used unqualified names like "s1". Therefore in MariaDB you have to say "cn_record.s1" if you want the FOR variable, that is, qualification is compulsory. That's a solution.

But the solution depends on a third flaw.

In standard SQL in this FOR loop "cn_record" becomes a label, and that's why "cn_record.s1" would be legal -- cn_record is a label qualifier. That's not the case with Oracle/MariaDB, where "cn_record" is a variable with type = ROW -- cn_record is a row-variable qualifier. The effect is the same for the particular example that I used, but that's just luck.

The flaws do not mean that the implementor did something wrong for FOR ... END FOR, rather it means that the blame lies in MySQL's history. It would be grossly wrong to blame the standard, which has no flaws.

Differences from the Standard, continued

My third example looked like this:

FOR a IN 1 .. 3 DO SET @v = @v + a; END FOR;

In standard SQL it would have looked like this:

[empty prairie with crickets chirping in the darkness]

For more about the original plan for adding FOR ... END FOR to MySQL in a standard way, go to "WL#3309: Stored Procedures: FOR statement" and click "High-level architecture".


Of course we've changed Ocelot's GUI client for MySQL/MariaDB so that it recognizes most of the new syntax in MariaDB 10.3. We haven't changed the debugger, though, so it won't handle statements containing FOR ... END FOR.

MySQL 8.0 Resource Groups

MySQL 8.0 is out. Is this a great piece of news? No.

But MySQL 8.0 is wonderful, and this IS a great piece of news!

It has lots of interesting features, really. Oracle advertised some of them very well with talks at conferences, blog posts, etc. However, I am very pleased by a feature that they did not advertise at all: resource groups.

The documentation describes them in detail, but here is a small recap.

As we all know, MySQL has system (background) threads and user (foreground) threads. Until now, nothing could be done to change their priority. All we could do was tune InnoDB concurrency tickets to make sure that long-running queries don't prevent other queries from using CPU time. Basically, that means splitting the execution of a query to improve concurrency.

Now we can also affect threads priority. By default, all threads have the same priority. We can increase the priority of system threads, or decrease the priority of user threads. In any case, no user thread can have more priority than any system thread. Furthermore, we can limit the execution of some threads to one or more virtual CPUs of our choice.

How are these things done? By creating a resource group and setting THREAD_PRIORITY and VCPU attributes. THREAD_PRIORITY is a number (-20..0 for system threads, 0..19 for user threads; lower numbers have higher priority). VCPU is a virtual CPU number (see which ones are available with lscpu), or a range, or a list of numbers/ranges. Then, we can assign running threads to one of the groups we created.
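The priority ranges above can be sketched as a tiny validity check (an illustration with hypothetical names, not server code):

```javascript
// THREAD_PRIORITY ranges as described: -20..0 for SYSTEM groups,
// 0..19 for USER groups (lower numbers have higher priority).
function validPriority(type, p) {
  return type === 'SYSTEM' ? p >= -20 && p <= 0 : p >= 0 && p <= 19;
}

console.log(validPriority('SYSTEM', -10)); // true
console.log(validPriority('USER', -5));    // false: a user thread can never
                                           // outrank a system thread
```
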

For example, to create a group for system threads:

CREATE RESOURCE GROUP sys_group TYPE = SYSTEM VCPU = 0,1 THREAD_PRIORITY = -10;
To assign an existing thread to that group, check its id with SHOW PROCESSLIST, and then run something like:

SET RESOURCE GROUP sys_group FOR 1234;
Some possible ideas:

  • Increase priority of replication threads to reduce slave lag;
  • Increase priority of the event scheduler;
  • Decrease priority of long-running analytical queries.

I plan to spend some time to experiment what happens if we change the priority of specific system threads on a busy server. I expect this feature to be helpful during peaks of CPU usage.

Unfortunately, resource groups can only be assigned to running threads. There is no way to tell MySQL that connections from a certain user should use a certain group. However, we can achieve the same result by making a change in the applications. After connecting, the application can run:

SET RESOURCE GROUP my_group;

MySQL Document Store Document IDs.

Yesterday I was presenting on the MySQL Document Store and was asked if the _id field created by the server as an InnoDB primary key is a UUID. I knew that it was not a UUID but I had to hit the documentation to find out what the document ID really is -- a very interesting piece of information.

The Details

If you are inserting a document lacking a _id key, the server generates a value. The _id is made up of a unique prefix (4 bytes), a timestamp (8 bytes), and a serial number (16 bytes). The prefix is assigned by the InnoDB Cluster to help ensure uniqueness across a cluster. The timestamp is the encoded startup time of the server. The serial number uses the auto_increment_offset and auto_increment_increment server variables. From the manual page:

This document ID format ensures that:

  • The primary key value monotonically increments for inserts originating from a single server instance, although the interval between values is not uniform within a table.
  • When using multi-primary Group Replication or InnoDB cluster, inserts to the same table from different instances do not have conflicting primary key values, assuming that the instances have the auto_increment_* system variables configured properly.

Properties

Once set, the _id can not be changed to another value. Inserting your own value overrides the server assignment. And if you attempt to insert a document with an existing _id, the server will generate a duplicate primary key error.

The _id values must be always increasing and sequential for optimal InnoDB performance. The server keeps track of these numbers across restarts.

The generated _id values for each table/collection are unique across instances to avoid primary key conflicts and to minimize transaction certification in multi-primary Group Replication or InnoDB cluster environments.
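A rough sketch of the layout described above, purely illustrative: it assumes the prefix/timestamp/serial parts are concatenated as fixed-width hexadecimal fields (the widths of 4, 8 and 16 hex digits and the function name are our assumptions, not the server's actual code).

```javascript
// Compose a document ID from an instance prefix, a server startup
// timestamp and a per-table serial number, as fixed-width hex fields.
function makeDocumentId(prefix, startupTime, serial) {
  const hex = (n, width) => n.toString(16).padStart(width, '0');
  return hex(prefix, 4) + hex(startupTime, 8) + hex(serial, 16);
}

const id1 = makeDocumentId(0x1, 0x5ad9a1b2, 1);
const id2 = makeDocumentId(0x1, 0x5ad9a1b2, 2);
console.log(id1.length); // 28
console.log(id1 < id2);  // true: later inserts from the same instance
                         // sort after earlier ones (monotonic)
```
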

Required?

So, you are adding a document to a collection and you get an ERROR: 5115! That means, in the following case, that the _id key/value pair is needed:

JS> db.foobar.add(
-> {
-> Data: "This is a test!"
-> }
-> )
ERROR: 5115: Document is missing a required field

JS> db.foobar.add( { Data: "This is a test!" ,
 -> _id: "first" } )
Query OK, 1 item affected (0.0076 sec)

Fedora Install unixODBC

I encountered a problem while running the RODBC library from the R prompt as the root user, as follows:

> install.packages('RODBC')

It failed with the following library dependency:

checking for unistd.h... yes
checking sql.h usability... no
checking sql.h presence... no
checking for sql.h... no
checking sqlext.h usability... no
checking sqlext.h presence... no
checking for sqlext.h... no
configure: error: "ODBC headers sql.h and sqlext.h not found"
ERROR: configuration failed for package ‘RODBC’
* removing ‘/usr/lib64/R/library/RODBC’
The downloaded source packages are in
	‘/tmp/RtmpdT1gay/downloaded_packages’
Updating HTML index of packages in '.Library'
Making 'packages.html' ... done
Warning message:
In install.packages("RODBC") :
  installation of package ‘RODBC’ had non-zero exit status

I installed unixODBC-devel and unixODBC-gui-qt libraries to fix the library dependencies with the following command as the root user:

yum install -y unixODBC*

It should show you the following when it installs the unixODBC-devel and unixODBC-gui-qt libraries:

Loaded plugins: langpacks, refresh-packagekit You need to be root to perform this command. [student@localhost ~]$ su - root Password: Last login: Fri Apr 20 21:18:56 PDT 2018 on pts/1 [root@localhost ~]# yum install -y unixODBC* Loaded plugins: langpacks, refresh-packagekit cassandra/signature | 819 B 00:00 cassandra/signature | 2.9 kB 00:00 !!! fedora/20/x86_64/metalink | 3.3 kB 00:00 mysql-connectors-community | 2.5 kB 00:00 mysql-tools-community | 2.5 kB 00:00 mysql56-community | 2.5 kB 00:00 [Errno 14] HTTP Error 404 - Not Found Trying other mirror. updates/20/x86_64/metalink | 3.1 kB 00:00 Package unixODBC-2.3.2-4.fc20.x86_64 already installed and latest version Resolving Dependencies --> Running transaction check ---> Package unixODBC-devel.x86_64 0:2.3.2-4.fc20 will be installed ---> Package unixODBC-gui-qt.x86_64 0:0-0.8.20120105svn98.fc20 will be installed --> Processing Dependency: for package: unixODBC-gui-qt-0-0.8.20120105svn98.fc20.x86_64 --> Processing Dependency: for package: unixODBC-gui-qt-0-0.8.20120105svn98.fc20.x86_64 --> Processing Dependency: for package: unixODBC-gui-qt-0-0.8.20120105svn98.fc20.x86_64 --> Processing Dependency: for package: unixODBC-gui-qt-0-0.8.20120105svn98.fc20.x86_64 --> Running transaction check ---> Package qt.x86_64 1:4.8.6-30.fc20 will be installed --> Processing Dependency: qt-common = 1:4.8.6-30.fc20 for package: 1:qt-4.8.6-30.fc20.x86_64 --> Processing Dependency: qt-settings for package: 1:qt-4.8.6-30.fc20.x86_64 ---> Package qt-assistant-adp.x86_64 0:4.6.3-6.fc20 will be installed ---> Package qt-x11.x86_64 1:4.8.6-30.fc20 will be installed --> Processing Dependency: for package: 1:qt-x11-4.8.6-30.fc20.x86_64 --> Processing Dependency: for package: 1:qt-x11-4.8.6-30.fc20.x86_64 --> Running transaction check ---> Package clucene09-core.x86_64 0:0.9.21b-13.fc20 will be installed ---> Package libmng.x86_64 0:1.0.10-12.fc20 will be installed ---> Package qt-common.noarch 1:4.8.6-30.fc20 will be installed ---> Package 
qt-settings.noarch 0:20-18.fc20 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch     Version                    Repository   Size
================================================================================
Installing:
 unixODBC-devel     x86_64   2.3.2-4.fc20               updates      55 k
 unixODBC-gui-qt    x86_64   0-0.8.20120105svn98.fc20   fedora      624 k
Installing for dependencies:
 clucene09-core     x86_64   0.9.21b-13.fc20            updates     300 k
 libmng             x86_64   1.0.10-12.fc20             fedora      166 k
 qt                 x86_64   1:4.8.6-30.fc20            updates     4.7 M
 qt-assistant-adp   x86_64   4.6.3-6.fc20               fedora      257 k
 qt-common          noarch   1:4.8.6-30.fc20            updates     5.8 k
 qt-settings        noarch   20-18.fc20                 updates      19 k
 qt-x11             x86_64   1:4.8.6-30.fc20            updates      12 M

Transaction Summary
================================================================================
Install  2 Packages (+7 Dependent packages)

Total download size: 18 M
Installed size: 56 M
Downloading packages:
(1/9): libmng-1.0.10-12.fc20.x86_64.rpm                    | 166 kB  00:01
(2/9): clucene09-core-0.9.21b-13.fc20.x86_64.rpm           | 300 kB  00:01
(3/9): qt-4.8.6-30.fc20.x86_64.rpm                         | 4.7 MB  00:00
(4/9): qt-common-4.8.6-30.fc20.noarch.rpm                  | 5.8 kB  00:00
(5/9): qt-settings-20-18.fc20.noarch.rpm                   |  19 kB  00:00
(6/9): qt-assistant-adp-4.6.3-6.fc20.x86_64.rpm            | 257 kB  00:00
(7/9): qt-x11-4.8.6-30.fc20.x86_64.rpm                     |  12 MB  00:01
(8/9): unixODBC-devel-2.3.2-4.fc20.x86_64.rpm              |  55 kB  00:00
(9/9): unixODBC-gui-qt-0-0.8.20120105svn98.fc20.x86_64.rpm | 624 kB  00:01
--------------------------------------------------------------------------------
Total                                              4.1 MB/s |  18 MB  00:04
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction (shutdown inhibited)
  Installing : libmng-1.0.10-12.fc20.x86_64                                 1/9
  Installing : qt-settings-20-18.fc20.noarch                                2/9
  Installing : 1:qt-common-4.8.6-30.fc20.noarch                             3/9
  Installing : 1:qt-4.8.6-30.fc20.x86_64                                    4/9
  Installing : clucene09-core-0.9.21b-13.fc20.x86_64                        5/9
  Installing : 1:qt-x11-4.8.6-30.fc20.x86_64                                6/9
  Installing : qt-assistant-adp-4.6.3-6.fc20.x86_64                         7/9
  Installing : unixODBC-gui-qt-0-0.8.20120105svn98.fc20.x86_64              8/9
  Installing : unixODBC-devel-2.3.2-4.fc20.x86_64                           9/9
  Verifying  : clucene09-core-0.9.21b-13.fc20.x86_64                        1/9
  Verifying  : unixODBC-gui-qt-0-0.8.20120105svn98.fc20.x86_64              2/9
  Verifying  : 1:qt-x11-4.8.6-30.fc20.x86_64                                3/9
  Verifying  : 1:qt-4.8.6-30.fc20.x86_64                                    4/9
  Verifying  : qt-settings-20-18.fc20.noarch                                5/9
  Verifying  : 1:qt-common-4.8.6-30.fc20.noarch                             6/9
  Verifying  : unixODBC-devel-2.3.2-4.fc20.x86_64                           7/9
  Verifying  : qt-assistant-adp-4.6.3-6.fc20.x86_64                         8/9
  Verifying  : libmng-1.0.10-12.fc20.x86_64                                 9/9

Installed:
  unixODBC-devel.x86_64 0:2.3.2-4.fc20
  unixODBC-gui-qt.x86_64 0:0-0.8.20120105svn98.fc20

Dependency Installed:
  clucene09-core.x86_64 0:0.9.21b-13.fc20
  libmng.x86_64 0:1.0.10-12.fc20
  qt.x86_64 1:4.8.6-30.fc20
  qt-assistant-adp.x86_64 0:4.6.3-6.fc20
  qt-common.noarch 1:4.8.6-30.fc20
  qt-settings.noarch 0:20-18.fc20
  qt-x11.x86_64 1:4.8.6-30.fc20

Complete!

After installing the unixODBC-devel and unixODBC-gui-qt libraries, I installed the RODBC library from the R prompt, having launched the R environment as the root user:

> install.packages('RODBC')

The RODBC library should install cleanly and generate output like the following:

Installing package into ‘/usr/lib64/R/library’
(as ‘lib’ is unspecified)
trying URL ''
Content type 'application/x-gzip' length 1163967 bytes (1.1 MB)
==================================================
downloaded 1.1 MB

* installing *source* package ‘RODBC’ ...
** package ‘RODBC’ successfully unpacked and MD5 sums checked
checking for gcc... gcc -m64 -std=gnu99
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc -m64 -std=gnu99 accepts -g... yes
checking for gcc -m64 -std=gnu99 option to accept ISO C89... none needed
checking how to run the C preprocessor... gcc -m64 -std=gnu99 -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking sql.h usability... yes
checking sql.h presence... yes
checking for sql.h... yes
checking sqlext.h usability... yes
checking sqlext.h presence... yes
checking for sqlext.h... yes
checking for library containing SQLTables... -lodbc
checking for SQLLEN... yes
checking for SQLULEN... yes
checking size of long... 8
configure: creating ./config.status
config.status: creating src/Makevars
config.status: creating src/config.h
** libs
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I. -I/usr/local/include -fpic
  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
  --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
  -c RODBC.c -o RODBC.o
gcc -m64 -std=gnu99 -shared -L/usr/lib64/R/lib -Wl,-z,relro -o RODBC.so RODBC.o
  -lodbc -L/usr/lib64/R/lib -lR
installing to /usr/lib64/R/library/RODBC/libs
** R
** inst
** preparing package for lazy loading
** help
*** installing help indices
  converting help for package ‘RODBC’
    finding HTML links ... done
    RODBC-internal                          html
    RODBC-package                           html
    odbc                                    html
    odbcClose                               html
    odbcConnect                             html
    odbcDataSources                         html
    odbcGetInfo                             html
    odbcSetAutoCommit                       html
    setSqlTypeInfo                          html
    sqlColumns                              html
    sqlCopy                                 html
    sqlDrop                                 html
    sqlFetch                                html
    sqlQuery                                html
    sqlSave                                 html
    sqlTables                               html
    sqlTypeInfo                             html
** building package indices
** installing vignettes
** testing if installed package can be loaded
* DONE (RODBC)
Making 'packages.html' ... done

The downloaded source packages are in
  ‘/tmp/RtmpdT1gay/downloaded_packages’
Updating HTML index of packages in '.Library'
Making 'packages.html' ... done

I hope that helps anybody who runs into these library dependency problems.

MySQL adjustment bureau

When maintaining any piece of software, we usually deal with two kinds of actions:

  • bug fixing,
  • new features.
bugs and features

A bug happens when there is an error in the software, which does not behave according to the documentation or the specifications. In short, it's a breach of contract between the software maintainer and the users. The promise, i.e. the software API published at every major version, is broken, and the software must be reconciled with the expectations and fixed, so that it once again behaves as the documentation says. When we fix a bug in this way, we increment the revision number of the software version (e.g. 1.0.0 to 1.0.1; see semantic versioning).

New features, in turn, can be of two types:

  • Backward-compatible enhancements, which add value to the software without breaking existing functionality. This kind of change requires an increment of the minor indicator in the version (for example: 1.1.15 to 1.2.0).
  • Incompatible changes that break the existing behavior and require users to change their workflow. This kind of change requires bumping up the major number in the version (as in 2.1.3 to 3.0.0).
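The three bump rules above (bug fix → revision, compatible feature → minor, breaking change → major) can be sketched as a tiny helper. This is a toy illustration of semantic versioning, not part of any real release tool; the function name is made up:

```python
def bump(version, change):
    """Return the next semantic version for a given kind of change.

    change is one of: 'bugfix' (revision), 'feature' (minor), 'breaking' (major).
    """
    major, minor, patch = (int(x) for x in version.split("."))
    if change == "bugfix":        # behavior reconciled with the docs
        return f"{major}.{minor}.{patch + 1}"
    if change == "feature":       # backward-compatible enhancement
        return f"{major}.{minor + 1}.0"
    if change == "breaking":      # users must change their workflow
        return f"{major + 1}.0.0"
    raise ValueError(f"unknown change type: {change}")

print(bump("1.0.0", "bugfix"))    # 1.0.1
print(bump("1.1.15", "feature"))  # 1.2.0
print(bump("2.1.3", "breaking"))  # 3.0.0
```

The examples mirror the version numbers used in the text above.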
Not a bug, nor a feature, but an adjustment.

The above concepts seem simple enough: you either fix something that's broken or add new functionality.

However, when maintaining a tool whose purpose is to help users deal with another piece of software (as is the case with dbdeployer, which helps users deploy MySQL databases), there is yet another category of changes that doesn't fall into the standard categories. It's what happens when the software being helped (MySQL) changes its behavior in a way that would break the normal functioning of the helping tool, giving the maintainer a difficult choice:

  • shall I modify the tool's interface to adapt to the new behavior, breaking existing procedures?
  • or shall I adapt the tool's functioning behind the scenes to keep the interface unchanged?

My philosophy with dbdeployer (and MySQL-Sandbox before it) is to preserve the tool's interface, so that users don't have to change existing procedures. I call this kind of change an adjustment, because it is not a bug, as it is not the consequence of a coding error, and not a feature, as the intervention is not a conscious decision to add new functionality, but an emergency operation to preserve the status quo. You can think of this category as a capricious change in specifications, which so often happens to software developers, with the difference that the one changing the specs is not the user, but a third party who doesn't know, or care, about our goal of preserving API integrity.

For example, from MySQL 8.0.3 to 8.0.4 there was a change in the default authentication plugin. Instead of mysql_native_password, MySQL 8.0.4 uses caching_sha2_password. The immediate side effect for MySQL-Sandbox and dbdeployer was that replication didn't work out of the box. A possible solution would have been to force the old authentication plugin, but that would not allow users to test the new one. Since the main reason to use a tool like dbdeployer is to experiment with new releases safely, I had to keep the default behavior. Thus, I left the default plugin in place and changed the way replication works. It's an ugly workaround, actually, but it allows users to see the new behavior without losing existing functionality.
To complete the adjustment, I added a new option, --native-auth-plugin, which deploys using the old mysql_native_password. In total, the adjustment consists of a behind-the-scenes change, almost undetectable by users, plus a new option for users who want to keep the familiar authentication.

From the point of view of semantic versioning, this kind of change is a backward-compatible modification of the API, which warrants an increase of the minor number of the version.

Another example: when MySQL went from 8.0.4 to 8.0.11, it introduced a deal-breaker change: the X Plugin is now loaded by default. This is convenient for users of MySQL as a document store, as they don't need to enable the plugin manually, but bad news for everyone else, as the server now opens a port and a socket that many users would not choose to open voluntarily. What's worse, when installing several sandboxes of version 8.0.11 on the same host (for example in replication), one will succeed in reserving the plugin port and socket, while the others will have their error logs populated with surprising errors about a socket already being in use.

The solution is similar to the previous one. When dbdeployer detects MySQL 8.0.11 or newer, it adds options to customize the mysqlx plugin port and socket, thus allowing a frictionless deployment where the new functionality is available to the brave experimenters. At the same time, I added a new option (--disable-mysqlx) for those who really don't want an extra port and socket in their servers, not even for testing.
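The port clash described above boils down to a port-allocation problem: every sandbox needs its own server port and, from 8.0.11, its own mysqlx port and socket. The sketch below shows how a deployment tool might hand them out; it is illustrative only, not dbdeployer's actual algorithm, and the 10000 offset and socket paths are arbitrary assumptions:

```python
def allocate_ports(base_port, count, mysqlx=True, x_offset=10000):
    """Assign a distinct server port (and mysqlx port/socket) to each sandbox.

    Without distinct mysqlx ports, the second sandbox of 8.0.11+ would fail
    to bind the X Plugin port/socket already reserved by the first one.
    """
    sandboxes = []
    for i in range(count):
        port = base_port + i
        sb = {"port": port}
        if mysqlx:
            sb["mysqlx_port"] = port + x_offset       # keep X Plugin ports apart too
            sb["mysqlx_socket"] = f"/tmp/mysqlx-{port}.sock"
        sandboxes.append(sb)
    return sandboxes

# Three replication sandboxes, no two sharing any port or socket:
nodes = allocate_ports(19600, 3)
all_ports = [p for n in nodes for p in (n["port"], n["mysqlx_port"])]
assert len(all_ports) == len(set(all_ports))
```

The point is simply that the tool, not the user, derives the extra X Plugin coordinates, so a multi-sandbox deployment keeps working with an unchanged interface.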

These adjustments are usually costly additions. While the added code is not much, they require extra tests, which are often complex and take more time to write and execute. The process of adding an adjustment goes mostly like this:

  • I dedicate my morning walk to thinking about the fix. Sometimes the fix requires several walks, while I decide on the least intrusive solution.
  • If the walk has been fruitful, writing the code takes just a few minutes. If I missed something, I iterate.
  • Then comes the more difficult part: writing meaningful tests that prove the adjustment is correct and doesn't introduce side effects in any MySQL version. And of course the option that reintroduces the old behavior must be tested too.
  • A positive side effect of this exercise is that I often realize I was missing a test for an important behavior, and then I write that one as well. The test suite included 6,000+ tests a month ago, and now it has almost doubled.

The Final Countdown: Are You Ready for Percona Live 2018?

It’s hard to believe Percona Live 2018 starts on Monday! We’re looking forward to seeing everyone in Santa Clara next week! Here are some quick highlights to remember:

  • In addition to all the amazing sessions and keynotes we’ve announced, we’ll be hosting the MySQL Community Awards and the Lightning Talks on Monday during the Opening Reception.
  • We’ve also got a great lineup of demos in the exhibit hall all day Tuesday and Wednesday – be sure to stop by and learn more about open source database products and tools.
  • On Monday, we have a special China Track now available from Alibaba Cloud, PingCAP and Shannon Systems. We’ve just put a $20.00 ticket on sale for that track, and if you have already purchased any of our other tickets, you are also welcome to attend those four sessions.
  • Don’t forget to make your reservation at the Community Dinner. It’s a great opportunity to socialize with everyone and Pythian is always a wonderful host!

Thanks to everyone who is sponsoring, presenting and attending! The community is what makes this event successful and so much fun to be a part of!

The post The Final Countdown: Are You Ready for Percona Live 2018? appeared first on Percona Database Performance Blog.

Restore a table / database from full backup – Yet another way

Restore requests are common and so are the restores of specific entities: a database, or one or more table(s). This has been discussed a lot and we have plenty of tools and solutions already available.
In this blog post we will cover an interesting solution that I came across when I received a restoration request from a client with a specific scenario.

The scenario? Well, the client was on a Windows server with 400GB of mysqldump and wanted to restore a table.

As Linux players we already know some tools and techniques to export a table or database from a mysqldump – for example, using the sed command or the mysqldumpsplitter script (itself based on sed). But on Windows we are powerless without sed (we're sad without sed). Also, there was no Cygwin to ease the pain.

We had to come up with a solution that works on Windows as well. During the discussion, my Pythian colleague Romuller suggested a cool but simple trick, which offers one more way of exporting or recovering a table from a full mysqldump.

So the trick here is as follows:
– Create a user with very specific grants, limited to the table(s) or database(s) we need to restore.
– Load the mysqldump into the database as that user, with --force. The --force option will ignore all the errors that occur due to the lack of privileges of the new user we created specifically for the restore.

Easy, right? Database consultants like to KISS ;).
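The effect of the trick can be modeled in a few lines: replay every statement of the dump, but let only the statements touching the granted table succeed, swallowing the access-denied errors just as --force does. This is a toy simulation with made-up helper names, not a client talking to a real server:

```python
def forced_replay(statements, granted_table):
    """Replay dump statements, ignoring those denied by missing grants."""
    applied, denied = [], []
    for stmt in statements:
        if granted_table in stmt:       # crude stand-in for a privilege check
            applied.append(stmt)
        else:
            denied.append(stmt)         # --force: note the error, keep going
    return applied, denied

dump = [
    "CREATE TABLE `emp_new` (...)",
    "CREATE TABLE `stories` (...)",
    "INSERT INTO `stories` VALUES (...)",
    "INSERT INTO `user_address` VALUES (...)",
]
applied, denied = forced_replay(dump, "`stories`")
assert len(applied) == 2 and len(denied) == 2
```

Only the `stories` statements take effect; everything else just produces errors that --force tells the client to ignore, which is exactly what the real run below shows.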

Let’s give it a try.

I selected a table “stories” and created the “bad” situation by dropping it.

mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+------------------------+
| Tables_in_test         |
+------------------------+
...
| stories                |
...
+------------------------+

mysql> select count(*) from stories;
+----------+
| count(*) |
+----------+
|      881 |
+----------+
1 row in set (0.02 sec)

mysql> drop table stories;
Query OK, 0 rows affected (0.29 sec)

Let’s begin the recovery phase now, following the grants method.
1. Create a user with limited grants, only on the test.stories table.

mysql> grant all privileges on test.stories to 'stories'@localhost identified by 'X';
Query OK, 0 rows affected, 1 warning (0.03 sec)

mysql> show warnings;
+---------+------+------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                            |
+---------+------+------------------------------------------------------------------------------------------------------------------------------------+
| Warning | 1287 | Using GRANT for creating new user is deprecated and will be removed in future release. Create new user with CREATE USER statement. |
+---------+------+------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Wait, there is a warning. It appears from MySQL 5.7.6 onward and says that using GRANT to create new users is deprecated in favour of the CREATE USER statement. So, to be ready for MySQL 8, we should use the following instead :)

CREATE USER 'stories'@'localhost' identified with mysql_native_password by 'X';
grant all privileges on test.stories to 'stories'@'localhost';

2. Load the mysqldump as the same user, with --force.

[root@mysql1c ~]# cat fuldump.sql | mysql -ustories -pX test --force
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1044 (42000) at line 22: Access denied for user 'stories'@'localhost' to database 'archive'
...
ERROR 1142 (42000) at line 420: ALTER command denied to user 'stories'@'localhost' for table 'emp_new'
...
ERROR 1142 (42000) at line 1966: ALTER command denied to user 'stories'@'localhost' for table 'user_address'

3. Verify the table is restored:

mysql> show tables;
+------------------------+
| Tables_in_test         |
+------------------------+
...
| stories                |
...
+------------------------+

mysql> select count(*) from stories;
+----------+
| count(*) |
+----------+
|      881 |
+----------+
1 row in set (0.00 sec)

Compared with extracting just the table from the dump, this method can take a lot of time: with --force the server still processes every statement in the dump and merely ignores the errors, so in most cases you end up reading the whole file. If our table appears early in the mysqldump, we can monitor the progress and kill the process once the table is restored. Otherwise, it may make more sense to install Cygwin or move the backup to a Linux box to extract the database object from the backup file.
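As yet another fallback, if Python happens to be installed on the Windows host, a few lines can act as a sed substitute and cut one table's section out of the dump. This sketch relies on the "-- Table structure for table" comment markers that mysqldump normally writes between tables; verify them against your own dump before relying on it:

```python
def extract_table(dump_lines, table):
    """Yield the lines of a mysqldump belonging to one table's section.

    A section runs from a table's "-- Table structure" header (which is
    followed by its CREATE TABLE and INSERT statements) up to the next
    table's header.
    """
    marker = f"-- Table structure for table `{table}`"
    inside = False
    for line in dump_lines:
        if line.startswith("-- Table structure for table"):
            inside = line.startswith(marker)   # enter/leave on each header
        if inside:
            yield line

# Toy dump with three tables; real dumps have more surrounding comments.
dump = [
    "-- Table structure for table `emp_new`",
    "CREATE TABLE `emp_new` (...);",
    "-- Table structure for table `stories`",
    "CREATE TABLE `stories` (...);",
    "INSERT INTO `stories` VALUES (1);",
    "-- Table structure for table `user_address`",
    "CREATE TABLE `user_address` (...);",
]
section = list(extract_table(dump, "stories"))
assert len(section) == 3
```

The extracted section can then be piped into mysql directly, without replaying (and error-ing through) the other 399 GB of the dump.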

Hope this helps.