Planet MySQL

MySQL Connector/ODBC 5.3.13 has been released

Dear MySQL users,

MySQL Connector/ODBC 5.3.13, a new version of the ODBC driver for the
MySQL database management system, has been released.

The available downloads include both a Unicode driver and an ANSI
driver based on the same modern codebase. Please select the driver
type you need based on the type of your application – Unicode or ANSI.
Server-side prepared statements are enabled by default. The driver is
suitable for use with any MySQL version from 5.6 onward.

This is the sixth release of the MySQL ODBC driver conforming to the
ODBC 3.8 specification. It contains implementations of key 3.8
features, including self-identification as an ODBC 3.8 driver,
streaming of output parameters (supported for binary types only), and
support of the SQL_ATTR_RESET_CONNECTION connection attribute (for the
Unicode driver only).

The release is now available in source and binary form for a number of
platforms from our download pages at

For information on installing, please see the documentation at

Changes in MySQL Connector/ODBC 5.3.13 (2019-04-29, General Availability)

Bugs Fixed

* Connector/ODBC 5.3 is now built with MySQL client library
5.7.26, which includes OpenSSL 1.0.2R. Issues fixed in
the new OpenSSL version are described at (Bug

* An exception was emitted when fetching the contents of
BLOB/TEXT records after executing a statement as a
server-side prepared statement with a bound parameter.
The workaround is to avoid parameters or to specify
NO_SSPS=1 in the connection string; this allows the
driver to fetch the data. (Bug #29282638, Bug #29512548,
Bug #28790708, Bug #93895, Bug #94545, Bug #92078)
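For reference, NO_SSPS=1 is set like any other driver option in the connection string. A sketch of a DSN-less connection string (server, database, and credentials are placeholders, not from the announcement):

```
DRIVER={MySQL ODBC 5.3 Unicode Driver};SERVER=localhost;DATABASE=test;UID=myuser;PWD=mypassword;NO_SSPS=1
```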

On Behalf of Oracle/MySQL Release Engineering Team,
Hery Ramilison

MySQL 8.0 GIS Units of Measure - Meter, foot, Clarke's yard, or Indian Foot

The ST_DISTANCE function has been upgraded in MySQL 8.0.16 to allow you to specify the unit of measure between two locations. Previously you had to convert from meters to what you desired, but now you can use the INFORMATION_SCHEMA.ST_UNITS_OF_MEASURE table to find many of the more popular measurements (foot, yard, statute mile, nautical mile, fathom) and some that are new to me (chain, link, various feet). However, some measures are omitted (furlong, smoot) that may have some relevance in your life.
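As a quick sketch (the coordinates are illustrative, not from this post, and assume SRID 4326 geographic data), the optional third argument to ST_Distance takes one of the unit names from the table below:

```sql
-- Two nearby points (latitude-longitude order, SRID 4326)
SET @p1 = ST_GeomFromText('POINT(40.7486 -73.9864)', 4326);
SET @p2 = ST_GeomFromText('POINT(40.7484 -73.9857)', 4326);

-- Distance in the default unit (metres) and in feet
SELECT ST_Distance(@p1, @p2) AS metres,
       ST_Distance(@p1, @p2, 'foot') AS feet;
```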

select * from information_schema.ST_UNITS_OF_MEASURE;
| UNIT_NAME                            | UNIT_TYPE | CONVERSION_FACTOR   | DESCRIPTION |
| metre                                | LINEAR    |                   1 |             |
| millimetre                           | LINEAR    |               0.001 |             |
| centimetre                           | LINEAR    |                0.01 |             |
| German legal metre                   | LINEAR    |        1.0000135965 |             |
| foot                                 | LINEAR    |              0.3048 |             |
| US survey foot                       | LINEAR    | 0.30480060960121924 |             |
| Clarke's yard                        | LINEAR    |        0.9143917962 |             |
| Clarke's foot                        | LINEAR    |        0.3047972654 |             |
| British link (Sears 1922 truncated)  | LINEAR    |          0.20116756 |             |
| nautical mile                        | LINEAR    |                1852 |             |
| fathom                               | LINEAR    |              1.8288 |             |
| US survey chain                      | LINEAR    |   20.11684023368047 |             |
| US survey link                       | LINEAR    |  0.2011684023368047 |             |
| US survey mile                       | LINEAR    |  1609.3472186944375 |             |
| Indian yard                          | LINEAR    |  0.9143985307444408 |             |
| kilometre                            | LINEAR    |                1000 |             |
| Clarke's chain                       | LINEAR    |       20.1166195164 |             |
| Clarke's link                        | LINEAR    |      0.201166195164 |             |
| British yard (Benoit 1895 A)         | LINEAR    |           0.9143992 |             |
| British yard (Sears 1922)            | LINEAR    |  0.9143984146160288 |             |
| British foot (Sears 1922)            | LINEAR    |  0.3047994715386762 |             |
| Gold Coast foot                      | LINEAR    |  0.3047997101815088 |             |
| British chain (Sears 1922)           | LINEAR    |  20.116765121552632 |             |
| yard                                 | LINEAR    |              0.9144 |             |
| British link (Sears 1922)            | LINEAR    |  0.2011676512155263 |             |
| British foot (Benoit 1895 A)         | LINEAR    |  0.3047997333333333 |             |
| Indian foot (1962)                   | LINEAR    |           0.3047996 |             |
| British chain (Benoit 1895 A)        | LINEAR    |          20.1167824 |             |
| chain                                | LINEAR    |             20.1168 |             |
| British link (Benoit 1895 A)         | LINEAR    |         0.201167824 |             |
| British yard (Benoit 1895 B)         | LINEAR    |  0.9143992042898124 |             |
| British foot (Benoit 1895 B)         | LINEAR    | 0.30479973476327077 |             |
| British chain (Benoit 1895 B)        | LINEAR    |  20.116782494375872 |             |
| British link (Benoit 1895 B)         | LINEAR    |  0.2011678249437587 |             |
| British foot (1865)                  | LINEAR    | 0.30480083333333335 |             |
| Indian foot                          | LINEAR    | 0.30479951024814694 |             |
| Indian foot (1937)                   | LINEAR    |          0.30479841 |             |
| Indian foot (1975)                   | LINEAR    |           0.3047995 |             |
| British foot (1936)                  | LINEAR    |        0.3048007491 |             |
| Indian yard (1937)                   | LINEAR    |          0.91439523 |             |
| Indian yard (1962)                   | LINEAR    |           0.9143988 |             |
| Indian yard (1975)                   | LINEAR    |           0.9143985 |             |
| Statute mile                         | LINEAR    |            1609.344 |             |
| link                                 | LINEAR    |            0.201168 |             |
| British yard (Sears 1922 truncated)  | LINEAR    |            0.914398 |             |
| British foot (Sears 1922 truncated)  | LINEAR    | 0.30479933333333337 |             |
| British chain (Sears 1922 truncated) | LINEAR    |           20.116756 |             |
47 rows in set (0.0019 sec)
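The CONVERSION_FACTOR column is simply the length of one unit in metres, so converting a metric distance yourself is a division. A quick Python sketch (factors copied from the table above):

```python
# CONVERSION_FACTOR = metres per unit, taken from ST_UNITS_OF_MEASURE above
factors = {
    "metre": 1.0,
    "foot": 0.3048,
    "yard": 0.9144,
    "nautical mile": 1852.0,
    "Statute mile": 1609.344,
}

def convert(metres, unit):
    """Convert a distance in metres to the given unit."""
    return metres / factors[unit]

print(convert(1852.0, "nautical mile"))  # 1.0
```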

2019 MySQL Community Contributor Award Program

Building the Vitess community has been our pride and joy. Being able to contribute to the MySQL community, even more so. Vitess’ Sugu Sougoumarane has been nominated by the MySQL group for Oracle’s 2019 MySQL Community Contributor Award Program, where he joins folks like Shlomi Noach, Peter Zaitsev, Gabriela D’Ávila Ferrara, Giuseppe Maxia and many other active MySQL community members recognised for their contributions to MySQL. Criteria for the nominations included: most active code contributor, bug reporter, most active MySQL blogger, people who play an active role in translating or documenting MySQL articles, people who provide feedback on DMR releases, labs releases, or change proposals, as well as anyone in the community who did really useful work that ought to be thanked publicly.

Rotating binary log master key online

Starting with version 8.0.16, the MySQL server introduces a new command that
allows the binary log master key to be rotated, online!

When binary log encryption is enabled, the binary log master key can be rotated online by using the following new command:
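The command itself is not reproduced in this excerpt; based on the MySQL 8.0.16 release (stated here from the server documentation, not from the original post), the new statement is:

```sql
ALTER INSTANCE ROTATE BINLOG MASTER KEY;
```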


This new command can be used to rotate the binary log master key periodically or whenever you suspect that a key might have been compromised.…

MySQL 8.0.16 Replication Enhancements

MySQL 8.0.16 was released last Thursday. In it, you can find some new replication features. Here is a quick summary. Follow-up blog posts will provide details about these features.

  • Large Messages Fragmentation Layer for Group Replication. Tiago Vale’s work introduces message fragmentation to the Group Communication Framework.

SQL Update Query Example | SQL Update Statement Tutorial

SQL Update Query Example | SQL Update Statement Tutorial is today’s topic. The SQL UPDATE statement is used to modify the existing records in a table. You need to be very careful when updating records in a table. The SQL WHERE clause in the UPDATE statement specifies which record(s) should be updated. If you omit the WHERE clause completely, all records in the table will be updated!

SQL Update Query Example

You need to specify which records to update via the WHERE clause; otherwise, all the rows will be affected. We can update a single column as well as multiple columns using the UPDATE statement, as per our requirement.
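One defensive habit worth showing (a sketch using the Apps table from later in this tutorial; not part of the original post) is to wrap a risky UPDATE in a transaction, so you can inspect the result before committing:

```sql
START TRANSACTION;

UPDATE Apps SET AppName = 'TestName' WHERE AppID = 2;

-- Inspect the change before making it permanent
SELECT * FROM Apps WHERE AppID = 2;

COMMIT;   -- or ROLLBACK; if the wrong rows were touched
```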

SQL Update Syntax

The syntax is as follows.

UPDATE table_name SET column1 = value1, column2 = value2, ... WHERE condition;

In the above query, the SET clause assigns new values to particular columns, and the WHERE clause condition selects the rows whose columns need to be updated.

See the following example.

I have the following table called Apps. 

select * from Apps

The output is as follows.


So, we have a table with six rows. Now, we will write an update query to modify one or multiple rows.

If you do not know how to create a table, then check out SQL Create Table and SQL Insert Query tutorials.

Update one row in SQL

Now, let’s write an update query that can affect only one row.

UPDATE Apps SET CreatorName = 'Tony Stark', AppName= 'IronSpider' WHERE AppID = 1

Here, we are updating the row whose AppID = 1. We are updating the CreatorName and AppName.

It will not return anything. So, if we need to check the output, write the following query.

select * from Apps

See the below output.


See, we have updated the row whose AppID = 1.

Update multiple rows in SQL

Let’s write a query where we update multiple rows.

UPDATE Apps SET CreatorName = 'CaptainAmerica', AppName= 'Shield' WHERE AppID IN('4', '5');

The above query will update the rows whose AppID is 4 or 5.

Now, check the output using the below query.

select * from Apps

You can see that AppID 4 and 5 have updated values.

Update All records in SQL

We can write an SQL Update query in which we update all the rows if we do not specify the WHERE condition. See the below query.

UPDATE Apps SET CreatorName = 'Thor', AppName= 'Asguard'

It will update the CreatorName and AppName for all six rows. Verify the output with the following query.

select * from Apps


That concludes the SQL Update Query Example | SQL Update Statement Tutorial.

The post SQL Update Query Example | SQL Update Statement Tutorial appeared first on AppDividend.

React & Axios JWT Authentication Tutorial with PHP & MySQL Server: Signup, Login and Logout

In this tutorial, we'll learn how to use React to build a login, signup and logout system, and Axios to send API calls and handle JWT tokens. For building the PHP application that implements the JWT-protected REST API, check out the PHP JWT Authentication Tutorial. We'll be using the application built in that tutorial as the backend for the React application we'll be building here.

Prerequisites

You will need the following prerequisites to follow this tutorial step by step: knowledge of JavaScript, React and PHP; PHP, Composer and MySQL installed on your development machine; Node.js and NPM installed on your system. That's it. Let's get started!

Cloning the PHP JWT App

Our example application implements JWT authentication. It exposes three endpoints: api/login.php, api/register.php and api/protected.php.

How to Run the PHP App

First clone the GitHub repository:

$ git clone

Next, navigate inside the project's folder and run the following commands to install the PHP dependencies and start the development server:

$ cd php-jwt-authentication-example
$ composer install
$ php -S

Enabling CORS

Since we'll be making use of two apps - the React/Webpack development server and the PHP server - which are running from two different ports on our local machine (considered as two different domains), we'll need to enable CORS in our PHP app. Open the api/register.php, api/login.php and api/protected.php files and add the following CORS header to enable any domain to send HTTP requests to these endpoints:

<?php
header("Access-Control-Allow-Origin: *");
?>

Installing create-react-app

Let's start by installing the create-react-app tool, which will be used to create the React project. Open a new terminal and run the following command:

$ npm install -g create-react-app

create-react-app is the official tool created by the React team to quickly start developing React apps.
Creating a React Project

Let's now generate our React project. In your terminal, run the following command:

$ create-react-app php-react-jwt-app

This will generate a React project with a minimal directory structure.

Installing Axios & Consuming the JWT REST API

We'll be using Axios for sending HTTP requests to our PHP JWT REST API, so we'll need to install it first. Go back to your terminal and run the following commands to install Axios from npm:

$ cd php-react-jwt-app
$ npm install axios --save

As of this writing, this will install axios v0.18.0. Next, let's create a module that encapsulates the code for communicating with the JWT REST API. In the src/ folder, create a utils folder, then create a JWTAuth.js file inside of it:

$ mkdir utils
$ touch JWTAuth.js

Open the src/utils/JWTAuth.js file and add the following code:

import axios from 'axios';

const SERVER_URL = "";

We import axios and define the SERVER_URL variable that contains the URL of the JWT authentication server. Next, define the login() method which will be used to log users in:

const login = async (data) => {
    const LOGIN_ENDPOINT = `${SERVER_URL}/api/login.php`;

    try {
        let response = await, data);

        if(response.status === 200 && && {
            let jwt =;
            let expire_at =;

            localStorage.setItem("access_token", jwt);
            localStorage.setItem("expire_at", expire_at);
        }
    } catch(e) {
        console.log(e);
    }
}

First, we construct the endpoint by concatenating the server URL with the /api/login.php path. Next, we send a POST request to the login endpoint with the data passed as a parameter to the login() method. If the response is successful, we store the JWT token and expiration date in local storage.

Note: Since returns a Promise, we use the async/await syntax to make our code look synchronous.
Next, define the register() method which creates a new user in the database:

const register = async (data) => {
    const SIGNUP_ENDPOINT = `${SERVER_URL}/api/register.php`;

    try {
        let response = await axios({
            method: 'post',
            responseType: 'json',
            url: SIGNUP_ENDPOINT,
            data: data
        });
    } catch(e) {
        console.log(e);
    }
}

We first construct the endpoint by concatenating the server URL with the /api/register.php path. Next, we use Axios to send a POST request to the register endpoint with the data passed as a parameter to the method.

Note: We use the async/await syntax to avoid working with Promises directly.

Finally, let's define the logout() method, which simply removes the JWT access token and expiration date from local storage:

const logout = () => {
    localStorage.removeItem("access_token");
    localStorage.removeItem("expire_at");
}

We use the removeItem() method of localStorage to remove the access_token and expire_at keys. Now, we need to export these methods so they can be imported from the other React components:

export { login, register, logout }

Calling the JWTAuth Methods in the React Component

Let's now make sure our login system works as expected. Open the src/App.js file and import the login(), register() and logout() methods from the src/utils/JWTAuth.js file:

import { login, register, logout } from "./utils/JWTAuth.js";

Next, define a login() method in the App component as follows:

class App extends Component {
    async login() {
        let info = {
            email: "",
            password: "123456789"
        };
        await login(info);
    }

This method simply calls the login() method of JWTAuth.js with hardcoded user information to log the user in. Next, define the register() method as follows:

    async register() {
        let info = {
            first_name: "kaima",
            last_name: "Abbes",
            email: "",
            password: "123456789"
        };
        await register(info);
    }

Note: We don't need to wrap the logout() method since we don't have to pass any parameters to it.
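Since login() stores an expire_at value alongside the token, a small helper can tell whether the stored token is still valid. This helper is not part of the original tutorial, and it assumes expire_at is a Unix timestamp in seconds - adjust if your backend returns something else:

```javascript
// Hypothetical helper: checks the expire_at key saved by login().
// Assumes expire_at is a Unix timestamp in seconds.
const isLoggedIn = () => {
  const expireAt = localStorage.getItem("expire_at");
  return expireAt !== null && Date.now() < Number(expireAt) * 1000;
};
```

You could call isLoggedIn() before rendering protected routes and fall back to the login form when it returns false.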
Finally, update the render() method to create the buttons for login, register and logout: render() { return ( <div className="container"> <div className="row"> <h1>React JWT Authentication Example</h1> <button className="btn btn-primary" onClick = { this.register }>Sign up</button> <button className="btn btn-primary" onClick = { this.login }>Log in</button> <button className="btn btn-primary" onClick = { logout }>Log out</button> </div> </div> ); } You should be able to use these buttons to test the register(), login() and logout() methods. Note: We used Bootstrap for styling the UI. In the next tutorial, we'll build the actual login and register UIs with forms to get the user's information and submit them to the PHP JWT authentication server. Conclusion In this tutorial, we've seen how to implement JWT authentication in React with Axios, PHP and MySQL.

Fun with Bugs #84 - On Some Public Bugs Fixed in MySQL 5.7.26

Oracle released minor MySQL Server versions in all supported branches on April 25, 2019. MySQL 5.7.26 is just one of them, but recently I prefer to ignore MySQL 8 releases (after checking that I can build them from source code at least somewhere, even if it takes 18G+ of disk space and that they work in basic tests), as there are more chances for MySQL 5.7 bug fixes to affect me (and customers I care about) directly.

So, in this yet another boring blog post (that would never be a reason for any award) I plan to concentrate on bugs reported in public MySQL bugs database and fixed in MySQL 5.7.26. As usual I name bug reporters explicitly and give links to their remaining currently active bug reports, if any. This time the list is short enough, so I do not even split it by categories:
  • Bug #93164 - "Memory leak in innochecksum utility detected by ASan". This bug was reported by Yura Sorokin from Percona, who also had contributed a patch (for some reason this is not mentioned in the official release notes).
  • Bug #90402 - "innodb async io error handling in io_event". Wei Zhao found yet another case when wrong data type was used in the code and I/O error was not handled, and this could lead even to crashes. He had submitted a patch.
  • Bug #89126 - "create table panic on innobase_parse_hint_from_comment". Nice bug report with a patch from Yan Huang. Note also detailed analysis and test case provided by Marcelo Altmann in the comment. It's a great example of cooperation of all sides: Oracle MySQL developers, bugs verification team, bug reporter and other community users.
  • Bug #92241 - "alter partitioned table add auto_increment diff result depending on algorithm". Yet another great finding from Shane Bester himself!
  • Bug #94247 - "Contribution: Fix fractional timeout values used with WAIT_FOR_EXECUTED_GTI ...". This bug report was created based on pull request from Dirkjan Bussink, who had suggested a patch to fix the problem. Note the comment from Shlomi Noach that refers to Bug #94311 (still private).
  • Bug #85158 - "heartbeats/fakerotate cause a forced sync_master_info". Note MTR test case contributed by Sveta Smirnova and code analysis in a comment from Vlad Lesin (both from Percona at that time) in this bug report from Trey Raymond.
  • Bug #92690 - "Group Replication split brain with faulty network". I do not care about group replication (I have enough Galera in my life instead), but I could not skip this report by Przemyslaw Malkowski from Percona, with detailed steps on how to reproduce. Note comments from other community members. Yet another case to show that good bug reports attract community feedback and are fixed relatively fast.
  • Bug #93750 - "Escaping of column names for GRANT statements does not persist in binary logs". Clear and simple bug report from Andrii Ustymenko. I wonder why it was not found by internal testing/QA. Quick test shows that MariaDB 10.3.7, for example, is not affected:
    c:\Program Files\MariaDB 10.3\bin>mysql -uroot -proot -P3316 test
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 9
    Server version: 10.3.7-MariaDB-log binary distribution

    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [test]> create table t_from(id int primary key, `from` int, c1 int);
    Query OK, 0 rows affected (0.582 sec)

    MariaDB [test]> create user 'user01'@'%' identified by 'user01';
    Query OK, 0 rows affected (0.003 sec)

    MariaDB [test]> grant select (`id`,`from`) on `test`.`t_from` to 'user01'@'%';
    Query OK, 0 rows affected (0.054 sec)

    MariaDB [test]> show master status;
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    | pc-PC-bin.000007 |      852 |              |                  |
    1 row in set (0.030 sec)

    MariaDB [test]> show binlog events in 'pc-PC-bin.000007';
    | Log_name         | Pos | Event_type        | Server_id | End_log_pos | Info
    | pc-PC-bin.000007 |   4 | Format_desc       |         1 |         256 | Server
    ver: 10.3.7-MariaDB-log, Binlog ver: 4                             |
    | pc-PC-bin.000007 | 256 | Gtid_list         |         1 |         299 | [0-1-42
    ]                                                                  |
    | pc-PC-bin.000007 | 299 | Binlog_checkpoint |         1 |         342 | pc-PC-b
    in.000007                                                          |
    | pc-PC-bin.000007 | 708 | Query             |         1 |         852 | use `te
    st`; grant select (`id`,`from`) on `test`.`t_from` to 'user01'@'%' |

    9 rows in set (0.123 sec)
  • Bug #73936 - "If the storage engine supports RBR, unsafe SQL statements end up in binlog". Nice bug report with MTR test case by Santosh Praneeth Banda. Note that the last comment about the fix mentions only MySQL 8.0.15, not a single word about the fix in MySQL 5.7.26 (or anything about MySQL 5.6.x, while the bug was reported for 5.6).
  • Bug #93341 - "Check for tirpc needs improvement". The need for improvement of CMake check was noted by Terje Røsten.
  • Bug #91803 - "mysqladmin shutdown does not wait for MySQL to shut down anymore". This regression bug (without a "regression" tag) was reported by Christian Roser.
  • Bug #91541 - ""Flush status" statement adds twice to global values". Yura Sorokin contributed a detailed analysis, MTR test case and a patch in this bug reported by Carlos Tutte.
  • Bug #90351 - "GLOBAL STATUS variables drift after rollback". Zsolt Parragi contributed a patch to this bug found and reported by Iwo P. For some reason this contribution is not highlighted in the release notes.
  • Bug #81441 - "Warning about localhost when using skip-name-resolve". One of many bug reports from Monty Solomon in which he (and other community members like Jean-François Gagné) had to spend a lot of efforts and fight with a member of bugs verification team to get the bug accepted as a real code bug and then get it fixed in all versions affected.
  • Bug #90902 - "Select Query With Complex Joins Leaks File Handles". This bug was reported by James Wilson. I still wonder if MySQL 5.6 was affected. The bug report says nothing about this (while I expect all supported GA versions to be checked when a bug is verified, and the results of such a check to be clearly documented).
The future looks bright for MySQL 5.7.

To summarize:
  1. Consider upgrade to 5.7.26 if you use complex joins, partitioned tables with auto_increment columns or rely on InnoDB or replication a lot.
  2. It's good to see crashing bugs that do not end up as hidden/"security", maybe because they are reported with patches...
  3. It's good to see examples of cooperation of several community users contributing to the same bug report!
  4. Percona engineers contribute a lot to MySQL, both in form of bug reports, patches and by helping other community users to make their point and get their bugs fixed fast.
  5. There are still things to improve in the way Oracle engineers handle bugs verification, IMHO.
  6.  It's also a bit strange to see only one optimizer-related fix in this release. It means that either MySQL optimizer is already near perfect and there are no bugs to fix (check yourself, but I see 123 bugs here), or that nobody cares that much about MySQL optimizer in 5.7 these days.
  7. It seems for some bugs fixed in previous MySQL 8.0.x minor release there is no extra check/updates in public comments about the versions with the fix when it is released in MySQL 5.6 or 5.7.

MySQL Shell 8.0.16 – What’s New?

The MySQL Development team is proud to announce a new version of the MySQL Shell which includes the following features:

  • Addition of a reporting framework:
    • API to register custom reports.
    • Shell command to display a specific report (\show).
    • Shell command to monitor a specific report (\watch).
…

MySQL Shell 8.0.16: User Defined Reports

In my blog yesterday, I wrote about the new reporting framework in MySQL Shell. It is part of the 8.0.16 release. I also noted that it includes the possibility to create your own custom reports and use those with the \show and \watch commands. This blog will explore how you can create a report and register it, so it is automatically available when you start MySQL Shell.

The help text for the example sessions report.

Background

You can write the code that generates the report in either JavaScript or Python. The reports can be used from either language mode – even SQL – irrespective of which language you choose, so go with what you are most comfortable with.

Once you have written your code, you save it in the init.d folder (it does not exist by default) inside the user configuration folder. By default, this is at the following location, depending on whether you are on Microsoft Windows or Linux:

  • Microsoft Windows: %AppData%\MySQL\mysqlsh
  • Linux: ~/.mysqlsh

You can override this path with the MYSQLSH_USER_CONFIG_HOME environment variable.
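For example (the directory name here is arbitrary, not from the original post):

```shell
# Make MySQL Shell read its user configuration (including init.d reports)
# from a custom directory instead of the default ~/.mysqlsh
export MYSQLSH_USER_CONFIG_HOME="$HOME/mysqlsh-custom"
```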

You are free to choose any file name, but a good rule is to name the file the same as the report. However, it is required that you use .py as the file name extension if you wrote the report in Python and .js if you used JavaScript.

At that point, you need to register the report, so you can use it through the reporting framework. You do that using the shell.registerReport() method from inside the same file that has the report code. It takes four arguments: the name of the report, the report type, the function generating the report (as a function object), and optionally a dictionary with the description. I will not go into the details of these arguments here beyond providing an example of using them. The manual has a quite detailed section on registering your report, including what the arguments are.



One thing that is worth discussing a bit as it influences how the report content should be formatted is the report type. This can have one of three values:

  • list: The content of the report is returned as a list of lists constituting the rows of a table. The \show and \watch commands can then show the data either using the familiar tabular format or in vertical. The decision of which display format to use can be made when running the report.
  • report: The report content is returned in YAML.
  • print: The report code prints the output directly.

The report and print types are more flexible, but the list type works well with query results.

This can all feel very abstract. The best way to actually understand how it works is to write an example report to go through the steps.
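As a warm-up before the full sessions report, here is a minimal sketch of a list-type report. The report name, the query, and the helper logic are illustrative, not from the original post. Saved as init.d/ inside the configuration folder, it would become available as \show uptime:

```python
def uptime(session, args, options):
    """A trivial 'list'-type report: server uptime in seconds."""
    res = session.run_sql("SHOW GLOBAL STATUS LIKE 'Uptime'")
    row = res.fetch_one()
    # For a "list" report, the first inner list is the header row
    # and the remaining inner lists are the data rows.
    return {"report": [["Variable_name", "Value"], [row[0], row[1]]]}

# The shell object only exists inside MySQL Shell's Python mode
try:
    shell.register_report("uptime", "list", uptime)
except NameError:
    pass
```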

Example Custom Report

The custom report I will create is based on the one in the reference manual, but modified to allow you to choose what to sort by. The example should help make it clearer how to create your own reports.

The example is quite simple and could be generated using the built-in query report, but it serves as a good starting point to understand the mechanics of custom reports, and even simple reports like this provide a way to have your report logic saved in one place and easily accessible from within MySQL Shell. The example is written in Python, but a report generating the same result written in JavaScript would look similar (although not identical).

Download the Source

You do not need to copy and paste all the code snippets if you want to try this example. You can download the entire file from below and extract the file with the report source code.

The Report Function

The first thing is to define the report itself. This report is called sessions, so the function with the code is also called sessions. This is not required, but it is best practice:

sort_allowed = {
    'thread': 'thd_id',
    'connection': 'conn_id',
    'user': 'user',
    'db': 'db',
    'latency': 'statement_latency',
    'memory': 'current_memory',
}

def sessions(session, args, options):

First a dictionary is defined with the keys specifying the allowed values for the --sort option and the values as what will actually be used for the ordering. Then there is the definition of the reporting function itself. The function takes three arguments:

  • session: A MySQL Shell session object. This gives you access to all of the session properties and methods when you create the report.
  • args: A list of any additional arguments passed to the report. This is what the query report uses to get the query that will be executed. This report does not use any such arguments, so anything passed this way will be ignored.
  • options: This is a dictionary with named options. This report will support two such named options:
    • --limit or -l which sets the maximum number of rows to retrieve. The option will use the limit key in the dictionary. The default is not to impose any limit.
    • --sort or -s which chooses what to sort by. The option will use the sort key in the dictionary. The report will support ordering by thread, connection, user, db, latency, and memory. The default is to sort by latency.

You can choose different names for the arguments if you prefer.

The next thing is to define the query that will retrieve the result that will be used in the report. You can do this in several ways. If you want to execute an SQL query, you can use session.sql() (where session is the name of the session object in your argument list). However, it is simpler to code the query using the X DevAPI as that makes it trivial to customize the query, for example with the limit option and what to order by.

sys = session.get_schema('sys')
session_view = sys.get_table('x$session')
query =
    'thd_id', 'conn_id', 'user', 'db',
    'sys.format_statement(current_statement) AS statement',
    'sys.format_time(statement_latency) AS latency',
    'format_bytes(current_memory) AS memory')

The statement will query the sys.x$session view. This is the non-formatted version of sys.session. The reason for using this is to allow custom sorting of the result set according to the --sort option. The view is obtained using the session.get_schema() method first to get a schema object for the sys schema, then the get_table() method of the schema object.

The query can then be defined from the table (view in this case) object by using the select() method. The arguments are the columns that should be included in the result. As you can see, it is possible to manipulate the columns and rename them.

Want to Learn More?

If you want to learn more about the MySQL X DevAPI and how to use the Python version of it, then I have written MySQL Connector/Python Revealed published by Apress. The book is available from Apress (print and DRM free ePub+PDF), Amazon (print and Kindle), Barnes & Noble (print), and others.

The X DevAPI makes it trivial to modify the query with the options the report is invoked with. First, handle the --sort option:

# Set what to sort the rows by (--sort)
try:
    order_by = options['sort']
except SystemError:
    order_by = 'latency'

if order_by not in sort_allowed:
    raise ValueError(
        'Unknown sort value: "{0}". Supported values: {1}'
        .format(order_by, sort_allowed.keys()))

if order_by in ('latency', 'memory'):
    direction = 'DESC'
else:
    direction = 'ASC'
query.order_by('{0} {1}'.format(
    sort_allowed[order_by], direction))

# If ordering by latency, ignore those statements with a NULL latency
# (they are not active)
if order_by == 'latency':
    query.where('statement_latency IS NOT NULL')

If the --sort option is not provided, a SystemError exception is raised. The first part of the snippet handles this and ensures that the report defaults to ordering by latency. Then, it is checked whether the provided value is one of the supported values.

The next step is to decide whether to sort in descending or ascending order. You can of course add another option for this, but here the logic is contained within the report choosing descending when sorting by latency or memory usage; otherwise ascending.

The final step is to tell MySQL what to order by, which is done by invoking the order_by() method. This is where the programmatic approach of the X DevAPI makes it easier to gradually put the query together compared to working directly with the SQL statement.

This report adds a little extra logic to the query. If the result is ordered by latency, only queries that are currently executing (those where the latency IS NOT NULL) are included. This is one of the advantages of creating a custom report rather than writing the query ad hoc: you can include logic like that.

The --limit option is handled in a similar way:

# Set the maximum number of rows to retrieve if --limit is set.
try:
    limit = options['limit']
except SystemError:
    limit = 0

if limit > 0:
    query.limit(limit)

There is not much to note about this code snippet. The limit is applied (if the value is greater than 0) by invoking the limit() method. Finally, the query can be executed and the report generated:

result = query.execute()
report = [result.get_column_names()]
for row in result.fetch_all():
    report.append(list(row))

return {'report': report}

The execute() method is used to tell MySQL that the query can be executed. This returns a result object. The get_column_names() method of the result object can be used to get the column names. Then, the rows are added by iterating over them. As you can see, there is only one report list: the first element is a list with the column headers, and the remaining elements are the row values.



Finally, the result is returned as a dictionary. That is it for generating the report, but it should also be registered.

Registering the Report

The registration of the report is done in the same file where the report function was defined. You perform the registration by calling the shell.register_report() method:

shell.register_report(
    'sessions',
    'list',
    sessions,
    {
        'brief': 'Shows which sessions exist.',
        'details': ['You need the SELECT privilege on sys.session view and the ' +
                    'underlying tables and functions used by it.'],
        'options': [
            {
                'name': 'limit',
                'brief': 'The maximum number of rows to return.',
                'shortcut': 'l',
                'type': 'integer'
            },
            {
                'name': 'sort',
                'brief': 'The field to sort by. Allowed values are: {0}'.format(
                    sort_allowed.keys()),
                'shortcut': 's',
                'type': 'string'
            }
        ],
        'argc': '0'
    }
)

The first argument is the name of the report, ‘sessions’, then the report type. The third argument is the function itself. Then comes a dictionary describing the report.

There are two parts to the dictionary: first a description of the report – a short (brief) description followed by more details – and then a list of the options that the report supports. The final element, argc, specifies the number of additional arguments.

Now, you are ready to test the report.

Testing the Report

First, the report must be installed. If you do not already have the init.d directory, create it under %AppData%\MySQL\mysqlsh if you are on Microsoft Windows or under ~/.mysqlsh if you are on Linux. Then copy the file with the report into the directory.

Now, start MySQL Shell and the report is ready to be used:

mysql-js> \show
Available reports: query, sessions.
mysql-js> \show sessions --help
sessions - Shows which sessions exist.

You need the SELECT privilege on sys.session view and the underlying tables
and functions used by it.

Usage:
       \show sessions [OPTIONS]
       \watch sessions [OPTIONS]

Options:
  --help                Display this help and exit.
  --vertical, -E        Display records vertically.
  --limit=integer, -l   The maximum number of rows to return.
  --sort=string, -s     The field to sort by. Allowed values are: ['latency',
                        'thread', 'db', 'connection', 'user', 'memory']

mysql-js> \show sessions
+--------+---------+---------------+------+-------------------------------------------------------------------+----------+------------+
| thd_id | conn_id | user          | db   | statement                                                         | latency  | memory     |
+--------+---------+---------------+------+-------------------------------------------------------------------+----------+------------+
| 65     | 28      | mysqlx/worker | NULL | SELECT `thd_id`,`conn_id`,`use ... ER BY `statement_latency` DESC | 38.09 ms | 965.58 KiB |
+--------+---------+---------------+------+-------------------------------------------------------------------+----------+------------+
mysql-js> \show sessions -E
*************************** 1. row ***************************
   thd_id: 65
  conn_id: 28
     user: mysqlx/worker
       db: NULL
statement: SELECT `thd_id`,`conn_id`,`use ... ER BY `statement_latency` DESC
  latency: 35.49 ms
   memory: 968.88 KiB

Notice how the help text has been generated from the information that was provided when the report was registered, and how the -E option can be used to turn the tabular output format into the vertical format. Note also that the report is invoked from JavaScript mode and still works even though the report is written in Python – MySQL Shell will automatically handle that for you and ensure the report is executed using the correct interpreter.

It is left as an exercise for the reader to add the --sort and --limit options and to use the report with the \watch command.


On Microsoft Windows, it sometimes happens that even when an option is not explicitly passed to the report, the options dictionary still contains a value for it. You can avoid that by providing the options explicitly.

One related feature that is worth covering before finishing is the shell.reports object.

The shell.reports Object

So far, the \show and \watch commands have been used to invoke the reports, but there is a lower-level way to do it – using the shell.reports object. It is also a very useful way to explore which reports are available.

Let’s start with the latter – exploring reports – as that also shows you how the shell.reports object works. As usual in MySQL Shell, it has built-in help:

mysql-py>
NAME
      reports - Gives access to built-in and user-defined reports.

SYNTAX
      shell.reports

DESCRIPTION
      The 'reports' object provides access to built-in reports.

      All user-defined reports registered using the shell.register_report()
      method are also available here.

      The reports are provided as methods of this object, with names
      corresponding to the names of the available reports.

      All methods have the same signature:
      Dict report(Session session, List argv, Dict options), where:

      - session - Session object used by the report to obtain the data.
      - argv (optional) - Array of strings representing additional arguments.
      - options (optional) - Dictionary with values for various
        report-specific options.

      Each report returns a dictionary with the following keys:

      - report (required) - List of JSON objects containing the report. The
        number and types of items in this list depend on type of the report.

      For more information on a report use:'report_name')

FUNCTIONS
      help([member])
            Provides help about this object and it's members

      query(session, argv)
            Executes the SQL statement given as arguments.

      sessions(session[, argv][, options])
            Shows which sessions exist.

This includes a list of the available functions – and notice that the two reports that exist, query and sessions, are among them. You can also use the help() function with the report name as a string argument to get the report-specific help.

If you invoke one of the report functions, you execute the report. This is much like invoking the report using the \show command, but it is the raw report result that is returned. Let’s try it for both the query and sessions reports:

mysql-py> shell.reports.query(shell.get_session(), ["SELECT NOW()"])
{
    "report": [
        [
            "NOW()"
        ],
        [
            "2019-04-27 15:53:21"
        ]
    ]
}
mysql-py> shell.reports.sessions(shell.get_session(), [], {'limit': 10, 'sort': 'latency'})
{
    "report": [
        [
            "thd_id",
            "conn_id",
            "user",
            "db",
            "statement",
            "latency",
            "memory"
        ],
        [
            66,
            29,
            "mysqlx/worker",
            null,
            "SELECT `thd_id`,`conn_id`,`use ... ment_latency` DESC LIMIT 0, 10",
            "39.76 ms",
            "886.99 KiB"
        ]
    ]
}

It is not often this is needed, but in case you want to manually manipulate the output, it can be useful.
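For example, the raw dictionary can be post-processed however you like. Here is a small sketch (the report_to_csv helper is hypothetical, not part of the Shell API) that turns the result into CSV:

```python
# Hypothetical helper that converts the {'report': [...]} dictionary
# returned by a report into CSV text. The example dict mimics the
# structure shown above: first row = headers, remaining rows = values.
import csv
import io


def report_to_csv(result):
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in result['report']:
        writer.writerow(row)
    return buf.getvalue()


example = {'report': [['conn_id', 'latency'], [28, '38.09 ms']]}
print(report_to_csv(example), end='')
```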


If you use JavaScript mode, then use shell.getSession() instead of shell.get_session() to get a session object to pass to the report.

That is all. Now over to you to create your own reports.

Load Balanced ProxySQL in Google Cloud

There are three different ways to deploy ProxySQL between your application and the backend MySQL services.

  1. Locally, on the MySQL servers.
  2. Between the MySQL servers and the application.
  3. Colocated on the application servers themselves.

Without going into too much detail – each has its own limitations. In the first form, the application needs to know about all MySQL servers at any given point in time. With the third form, given a large number of application servers – especially in the age of Kubernetes, where apps can recycle easily or be scaled up and down – backend connections can increase exponentially, leading to issues.

In the second form, load balancing between a pool of ProxySQL servers is normally the challenge. Do you load balance the load balancers? While there are approaches like balancing from the application, similar to how the MongoDB drivers work, the application still needs to know and maintain a list of healthy backend proxies.

Google Cloud Platform’s (GCP) internal load balancer is a software-based managed service implemented via virtual networking. This means that, unlike physical load balancers, it is designed not to be a single point of failure.

We have played with Internal Load Balancers (ILB) and ProxySQL using the architecture below. There are a few steps and items involved, explained in the following sections.

VM Instance Group

An instance group will be created to run the ProxySQL services. This needs to be a managed instance group so the instances are distributed across multiple zones. A managed instance group can auto scale, which you might or might not want. For example, a problematic ProxySQL instance can easily be replaced with a templatized VM instance.

Health Check

Health checks are the tricky part. GCP’s internal load balancer supports HTTP(S), SSL, and TCP health checks. In this case, as long as ProxySQL is responding on the service port or admin port, the service is up, right? Yes, but this is not necessarily enough, since a port may respond while the instance is misconfigured and returns errors.

With ProxySQL, you have to treat it like an actual MySQL instance (i.e., log in and issue a query). On the other hand, the load balancer should be agnostic and does not necessarily need to know which backends do or do not work. The availability of backends should be left to ProxySQL as much as possible.

One way to achieve this is to use dummy rewrite rules inside ProxySQL. In the example below, we’ve configured an account called percona that is assigned to a non-existent hostgroup. What we are doing is simply rewriting SELECT 1 queries to return an OK result.

mysql> INSERT INTO mysql_query_rules (active, username, match_pattern, OK_msg)
    -> VALUES (1, 'percona', 'SELECT 1', '1');
Query OK, 1 row affected (0.00 sec)

mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> SAVE MYSQL QUERY RULES TO DISK;
Query OK, 0 rows affected (0.01 sec)

[root@west1-proxy-group-9bgs ~]# mysql -upercona -ppassword -P3306 -h127.1
...
mysql> SELECT 1;
Query OK, 0 rows affected (0.00 sec)

1

This does not solve the whole problem, though, as the ILB only supports primitive TCP and HTTP checks. We still need a layer where the response from ProxySQL is translated into a form the ILB will understand. My personal preference is to expose an HTTP service that queries ProxySQL and responds to the ILB HTTP-based health check. It provides additional flexibility, such as being able to check specific backends or all of them.
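Such a shim can be sketched in a few lines (everything here is illustrative: the probe is a placeholder for a real "log in and run SELECT 1" check against ProxySQL, and the port is an assumption):

```python
# Sketch of an HTTP health-check shim between the ILB and ProxySQL.
from http.server import BaseHTTPRequestHandler, HTTPServer


def status_for(probe_ok):
    # Translate the ProxySQL probe result into an HTTP status the ILB
    # health check understands: 200 = healthy, 503 = unhealthy.
    return 200 if probe_ok else 503


def probe_proxysql():
    # Placeholder: a real probe would connect as the dummy 'percona' user
    # and run SELECT 1, expecting the rewritten OK result.
    return True


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        code = status_for(probe_proxysql())
        self.send_response(code)
        self.end_headers()
        self.wfile.write(b'OK' if code == 200 else b'UNHEALTHY')


# To expose the check (port is an assumption):
# HTTPServer(('', 8080), HealthHandler).serve_forever()
```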

Firewall Rules

Health checks to the ProxySQL instances come from a specific set of IP ranges. In our case, these would be and. Firewall ports need to be open from these ranges to either the HTTP or TCP ports on the ProxySQL instances.

In our next post, we will use Orchestrator to manage cross region replication for high availability.

Bye Bye to mysql_upgrade, change to skip_grant_tables, and One Year of MySQL 8.0 GA

The MySQL 8.0.16 Release Notes are very interesting and sadly not read enough. One thing that may have escaped attention is that you no longer have to run mysql_upgrade after updating the binaries.

Let me repeat: you no longer have to run mysql_upgrade after updating the binaries. 

From the release notes:
Previously, after installation of a new version of MySQL, the MySQL server automatically upgrades the data dictionary tables at the next startup, after which the DBA is expected to invoke mysql_upgrade manually to upgrade the system tables in the mysql schema, as well as objects in other schemas such as the sys schema and user schemas.
Starting with 8.0.16, the server itself does the work previously done by mysql_upgrade, and mysql_upgrade itself is deprecated.

I have had to help too many folks who either forgot to run mysql_upgrade after an upgrade or did not know they could not run it properly due to a lack of permissions on their unprivileged user account.

One Year of 8.0

And speaking of MySQL 8.0, it has been out for OVER one year now. Woot!

skip_grant_tables

Another change to note concerns the much-abused skip_grant_tables option.

Previously, if the grant tables were corrupted, the MySQL server wrote a message to the error log but continued as if the --skip-grant-tables option had been specified. This resulted in the server operating in an unexpected state unless --skip-grant-tables had in fact been specified. Now, the server stops after writing a message to the error log unless started with --skip-grant-tables.

MySQL InnoDB Cluster – What’s new in the 8.0.16 release

The MySQL Development Team is very happy to announce a new 8.0 GA Maintenance Release of InnoDB Cluster – 8.0.16!

In addition to important bug fixes, 8.0.16 brings very useful new features!

This blog post will cover MySQL Shell and the AdminAPI; for detailed information on what’s new in MySQL Router, stay tuned for an upcoming blog post!…


MySQL Shell 8.0.16: Built-in Reports

Readers of my blog know that I like how MySQL Shell allows you to customize it and use its Python and JavaScript support to create custom libraries with tools that help with your daily tasks, and even to create auto-refreshing reports. Lefred has even taken this a step further and started to port Innotop to MySQL Shell.

One disadvantage of my example of auto-refreshing reports and the Innotop port is that they both rely on the curses Python module to refresh the screen. While avoiding reinventing the wheel is usually a good thing, and the curses library is both powerful and easy to use, it is not well supported on Microsoft Windows. The good news is that in MySQL 8.0.16 and later, you can also get auto-refreshing reports with a new built-in reporting framework in MySQL Shell. This blog shows how this framework works.

Example of using the \watch command to generate an auto-refreshing report.

Built-In Features

The great thing with the built-in framework is that you can start using it even without coding, as it comes with a pre-configured report. The framework consists of three parts:

  • \show: This is the most basic command which runs a report once and displays the result.
  • \watch: This is similar to the watch command on Linux, where a command (report in this case) is repeatedly executed with the screen refreshed to show the new result.
  • shell.registerReport(): This method can be used to register custom reports. The details of custom reports will be saved for a later blog.

The \show command is a good place to start.

The \show Command

You can get more information about how the \show command works and reports in general using the built-in help system:

mysql-js> \h \show
NAME
      \show - Executes the given report with provided options and arguments.

SYNTAX
      \show <report_name> [options] [arguments]

DESCRIPTION
      The report name accepted by the \show command is case-insensitive, '-'
      and '_' characters can be used interchangeably.

      Common options:

      - --help - Display help of the given report.
      - --vertical, -E - For 'list' type reports, display records vertically.

      The output format of \show command depends on the type of report:

      - 'list' - displays records in tabular form (or vertically, if
        --vertical is used),
      - 'report' - displays a YAML text report,
      - 'print' - does not display anything, report is responsible for text
        output.

      If executed without the report name, lists available reports.

      Note: user-defined reports can be registered with shell.registerReport()
      method.

EXAMPLES
      \show
            Lists available reports, both built-in and user-defined.

      \show query show session status like 'Uptime%'
            Executes 'query' report with the provided SQL statement.

      \show query --vertical show session status like 'Uptime%'
            As above, but results are displayed in vertical form.

      \show query --help
            Displays help for the 'query' report.

SEE ALSO

Additional entries were found matching \show

The following topics were found at the SQL Syntax category:

- SHOW

For help on a specific topic use: \? <topic>

e.g.: \? SHOW

This already gives a lot of information, not only about the \show command but also about reports in general. Reports can be in one of three formats (more on that in a later blog). If a report uses the list format (which the query report discussed below does), you can get the output in tabular format (the default) or vertically using the --vertical or -E option. Finally, you can get more information about a report by running it with the --help option, and you can get a list of known reports by running \show without arguments:

mysql-js> \show
Available reports: query.

Let’s take a closer look at the query report.

The Query Report

The query report is a very simple report that takes a query and runs it. You can get the help text for it by executing \show query --help:

mysql-js> \show query --help
query - Executes the SQL statement given as arguments.

Usage:
       \show query [OPTIONS] [ARGUMENTS]
       \watch query [OPTIONS] [ARGUMENTS]

Options:
  --help            Display this help and exit.
  --vertical, -E    Display records vertically.

Arguments:
  This report accepts 1-* arguments.

So, to run it, you simply provide the query as an argument – either as is or as a quoted string. Let’s say you want to use the following query for the report:

SELECT conn_id,
       sys.format_statement(current_statement) AS statement,
       format_pico_time(statement_latency) AS latency
  FROM sys.x$session
 ORDER BY statement_latency DESC
 LIMIT 10

This will show the longest running queries limited to 10 queries. Note that it uses the new format_pico_time() function, which replaces the sys.format_time() function in MySQL 8.0.16. Newlines are not allowed in the query when generating the report, so the command becomes:

mysql-js> \show query SELECT conn_id, sys.format_statement(current_statement) AS statement, format_pico_time(statement_latency) AS latency FROM sys.x$session ORDER BY statement_latency DESC LIMIT 10
+---------+-------------------------------------------------------------------+----------+
| conn_id | statement                                                         | latency  |
+---------+-------------------------------------------------------------------+----------+
|       8 | SELECT conn_id, sys.format_sta ... tatement_latency DESC LIMIT 10 | 33.34 ms |
|       4 | NULL                                                              | 0 ps     |
+---------+-------------------------------------------------------------------+----------+

Granted, this is not particularly useful – you could just have executed the query on its own. However, if you consider the \watch command instead, it becomes more useful.


The \show command is more useful for complex reports that do more than execute a single query, or that execute a complex query where the report functions as a stored query.

The \watch Command

The \watch command supports two additional options of its own:

  • --interval=float, -i float: The amount of time in seconds to wait after displaying the result of the report before the report is run again. Valid values are 0.1 second to 86400 seconds (one day).
  • --nocls: Do not clear the screen between iterations of the report. Subsequent output is displayed below the previous output. This can, for example, be useful for reports returning a single line of output, so that the history of the report stays on the screen.

Reports may also add options of their own. The query report, for example, accepts one argument, which is the query to execute. Other reports may accept other arguments.

Otherwise, you start the report the same way as when using \show. For example, to run the query every five seconds:

mysql-js> \watch query --interval=5 SELECT conn_id, sys.format_statement(current_statement) AS statement, format_pico_time(statement_latency) AS latency FROM sys.x$session ORDER BY statement_latency DESC LIMIT 10

That’s it. If you want to stop the report again, use CTRL+c and the report will stop after the next refresh.


The report framework in MySQL Shell 8.0.16 gives a nice starting point for generating reports. The built-in query report may not be the fanciest you can think of, but it is a very easy way to quickly make a query run repeatedly at set intervals. However, the real power of the report framework is that you now have a framework to create cross-platform custom reports. That will be the topic of a later blog.

Benchmarking Manual Database Deployments vs Automated Deployments

There are multiple ways of deploying a database. You can install it by hand, or you can rely on the widely available infrastructure orchestration tools like Ansible, Chef, Puppet, or Salt. Those tools are very popular, and it is quite easy to find scripts, recipes, playbooks, you name it, which will help you automate the installation of a database cluster. There are also more specialized database automation platforms, like ClusterControl, which can also be used for automated deployment. What would be the best way of deploying your cluster? How much time will you actually need to deploy it?

First, let us clarify what we want to do. Let’s assume we will be deploying Percona XtraDB Cluster 5.7. It will consist of three nodes, and for that we will use three Vagrant virtual machines running Ubuntu 16.04 (bento/ubuntu-16.04 image). We will attempt to deploy a cluster manually, then using Ansible, and then using ClusterControl. Let’s see what the results look like.

Manual Deployment

Repository Setup - 1 minute, 45 seconds

First of all, we have to configure Percona repositories on all Ubuntu nodes. A quick Google search, SSHing into the virtual machines, and running the required commands took 1 minute 45 seconds.

We found the following page with instructions:

and we executed steps described in “DEB-BASED GNU/LINUX DISTRIBUTIONS” section. We also ran apt update, to refresh apt’s cache.

Installing PXC Nodes - 2 minutes 45 seconds

This step basically consists of executing:

root@vagrant:~# apt install percona-xtradb-cluster-5.7

The rest is mostly dependent on your internet connection speed as packages are being downloaded. Your input will also be needed (you’ll be entering a password for the superuser), so it is not an unattended installation. When everything is done, you will end up with three running Percona XtraDB Cluster nodes:

root     15488  0.0  0.2   4504  1788 ?   S    10:12   0:00 /bin/sh /usr/bin/mysqld_safe
mysql    15847  0.3 28.3 1339576 215084 ? Sl   10:12   0:00  \_ /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --wsrep-provider=/usr/lib/galera3/ --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/ --socket=/var/run/mysqld/mysqld.sock --wsrep_start_position=00000000-0000-0000-0000-000000000000:-1

Configuring PXC Nodes - 3 minutes, 25 seconds

Here starts the tricky part. It is really hard to quantify experience and how much time one needs to actually understand what has to be done. What is good is that a Google search for “how to install percona xtradb cluster” points to Percona’s documentation, which describes what the process should look like. It may still take more or less time, depending on how familiar you are with PXC and Galera in general. In the worst case scenario, you will not be aware of any additional required actions, and you will connect to your PXC and start working with it, not realizing that, in fact, you have three nodes, each forming a cluster of its own.

Let’s assume we follow the recommendation from Percona and time just those steps to be executed. In short, we modified configuration files as per instructions on the Percona website, we also attempted to bootstrap the first node:

root@vagrant:~# /etc/init.d/mysql bootstrap-pxc
mysqld: [ERROR] Found option without preceding group in config file /etc/mysql/my.cnf at line 10!
mysqld: [ERROR] Fatal error in defaults handling. Program aborted!
mysqld: [ERROR] Found option without preceding group in config file /etc/mysql/my.cnf at line 10!
mysqld: [ERROR] Fatal error in defaults handling. Program aborted!
mysqld: [ERROR] Found option without preceding group in config file /etc/mysql/my.cnf at line 10!
mysqld: [ERROR] Fatal error in defaults handling. Program aborted!
mysqld: [ERROR] Found option without preceding group in config file /etc/mysql/my.cnf at line 10!
mysqld: [ERROR] Fatal error in defaults handling. Program aborted!
 * Bootstrapping Percona XtraDB Cluster database server mysqld
^C

This did not look correct. Unfortunately, the instructions weren’t crystal clear. Again, if you don’t know what is going on, you will spend more time trying to understand what happened. Luckily, comes very helpful (although not the first result on the list that we got), and you should realize that you are missing the [mysqld] section header in your /etc/mysql/my.cnf file. Adding this on all nodes and repeating the bootstrap process solved the issue. In total, we spent 3 minutes and 25 seconds (not including googling for the error, as we noticed immediately what the problem was).

Configuring for SST, Bringing Other Nodes Into the Cluster - Starting From 8 Minutes to Infinity

The instructions on the Percona website are quite clear. Once you have one node up and running, just start the remaining nodes and you will be fine. We tried that, and we were unable to see more nodes joining the cluster. This is where it is virtually impossible to tell how long it will take to diagnose the issue. It took us 6-7 minutes, but to be able to do it quickly you have to:

  1. Be familiar with how the PXC configuration is structured:

     root@vagrant:~# tree /etc/mysql/
     /etc/mysql/
     ├── conf.d
     │   ├── mysql.cnf
     │   └── mysqldump.cnf
     ├── my.cnf -> /etc/alternatives/my.cnf
     ├── my.cnf.fallback
     ├── my.cnf.old
     ├── percona-xtradb-cluster.cnf
     └── percona-xtradb-cluster.conf.d
         ├── client.cnf
         ├── mysqld.cnf
         ├── mysqld_safe.cnf
         └── wsrep.cnf
  2. Know how the !include and !includedir directives work in MySQL configuration files
  3. Know how MySQL handles the same variables included in multiple files
  4. Know what to look for and be aware of configurations that would result in node bootstrapping itself to form a cluster on its own

The problem was related to the fact that the instructions did not mention any file except /etc/mysql/my.cnf when, in fact, we should have been modifying /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf. That file contained an empty variable:


and such a configuration forces the node to bootstrap, as it does not have information about other nodes to join. We set that variable in /etc/mysql/my.cnf, but the wsrep.cnf file was included later, overwriting our setup.
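The override behavior can be illustrated with a minimal my.cnf sketch (an illustrative fragment, not the exact Percona files; the addresses are made up):

```ini
# Illustrative my.cnf sketch - not the exact Percona configuration files.
[mysqld]
# Set in my.cnf:
wsrep_cluster_address = gcomm://,,

# Files pulled in by !includedir are read after the lines above. MySQL
# keeps the last occurrence of a variable it reads, so if wsrep.cnf (in
# the directory below) also sets wsrep_cluster_address - even to an
# empty value - that setting wins.
!includedir /etc/mysql/percona-xtradb-cluster.conf.d/
```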

This issue might be a serious blocker for people who are not really familiar with how MySQL and Galera work, resulting in hours, if not more, of debugging.

Total Installation Time - 16 minutes (If You Are a MySQL DBA Like I Am)

We managed to install Percona XtraDB Cluster in 16 minutes. You have to keep a couple of things in mind - we did not tune the configuration. That is something which will require more time and knowledge. A PXC node comes with a simple configuration, related mostly to binary logging and Galera writeset replication. There is no InnoDB tuning. If you are not familiar with MySQL internals, this means hours if not days of reading and familiarizing yourself with the internal mechanisms. Another important thing is that this is a process you would have to re-apply for every cluster you deploy. Finally, we managed to identify the issue and solve it very quickly due to our experience with Percona XtraDB Cluster and MySQL in general. A casual user will most likely spend significantly more time trying to understand what is going on and why.

Ansible Playbook

Now, on to automation with Ansible. Let’s try to find and use an Ansible playbook which we could reuse for all further deployments. Let’s see how long it will take to do that.

Configuring SSH Connectivity - 1 minute

Ansible requires SSH connectivity across all the nodes to connect and configure them. We generated an SSH key and manually distributed it across the nodes.

Finding Ansible Playbook - 2 minutes 15 seconds

The main issue here is that there are so many playbooks available out there that it is impossible to decide which is best. As such, we decided to go with the top 3 Google results and try to pick one. We decided on as it seems to be more configurable than the remaining ones.

Cloning Repository and Installing Ansible - 30 seconds

This was quick; all we needed to do was:

apt install ansible git
git clone

Preparing Inventory File - 1 minute 10 seconds

This step was also very simple, we created an inventory file using example from documentation. We just substituted IP addresses of the nodes to what we have configured in our environment.

Preparing a Playbook - 1 minute 45 seconds

We decided to use the most extensive example from the documentation, which also includes a bit of configuration tuning. We prepared a correct directory structure for Ansible (there was no such information in the documentation):

/root/pxcansible/
├── inventory
├── pxcplay.yml
└── roles
    └── ansible-role-XtraDB-Cluster
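A minimal pxcplay.yml wiring the cloned role to the inventory might look like this; the pxc group name is our assumption, and a real deployment would also set the role’s tuning variables.

```shell
# Minimal playbook sketch (host group name is an assumption)
cat > pxcplay.yml <<'EOF'
---
- hosts: pxc
  become: yes
  roles:
    - ansible-role-XtraDB-Cluster
EOF
cat pxcplay.yml
```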

Then we ran it but immediately we got an error:

root@vagrant:~/pxcansible# ansible-playbook pxcplay.yml

[WARNING]: provided hosts list is empty, only localhost is available

ERROR! no action detected in task

The error appears to have been in '/root/pxcansible/roles/ansible-role-XtraDB-Cluster/tasks/main.yml': line 28,
column 3, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: "Include {{ ansible_distribution }} tasks"
  ^ here

We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:

    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"

This took 1 minute and 45 seconds.

Fixing the Playbook Syntax Issue - 3 minutes 25 seconds

The error was misleading, but the general rule of thumb is to try a more recent Ansible version, which we did. We googled and found good instructions on the Ansible website. The next attempt to run the playbook also failed:

TASK [ansible-role-XtraDB-Cluster : Delete anonymous connections] **********
fatal: [node2]: FAILED! => {"changed": false, "msg": "The PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) module is required."}
fatal: [node3]: FAILED! => {"changed": false, "msg": "The PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) module is required."}
fatal: [node1]: FAILED! => {"changed": false, "msg": "The PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) module is required."}

Setting up the new Ansible version and running the playbook up to this error took 3 minutes and 25 seconds.

Fixing the Missing Python Module - 3 minutes 20 seconds

Apparently, the role we used did not take care of its own prerequisites: a Python module needed for connecting to and securing the Galera cluster was missing. We first tried to install MySQL-python via pip, but it became apparent that this would take more time, as it required mysql_config:

root@vagrant:~# pip install MySQL-python
Collecting MySQL-python
  Downloading (108kB)
    100% |████████████████████████████████| 112kB 278kB/s
    Complete output from command python egg_info:
    sh: 1: mysql_config: not found
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-zzwUtq/MySQL-python/", line 17, in <module>
        metadata, options = get_config()
      File "/tmp/pip-build-zzwUtq/MySQL-python/", line 43, in get_config
        libs = mysql_config("libs_r")
      File "/tmp/pip-build-zzwUtq/MySQL-python/", line 25, in mysql_config
        raise EnvironmentError("%s not found" % (mysql_config.path,))
    EnvironmentError: mysql_config not found

    ----------------------------------------
Command "python egg_info" failed with error code 1 in /tmp/pip-build-zzwUtq/MySQL-python/

mysql_config is provided by the MySQL development libraries, so we would have had to install them manually, which was pretty much pointless. We decided to go with PyMySQL instead, which did not require any other packages. This brought us to another issue:

TASK [ansible-role-XtraDB-Cluster : Delete anonymous connections] **********
fatal: [node3]: FAILED! => {"changed": false, "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1698, u\"Access denied for user 'root'@'localhost'\")"}
fatal: [node2]: FAILED! => {"changed": false, "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1698, u\"Access denied for user 'root'@'localhost'\")"}
fatal: [node1]: FAILED! => {"changed": false, "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1698, u\"Access denied for user 'root'@'localhost'\")"}
        to retry, use: --limit @/root/pxcansible/pxcplay.retry

Up to this point we spent 3 minutes and 20 seconds.

Fixing “Access Denied” Error - 18 minutes 55 seconds

As per the error message, we ensured that the MySQL configuration was prepared correctly and that it included the correct user and password to connect to the database. This, unfortunately, did not work as expected. We investigated further and found that the role did not create the root user properly, even though it marked the step as completed. We did a short investigation but decided to make the manual fix instead of trying to debug the playbook, which would have taken way more time. We manually created the users root@ and root@localhost with correct passwords. This allowed us to pass this step and move on to another error:

TASK [ansible-role-XtraDB-Cluster : Start the master node] **********
skipping: [node1]
skipping: [node2]
skipping: [node3]

TASK [ansible-role-XtraDB-Cluster : Start the master node] **********
skipping: [node1]
skipping: [node2]
skipping: [node3]

TASK [ansible-role-XtraDB-Cluster : Create SST user] **********
skipping: [node1]
skipping: [node2]
skipping: [node3]

TASK [ansible-role-XtraDB-Cluster : Start the slave nodes] **********
fatal: [node3]: FAILED! => {"changed": false, "msg": "Unable to start service mysql: Job for mysql.service failed because the control process exited with error code. See \"systemctl status mysql.service\" and \"journalctl -xe\" for details.\n"}
fatal: [node2]: FAILED! => {"changed": false, "msg": "Unable to start service mysql: Job for mysql.service failed because the control process exited with error code. See \"systemctl status mysql.service\" and \"journalctl -xe\" for details.\n"}
fatal: [node1]: FAILED! => {"changed": false, "msg": "Unable to start service mysql: Job for mysql.service failed because the control process exited with error code. See \"systemctl status mysql.service\" and \"journalctl -xe\" for details.\n"}
        to retry, use: --limit @/root/pxcansible/pxcplay.retry

For this section we spent 18 minutes and 55 seconds.
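For reference, the manual root user fix described above could look roughly like the statements below; the remote host part ('%') and the passwords ('********') are placeholders, as the post does not state them.

```shell
# Write the fix statements to a file; '%' and '********' are placeholders
cat > fix_root_users.sql <<'EOF'
CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED BY '********';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
CREATE USER IF NOT EXISTS 'root'@'localhost' IDENTIFIED BY '********';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EOF

# Apply on each node (requires a running MySQL server):
# mysql -u root < fix_root_users.sql
cat fix_root_users.sql
```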

Fixing “Start the Slave Nodes” Issue (part 1) - 7 minutes 40 seconds

We tried a couple of things to solve this problem. We tried to specify the node using its name, we tried to switch group names - nothing solved the issue. We decided to clean up the environment using the script provided in the documentation and start from scratch. It did not clean things up, it just made them even worse. After 7 minutes and 40 seconds we decided to wipe out the virtual machines, recreate the environment and start from scratch, hoping that adding the Python dependencies up front would solve our issue.

Fixing “Start the Slave Nodes” Issue (part 2) - 13 minutes 15 seconds

Unfortunately, setting up the Python prerequisites did not help at all. We decided to finish the process manually, bootstrapping the first node and then configuring the SST user and starting the remaining slaves. This completed the “automated” setup; it took us 13 minutes and 15 seconds to debug and then finally accept that it would not work the way the playbook designer expected.
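The manual completion we fell back to can be sketched like this; the systemd unit names follow the Percona XtraDB Cluster 5.7 convention, and the SST user name and password are placeholders - nothing here is taken verbatim from the post.

```shell
# Sketch of the manual finish, written to a script for reference
cat > manual_finish.sh <<'EOF'
#!/bin/bash
# On the first node only: bootstrap a brand new cluster
systemctl start mysql@bootstrap.service

# Create the SST user referenced by wsrep_sst_auth (placeholder credentials)
mysql -u root -p -e "CREATE USER 'sstuser'@'localhost' IDENTIFIED BY '********';
GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';"

# On each remaining node: start MySQL and let it join via SST
systemctl start mysql
EOF
chmod +x manual_finish.sh
```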

Further Debugging - 10 minutes 45 seconds

We did not stop there and decided to try one more thing. Instead of relying on Ansible variables, we simply put the IP of one of the nodes in as the master node. This solved that part of the problem and we ended up with:

TASK [ansible-role-XtraDB-Cluster : Create SST user] **********
skipping: [node2]
skipping: [node3]
fatal: [node1]: FAILED! => {"changed": false, "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, u\"Access denied for user 'root'@'::1' (using password: YES)\")"}

This was the end of our attempts - we tried to add this user through the Ansible playbook but it did not work correctly, even though we could connect over the IPv6 localhost address when using the MySQL client.

Total Installation Time - Unknown (Automated Installation Failed)

In total we spent 64 minutes and we still hadn’t managed to get things going automatically. The remaining problems were the root password creation, which didn’t seem to work, and then getting the Galera Cluster started (the SST user issue). It is hard to tell how long it would take to debug further. It is surely possible - it is just hard to quantify, because it really depends on your experience with Ansible and MySQL. It is definitely not something anyone can just download, configure and run.

Well, maybe another playbook would have worked differently? Possibly, but it might just as well have resulted in different issues. Ok, so there is a learning curve to climb and debugging to do, but then, once you are all set, you just run a script. Well, that’s only sort of true - as long as changes introduced by the maintainer don’t break something you depend on, a new Ansible version doesn’t break the playbook, and the maintainer doesn’t simply forget about the project and stop developing it (for the role that we used there’s a quite useful pull request that has already been waiting for almost a year, which might solve the Python dependency issue - it has not been merged). Unless you accept that you will have to maintain this code, you cannot really rely on it being 100% accurate and working in your environment, especially given that the original developer has no incentive to keep the code up to date.

Also, what about other versions? You cannot use this particular playbook to install PXC 5.6 or any MariaDB version. Sure, there are other playbooks you can find. Will they work better, or will you spend another bunch of hours trying to make them work?

ClusterControl

Finally, let’s take a look at how ClusterControl can be used to deploy Percona XtraDB Cluster.

Configuring SSH Connectivity - 1 minute

ClusterControl requires SSH connectivity to all the nodes in order to connect and configure them. We generated an SSH key and manually distributed it across the nodes.

Setting Up ClusterControl - 3 minutes 15 seconds

A quick search for “ClusterControl install” pointed us to the relevant ClusterControl documentation page. We were looking for a “simpler way to install ClusterControl”, so we followed the link and found the following instructions.

Downloading the script and running it took 3 minutes and 15 seconds. We had to take some actions while the installation proceeded, so it is not an unattended installation.

Logging Into UI and Deployment Start - 1 minute 10 seconds

We pointed our browser to the IP of ClusterControl node.

We passed the required contact information and we were presented with the Welcome screen:

Next step - we picked the deployment option.

We had to pass SSH connectivity details.

We also decided on the vendor, version, password and hosts to use. This whole process took 1 minute and 10 seconds.

Percona XtraDB Cluster Deployment - 12 minutes 5 seconds

The only thing left was to wait for ClusterControl to finish the deployment. After 12 minutes and 5 seconds the cluster was ready:

Total Installation Time - 17 minutes 30 seconds

We managed to deploy ClusterControl and then a PXC cluster using ClusterControl in 17 minutes and 30 seconds. The PXC deployment itself took 12 minutes and 5 seconds. At the end we have a working cluster, deployed according to best practices. ClusterControl also ensures that the configuration of the cluster makes sense. In short, even if you don’t really know anything about MySQL or Galera Cluster, you can have a production-ready cluster deployed in a couple of minutes.

ClusterControl is not just a deployment tool, it is also a management platform - it makes it even easier for people not experienced with MySQL and Galera to identify performance problems (through advisors) and perform management actions (scaling the cluster up and down, running backups, creating asynchronous slaves off Galera). Importantly, ClusterControl will always be maintained and can be used to deploy all MySQL flavors (and not only MySQL/MariaDB - it also supports TimescaleDB, PostgreSQL and MongoDB). It also worked out of the box, something which cannot be said about the other methods we tested.

If you would like to experience the same, you can download ClusterControl for free. Let us know how you liked it.

Tags:  MySQL galera cluster MariaDB deployment automation clustercontrol

The MySQL Track (and More) at Percona Live 2019

This year we’re using a different concept for Percona Live conferences, which started at Percona Live Europe 2018 last fall. This is an approach practiced by many other organizations: having separate tracks for MySQL®, MongoDB®, MariaDB®, PostgreSQL, and more.

Having many tracks in this big Open Source Database Conference meant that one track steering committee could not be asked to handle all the talks for each track. So we formed several mini-committees to make sure that the submissions to each of the tracks received the right level of attention from the right kind of reviewers.

I had the honor of championing the MySQL track along with a mini-committee: Derek Downey from Pythian, Gillian Gunson from GitHub, Jeremy Cole from Shopify, Dave Stokes from Oracle, Shiv Iyer from WebScale, Calvin Sun from Huawei, and Kenny Gryp from Oracle.

This committee worked diligently to evaluate each and every submission that fell within the realm of MySQL.

MySQL Track by numbers

Committee members worked independently to review and grade the submissions, then collaborated on conference calls to discuss our thoughts and generate a shortlist. The shortlist was then reviewed by our very own Percona Product Management team.


On Tuesday, 28 May, half-day MySQL tutorials are available, offering content suitable for beginner, intermediate, and advanced attendees, as well as tutorials covering open source tools. In total, Percona Live offers 16 tutorials.

50 minute talks

We were allocated two rooms for the MySQL track, offering eighteen 50 minute talks. The track committee chose from 118 submissions, filtered from over 300 talks submitted in total, not including sponsored submissions. We aimed to cover a range of subjects: tooling, use case scenarios, edge cases, new releases, and more.

25 minute talks

We filled our eight slots of 25 minutes from more than forty submissions, along with a few that we agreed would be better converted from a 50 minute talk.

Some great talks were not selected…

Once the committee completed grading, we took a long look across the track to make sure that we offered a variety of topics and difficulty levels. It was a hard call, as there were several great talks we would have liked to include, but they were too similar to other, highly graded talks. We want everyone to get the best out of their time at Percona Live and avoid empty rooms, and offering a variety of talks is an important aspect of that. This is something we care a lot about here at Percona, from the perspective of both attendees and speakers.

Don’t miss out

Last but not least, if you still haven’t booked your trip, and a discount on your seat could persuade you, tweet to @ask_dba and I’ll see what I can rustle up. I look forward to seeing you in Austin, Texas.

MySQL 8.0.16 Introducing CHECK constraint

MySQL 8.0.16 introduces the SQL CHECK constraint feature. This is one of the most requested and long-awaited features for MySQL. This post describes the details of the feature. Let’s get started!


The CHECK constraint is a type of integrity constraint in SQL.…
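As a minimal illustration of the new syntax (the table, columns and constraint name below are invented for the example):

```sql
CREATE TABLE t1 (
    c1 INT,
    c2 INT,
    CONSTRAINT c2_positive CHECK (c2 > 0),
    CHECK (c1 <> c2)
);

INSERT INTO t1 VALUES (1, 2);   -- accepted
INSERT INTO t1 VALUES (1, -2);  -- rejected: check constraint 'c2_positive' is violated
```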


MySQL 8.0.16: how to validate JSON values in NoSQL with check constraint

As you may have noticed, MySQL 8.0.16 has been released today!

One of the major long-expected features is the support of CHECK constraints.

My colleague, Dave Stokes, already posted an article explaining how this works.

In this post, I wanted to show how we could take advantage of this new feature to validate JSON values.

Let’s take the following example:

So we have a collection of documents representing ratings from a user on some episodes. Now, I expect the value for the rating to be between 0 and 20.

Currently I could enter whatever value, even characters…

To avoid characters, I can already create a virtual column as integer:
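A rough sketch of such a virtual column, assuming the collection is backed by a table named ratings with its JSON stored in the doc column (the actual names in the post are not shown):

```sql
ALTER TABLE ratings
    ADD COLUMN rating INT
    GENERATED ALWAYS AS (JSON_UNQUOTE(JSON_EXTRACT(doc, '$.rating'))) VIRTUAL;
```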

So now, only integer value for rating should be allowed:

Perfect, but can I enter any integer value?

In fact, yes, of course! And that’s where the new CHECK constraints come into action!

We first need to modify the current document having a value for the rating attribute that won’t be valid for the new constraint.
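Under the same assumed names, fixing the offending document and adding the constraint might look like this:

```sql
-- Bring existing documents within the new range first
UPDATE ratings
   SET doc = JSON_SET(doc, '$.rating', 20)
 WHERE rating > 20;

-- Then enforce the range going forward
ALTER TABLE ratings
    ADD CONSTRAINT rating_range CHECK (rating >= 0 AND rating <= 20);
```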

And now we can test again:

Woohooo! A nice feature that also benefits the MySQL Document Store!

For the curious who want to see what the table looks like in its SQL definition:

Enjoy NoSQL with MySQL 8.0 Document Store #MySQL8isGreat.

MySQL Shell 8.0.16 for MySQL Server 8.0 and 5.7 has been released

Dear MySQL users,

MySQL Shell 8.0.16 is a maintenance release of MySQL Shell 8.0 Series (a
component of the MySQL Server). The MySQL Shell is provided under
Oracle’s dual-license.

MySQL Shell 8.0 is highly recommended for use with MySQL Server 8.0 and
5.7. Please upgrade to MySQL Shell 8.0.16.

MySQL Shell is an interactive JavaScript, Python and SQL console
interface, supporting development and administration for the MySQL
Server. It provides APIs implemented in JavaScript and Python that
enable you to work with MySQL InnoDB cluster and use MySQL as a document
store.

The AdminAPI enables you to work with MySQL InnoDB cluster, providing an
integrated solution for high availability and scalability using InnoDB
based MySQL databases, without requiring advanced MySQL expertise. For
more information about how to configure and work with MySQL InnoDB
cluster see

The X DevAPI enables you to create “schema-less” JSON document
collections and perform Create, Update, Read, Delete (CRUD) operations
on those collections from your favorite scripting language. For more
information about how to use MySQL Shell and the MySQL Document Store
support see

For more information about the X DevAPI see

If you want to write applications that use the CRUD-based X DevAPI
you can also use the latest MySQL Connectors for your language of
choice. For more information about Connectors see

For more information on the APIs provided with MySQL Shell see


Using MySQL Shell’s SQL mode you can communicate with servers using the
legacy MySQL protocol. Additionally, MySQL Shell provides partial
compatibility with the mysql client by supporting many of the same
command line options.

For full documentation on MySQL Server, MySQL Shell and related topics,

For more information about how to download MySQL Shell 8.0.16, see the
“Generally Available (GA) Releases” tab at

We welcome and appreciate your feedback and bug reports, see

Enjoy and thanks for the support!

Changes in MySQL Shell 8.0.16 (2019-04-25, General Availability)

   * Functionality Added or Changed
   * Bugs Fixed

Functionality Added or Changed

   * Important Change: Attempting to connect to an X Protocol port, 33060 by default, using the classic MySQL protocol resulted in the following error:

     ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0

     This was because of differences in X Protocol and classic MySQL protocol clients' expectations on how connections are initialized. Now, in such a situation the generated error message is:

     ERROR 2007 (HY000): Protocol mismatch; server version = 11, client version = 10

     If you encounter this error then you are probably trying to use the wrong port for the protocol your client is using. As part of this improvement the mysqlx_enable_hello_notice system variable has been added, which controls messages sent to classic MySQL protocol clients that try to connect over X Protocol. When enabled, clients which do not support X Protocol that attempt to connect to the server X Protocol port receive an error explaining they are using the wrong protocol. Set mysqlx_enable_hello_notice to false to permit clients which do not recognize the hello message to still connect.

   * MySQL Shell's upgrade checker utility can now check the configuration file (my.cnf or my.ini) for the server instance. The utility checks for any system variables that are defined in the configuration file but have been removed in the target MySQL Server release, and also for any system variables that are not defined in the configuration file and will have a different default value in the target MySQL Server release. For these checks, when you invoke checkForServerUpgrade(), you must provide the file path to the configuration file. If you omit the file path and the upgrade checker utility needs to run a check that requires the configuration file, that check fails with a message informing you that you must specify the file path. (Bug #27801824, Bug #29222179)

   * MySQL InnoDB cluster automatically and transparently manages the communication protocol versions of its members, whenever the cluster topology is changed using AdminAPI operations. An InnoDB cluster always uses the most recent communication protocol version that is supported by all instances that are part of the cluster or joining it.

     + When an instance is added to, removed from, or rejoins the cluster, or a rescan or reboot operation is carried out on the cluster, the communication protocol version is automatically set to a version supported by the instance that is now at the earliest MySQL Server version.

     + When you carry out a rolling upgrade by removing instances from the cluster, upgrading them, and adding them back into the cluster, the communication protocol version is automatically upgraded when the last remaining instance at the old MySQL Server version is removed from the cluster prior to its upgrade.

     To see the communication protocol version in use in an InnoDB cluster, use the Cluster.status() function with the 'extended' option enabled. The communication protocol version is returned in the 'GRProtocolVersion' field, provided that the cluster has quorum and no cluster members are unreachable.

   * MySQL Shell now has a framework and commands that you can use to set up and run reports to display live information from a MySQL server, such as status and performance information. Reports can be run once using the MySQL Shell \show command, or run then refreshed continuously in a MySQL Shell session using the \watch command. They can also be accessed as API functions in the shell.reports object. The reporting facility supports both built-in reports and user-defined reports. User-defined reports can be created in the supported scripting languages JavaScript and Python, and can be run in any MySQL Shell mode (JavaScript, Python, or SQL), regardless of the language that the report was written in. Reports can be saved in a folder in the MySQL Shell configuration path and automatically loaded at startup. You can also create a report directly in the MySQL Shell prompt. You register a report to MySQL Shell using the shell.registerReport method to provide information about the report and the options and arguments that it supports. For more information, see Reporting with MySQL Shell.

   * When running MySQL Shell in interactive mode, you can now execute an SQL statement without switching to SQL mode and back again afterwards. This function enables you to conveniently issue some SQL statements in the context of a longer AdminAPI workflow in JavaScript or Python mode. Use the \sql command immediately followed by the SQL statement, for example:

     \sql select * from limit 3;

     The SQL statement does not need any additional quoting, and the statement delimiter is optional. With this format, MySQL Shell does not switch mode as it would if you entered the \sql command. After the SQL statement has been executed, MySQL Shell remains in JavaScript or Python mode. You cannot use multiple line mode when you use the \sql command with a query to execute single SQL statements while another language is active. The command only accepts a single SQL query on a single line.

   * MySQL Shell history is now split per active language which the command was issued under. This means that your history now matches the active language, for example when you are running in JavaScript mode having issued \js, the history contains the previous JavaScript statements you issued, and when you issue \sql to change to SQL mode your history contains the previous SQL statements you issued. Similarly, now any history related commands such as \history clear or \history delete are performed on the history of the current active language. When you install this version, any existing MySQL Shell history files are duplicated to ensure that existing history is not lost. Subsequent operations are then added to the language specific history file.

   * The new autoRejoinTries option enables you to configure how many times an instance tries to rejoin a group after being expelled. In scenarios where network glitches happen but recover quickly, setting this option prevents you from having to manually add the expelled instance back to the group. The autoRejoinTries option accepts positive integer values between 0 and 2016 and the default value is 0, which means that instances do not try to automatically rejoin. Set the value to a valid integer to configure the number of attempts expelled instances should make to rejoin the group. You can pass the autoRejoinTries option to these AdminAPI operations:

     + dba.createCluster()
     + Cluster.addInstance()
     + Cluster.setOption()
     + Cluster.setInstanceOption()

     When you configure the autoRejoinTries option, it sets the group_replication_autorejoin_tries system variable. Passing the option to dba.createCluster(), Cluster.addInstance() or Cluster.setInstanceOption() configures the automatic rejoin for specific cluster instances. Passing the option to Cluster.setOption() configures the automatic rejoin for all cluster instances. For more information, see Responses to Failure Detection and Network Partitioning.

   * When resultFormat was set to json or json/raw, every result was being returned as a JSON document. This behavior was expected when JSON wrapping is off (in other words the --json command option was not used when starting MySQL Shell). Now, for consistency reasons when JSON wrapping is off and resultFormat is set to json or json/raw, every record is printed in a separate document, and statistics and warnings are printed in plain text. For example, if MySQL Shell is started without --json and resultFormat=json/raw:

     mysqlsh-sql> SHOW DATABASES;
     {"Database":"information_schema"}
     {"Database":"mysql"}
     {"Database":"performance_schema"}
     {"Database":"sys"}
     4 rows in set (0.0035 sec)

     If MySQL Shell is started with --json and with resultFormat=json/raw:

     mysqlsh-sql> SHOW DATABASES;
     {
         "hasData": true,
         "rows": [
             { "Database": "information_schema" },
             { "Database": "mysql" },
             { "Database": "performance_schema" },
             { "Database": "sys" }
         ],
         "executionTime": "0.0018 sec",
         "affectedRowCount": 0,
         "affectedItemsCount": 0,
         "warningCount": 0,
         "warningsCount": 0,
         "warnings": [],
         "info": "",
         "autoIncrementValue": 0
     }

   * AdminAPI now reports information about the version of MySQL running on instances. This information is available from the following operations:

     + Cluster.status()
     + Cluster.describe()
     + Cluster.rescan()

     See Checking the MySQL Version on Instances for more information.

Bugs Fixed

   * Removing an instance from a cluster when the instance to be removed had no user defined for the group_replication_recovery channel resulted in dropping users on the remaining instances of the cluster. (Bug #29617572)

   * MySQL Shell could be installed in an environment where Python was not present, but the application has a dependency on many standard Python modules, resulting in error messages at startup. The RPM and Debian packages for MySQL Shell now explicitly specify the dependency on Python. (Bug #29469201)

   * The MSI file that is used by Windows Installer to install MySQL Shell now adds the path to the application binary (mysqlsh) to the Windows PATH environment variable, so that the application can be started from a command prompt. (Bug #29457639)

   * In the instructions to build MySQL Shell from source (the INSTALL document), the required version of the optional V8 dependency has been updated from to. Thanks to Evgeniy Patlan for spotting this. (Bug #29430049, Bug #94529)

   * The failoverConsistency option has been deprecated and a new option named consistency has been added, to make it more consistent with the target Group Replication group_replication_consistency system variable name. The MySQL Shell online documentation now also correctly describes all of the values you can assign to the consistency option. (Bug #29356599)

   * The dba.configureLocalInstance() operation would remove any section that did not start with mysqld from the provided option file. This could remove sections such as the client section from the option file. (Bug #29349014)

   * MySQL Shell's upgrade checker utility checkForServerUpgrade() could incorrectly report a schema inconsistency error for a table whose name included a special character such as a hyphen. (Bug #29346836, Bug #94303)

   * When an instance with X Plugin disabled was added to an InnoDB cluster, if the instance was later removed from the cluster using Cluster.removeInstance() the operation failed with LogicError "get_string(7): field is NULL". This was a regression introduced by the fix for Bug #27677227. (Bug #29304183)

   * There was an inconsistency between the behavior of dba.checkInstanceConfiguration() and the commands to add instances to the cluster (dba.createCluster() and Cluster.addInstance()) regarding the localhost and loopback address validation. In particular, a simple error was printed by dba.checkInstanceConfiguration() but the execution of the operation continued, showing that everything was correct at the end of the operation, while an error was issued and the execution stopped for dba.createCluster() and Cluster.addInstance(). As part of fixing this issue, it was decided that the existing localhost and loopback address validations are no longer needed and should be removed. In particular, whatever address is specified for report_host, even if it is localhost or the loopback address, should be allowed, because it was explicitly specified by the user to use it. (Bug #29279941)

   * The dba.rebootClusterFromCompleteOutage() operation was not preserving the existing Group Replication configurations previously set for the instances. In particular, the Group Replication local address and exit state action values were being changed. Now all settings are read at the time of rebooting the cluster. (Bug #29265869)

   * On Windows, MySQL Shell's upgrade checker utility checkForServerUpgrade() incorrectly reported a schema inconsistency error for partitioned tables. (Bug #29256562)

   * Using either Cluster.setOption() or Cluster.setInstanceOption() to set an option which only exists in MySQL 8.0 on an instance running MySQL 5.7 was not being caught correctly. (Bug #29246657)

   * On Debian-based platforms (such as Ubuntu), if the hostname resolved to - which is the default on these platforms - it was not possible to create a cluster using the default settings. Now, in such situations a proper validation of the instance is performed before creating a cluster and adding instances to it. (Bug #29246110)

   * MySQL Shell stopped unexpectedly if Python code was running in interactive mode and threw exceptions from C++ libraries. These exceptions are now caught and translated to Python's built-in RuntimeError exceptions. (Bug #29057116)

   * The dba.checkInstanceConfiguration() operation did not validate host restrictions for the account provided for cluster administration, for example whether the account could actually connect to all of the instances in the cluster. In particular, now an error is issued if the provided user account is only able to connect through localhost. (Bug #29018457)

   * When a connection is specified using key-value pairs in MySQL Shell's shell.connect() method, the host name cannot be an empty string. MySQL Shell now handles this situation consistently and returns an error if the supplied host name is an empty string. (Bug #28899522)

   * InnoDB cluster configured auto_increment_increment and auto_increment_offset on instances for clusters running in multi-primary mode and consisting of up to 7 instances, based on the logic described at InnoDB cluster and Auto-increment. But Group Replication permits groups to contain up to 9 members, and Cluster.addInstance() and Cluster.removeInstance() were not following the logic used for other operations. Now, InnoDB cluster uses the same logic for auto increment regardless of the operation used and correctly handles multi-primary clusters with more than 7 instances. (Bug #28812763)

   * MySQL Shell's JSON import utility can now accept input from FIFO special files (named pipes) when you invoke the utility using the util.importJSON function, so you can carry out large imports by this method without needing to put the data into a file. (Bug #28785527)

   * When you use the MySQL Shell command \help (or \h, or \?) with a search pattern to search for help on a specific subject, multiple help topic titles can match the pattern and be returned as a list, to be selected by entering the command again with an extended search pattern. With this system, it was possible for help topics with a single-word title to be inaccessible from such a list because there was nothing further to add to the search pattern. To avoid this situation, the handling of multiple matches has now been improved. If a topic title is found that matches the given search pattern exactly (case-sensitive in the event of multiple topic matches, and case-insensitive in the event of no case-sensitive matches), the topic is identified as the exact match and its help data is printed. The rest of the topics with pattern matches in their titles are listed in a "see also" section and can be selected by further pattern matching. (Bug #28393119)

   * MySQL Shell uses the host value of the provided connection parameters as the target hostname used for AdminAPI operations, namely to register the instance in the metadata (for the dba.createCluster() and cluster.addInstance() operations). However, the host used for the connection parameters might not match the hostname that is used or reported by Group Replication, which uses the value of the report_host system variable when it is defined (in other words it is not NULL), otherwise the value of hostname is used. Therefore, AdminAPI now follows the same logic to register the target instance in the metadata and as the default value for the group_replication_local_address variable on instances, instead of using the host value from the instance connection parameters. During this fix it was detected that when the report_host variable was set to empty, Group Replication uses an empty value for the host, but AdminAPI (for example in commands such as dba.checkInstanceConfiguration(), dba.configureInstance(), dba.createCluster()) reports the hostname as the value used, which is inconsistent with the value reported by Group Replication. An error is now issued by AdminAPI if an empty value is set for the report_host system variable. (Bug #28285389)

   * In the event that dba.createCluster() failed and a rollback was performed to remove the created replication (recovery) users, the account created at localhost and any of the ipWhitelist addresses were not being removed.
The fix ensures that the replication accounts are removed whenever a rollback related to dba.createCluster() is performed. This work was based on a code contribution from Bin Hong. (Bug #94182, Bug #29308037)
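The report_host handling described above can be sketched as follows. This is a minimal illustration of the documented fallback rule (use report_host when defined, otherwise hostname, and reject an empty report_host), not MySQL Shell's actual implementation; the function name `effective_hostname` is chosen here for illustration.

```python
def effective_hostname(report_host, hostname):
    """Sketch of the Group Replication host choice that AdminAPI
    now mirrors: report_host wins when it is defined (not None),
    otherwise fall back to the hostname system variable.
    An empty report_host is rejected, matching the new AdminAPI error."""
    if report_host is not None:
        if report_host == "":
            raise ValueError("report_host is set to an empty value")
        return report_host
    return hostname

# effective_hostname("db1.example.com", "node1")  -> "db1.example.com"
# effective_hostname(None, "node1")               -> "node1"
# effective_hostname("", "node1")                 -> raises ValueError
```

The key point of the fix is that both the metadata registration and the default group_replication_local_address now derive from this value rather than from the host in the connection parameters.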
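For the auto-increment fix, the logic the MySQL manual describes for InnoDB cluster can be sketched as below. The exact constants (increment 1 / offset 2 in single-primary mode, increment 7 and a server_id-derived offset in multi-primary mode) are taken from the linked documentation and should be treated as assumptions here; the function name is hypothetical.

```python
def auto_increment_settings(multi_primary, server_id):
    """Sketch of the documented InnoDB cluster auto-increment logic.
    Single-primary clusters use auto_increment_increment=1 and
    auto_increment_offset=2; multi-primary clusters spread the
    sequence with auto_increment_increment=7 and an offset derived
    from server_id, so members do not generate colliding values."""
    if not multi_primary:
        return 1, 2
    return 7, 1 + server_id % 7

# A 9-member multi-primary group: each member gets an offset in 1..7,
# which is why operations had to apply the same rule consistently.
```

The bug was that Cluster.addInstance() and Cluster.removeInstance() diverged from this rule for multi-primary clusters larger than 7 members; the fix applies one rule everywhere.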


On Behalf of Oracle/MySQL Release Engineering Team,
Nawaz Nazeer Ahamed