Planet MySQL

Instrumenting Read Only Transactions in InnoDB

Probably not well known, but quite an important optimization was introduced in MySQL 5.6: reduced overhead for “read only transactions”. While by a “transaction” we usually mean a query or a group of queries that change data, with transactional engines like InnoDB every data read or write operation is a transaction.

Now, as a non-locking read operation obviously has less impact on the data, it does not need all the instrumenting overhead a write transaction has. The main thing that can be avoided, as described in the documentation, is the transaction ID. So, since MySQL 5.6, a read only transaction does not have a transaction ID. Moreover, such a transaction is not visible in the SHOW ENGINE INNODB STATUS output, though I will not go deeper into what that really means under the hood in this article. The fact is that this optimization allows for better scaling of workloads with many RO threads. An example RO benchmark, where the difference between 5.5 and 5.6/5.7 is clearly visible, may be found here: https://www.percona.com/blog/2016/04/07/mysql-5-7-sysbench-oltp-read-results-really-faster/

To benefit from this optimization in MySQL 5.6, either a transaction has to start with the explicit START TRANSACTION READ ONLY clause or it must be an autocommit, non-locking SELECT statement. In version 5.7 and newer, it goes further, as a new transaction is treated as read-only until a locking read or write is executed, at which point it gets “upgraded” to a read-write one.
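To make the rules concrete, here is a minimal sketch (reusing the db1.t1 table from the examples below) of the two ways a transaction can qualify:

-- Explicit read only transaction (MySQL 5.6 and newer):
START TRANSACTION READ ONLY;
SELECT count(*) FROM db1.t1;
COMMIT;

-- Autocommit, non-locking SELECT (no FOR UPDATE or LOCK IN SHARE MODE),
-- also treated as read only:
SET autocommit = 1;
SELECT count(*) FROM db1.t1;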

Information Schema Instrumentation

Let’s see how this looks (on MySQL 8.0.12) by looking at the information_schema.innodb_trx and information_schema.innodb_metrics tables. The second of these has the transaction counters disabled by default, so before the test we have to enable them with:

SET GLOBAL innodb_monitor_enable = 'trx%comm%';

or by adding a parameter to the [mysqld] section of the configuration file and restarting the instance:

[mysqld]
innodb_monitor_enable = "trx_%"
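
Either way, a quick sanity check that the counters are actually enabled can be done against the same innodb_metrics table used throughout this post (just a sketch; it lists the counters and their status):

SELECT name, status
  FROM information_schema.innodb_metrics
 WHERE name LIKE 'trx%comm%';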

Now, let’s start a transaction which should be read only according to the rules:

mysql [localhost] {msandbox} (db1) > START TRANSACTION; SELECT count(*) FROM db1.t1;
Query OK, 0 rows affected (0.00 sec)

+----------+
| count(*) |
+----------+
|        3 |
+----------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT trx_id,trx_weight,trx_rows_locked,trx_rows_modified,trx_is_read_only,trx_autocommit_non_locking FROM information_schema.innodb_trx\G
*************************** 1. row ***************************
                    trx_id: 421988493944672
                trx_weight: 0
           trx_rows_locked: 0
         trx_rows_modified: 0
          trx_is_read_only: 0
trx_autocommit_non_locking: 0
1 row in set (0.00 sec)

A transaction started as above does not appear in SHOW ENGINE INNODB STATUS, and its trx_id looks strangely high. And the first surprise: for some reason, trx_is_read_only is 0. Now, what if we commit such a transaction? How do the counters change? (I reset them before the test):

mysql [localhost] {msandbox} (db1) > commit;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT name, comment, status, count FROM information_schema.innodb_metrics WHERE name like 'trx%comm%';
+---------------------------+--------------------------------------------------------------------+---------+-------+
| name                      | comment                                                            | status  | count |
+---------------------------+--------------------------------------------------------------------+---------+-------+
| trx_rw_commits            | Number of read-write transactions committed                        | enabled |     0 |
| trx_ro_commits            | Number of read-only transactions committed                         | enabled |     1 |
| trx_nl_ro_commits         | Number of non-locking auto-commit read-only transactions committed | enabled |     0 |
| trx_commits_insert_update | Number of transactions committed with inserts and updates          | enabled |     0 |
+---------------------------+--------------------------------------------------------------------+---------+-------+
4 rows in set (0.01 sec)

OK, so clearly it was a read-only transaction overall, just the trx_is_read_only property wasn’t set as expected. I had to report this problem here: https://bugs.mysql.com/bug.php?id=92558

What about an explicit RO transaction:

mysql [localhost] {msandbox} (db1) > START TRANSACTION READ ONLY; SELECT count(*) FROM db1.t1;
Query OK, 0 rows affected (0.00 sec)

+----------+
| count(*) |
+----------+
|        3 |
+----------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT trx_id,trx_weight,trx_rows_locked,trx_rows_modified,trx_is_read_only,trx_autocommit_non_locking FROM information_schema.innodb_trx\G
*************************** 1. row ***************************
                    trx_id: 421988493944672
                trx_weight: 0
           trx_rows_locked: 0
         trx_rows_modified: 0
          trx_is_read_only: 1
trx_autocommit_non_locking: 0
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > commit;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT name, comment, status, count FROM information_schema.innodb_metrics WHERE name like 'trx%comm%';
+---------------------------+--------------------------------------------------------------------+---------+-------+
| name                      | comment                                                            | status  | count |
+---------------------------+--------------------------------------------------------------------+---------+-------+
| trx_rw_commits            | Number of read-write transactions committed                        | enabled |     0 |
| trx_ro_commits            | Number of read-only transactions committed                         | enabled |     2 |
| trx_nl_ro_commits         | Number of non-locking auto-commit read-only transactions committed | enabled |     0 |
| trx_commits_insert_update | Number of transactions committed with inserts and updates          | enabled |     0 |
+---------------------------+--------------------------------------------------------------------+---------+-------+
4 rows in set (0.01 sec)

OK, both transactions are counted as the same type. Moreover, the two transactions shared the same strange trx_id, which appears to be a fake one. For a simple read executed in autocommit mode, the counters increase as expected too:

mysql [localhost] {msandbox} (db1) > select @@autocommit; SELECT count(*) FROM db1.t1;
+--------------+
| @@autocommit |
+--------------+
|            1 |
+--------------+
1 row in set (0.00 sec)

+----------+
| count(*) |
+----------+
|        3 |
+----------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT name, comment, status, count FROM information_schema.innodb_metrics WHERE name like 'trx%comm%';
+---------------------------+--------------------------------------------------------------------+---------+-------+
| name                      | comment                                                            | status  | count |
+---------------------------+--------------------------------------------------------------------+---------+-------+
| trx_rw_commits            | Number of read-write transactions committed                        | enabled |     0 |
| trx_ro_commits            | Number of read-only transactions committed                         | enabled |     2 |
| trx_nl_ro_commits         | Number of non-locking auto-commit read-only transactions committed | enabled |     1 |
| trx_commits_insert_update | Number of transactions committed with inserts and updates          | enabled |     0 |
+---------------------------+--------------------------------------------------------------------+---------+-------+
4 rows in set (0.00 sec)

Now, let’s test how a transaction looks when we upgrade it to RW later:

mysql [localhost] {msandbox} (db1) > START TRANSACTION; SELECT count(*) FROM db1.t1;
Query OK, 0 rows affected (0.00 sec)

+----------+
| count(*) |
+----------+
|        3 |
+----------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT trx_id,trx_weight,trx_rows_locked,trx_rows_modified,trx_is_read_only,trx_autocommit_non_locking FROM information_schema.innodb_trx\G
*************************** 1. row ***************************
                    trx_id: 421988493944672
                trx_weight: 0
           trx_rows_locked: 0
         trx_rows_modified: 0
          trx_is_read_only: 0
trx_autocommit_non_locking: 0
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT count(*) FROM db1.t1 FOR UPDATE;
+----------+
| count(*) |
+----------+
|        3 |
+----------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT trx_id,trx_weight,trx_rows_locked,trx_rows_modified,trx_is_read_only,trx_autocommit_non_locking FROM information_schema.innodb_trx\G
*************************** 1. row ***************************
                    trx_id: 4106
                trx_weight: 2
           trx_rows_locked: 4
         trx_rows_modified: 0
          trx_is_read_only: 0
trx_autocommit_non_locking: 0
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > commit;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT name, comment, status, count FROM information_schema.innodb_metrics WHERE name like 'trx%comm%';
+---------------------------+--------------------------------------------------------------------+---------+-------+
| name                      | comment                                                            | status  | count |
+---------------------------+--------------------------------------------------------------------+---------+-------+
| trx_rw_commits            | Number of read-write transactions committed                        | enabled |     1 |
| trx_ro_commits            | Number of read-only transactions committed                         | enabled |     2 |
| trx_nl_ro_commits         | Number of non-locking auto-commit read-only transactions committed | enabled |     1 |
| trx_commits_insert_update | Number of transactions committed with inserts and updates          | enabled |     0 |
+---------------------------+--------------------------------------------------------------------+---------+-------+
4 rows in set (0.00 sec)

OK, as seen above, after a locking read was done, our transaction was transformed: it got a real, unique trx_id assigned. Then, when it was committed, the RW counter increased.

Performance Schema Problem

Nowadays it may feel natural to use performance_schema to monitor everything. And, indeed, we can monitor transaction types with it as well. Let’s enable the needed consumers and instruments:

mysql [localhost] {msandbox} (db1) > UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME LIKE '%transactions%';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 3  Changed: 0  Warnings: 0

mysql [localhost] {msandbox} (db1) > UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES' WHERE NAME = 'transaction';
Query OK, 0 rows affected (0.01 sec)
Rows matched: 1  Changed: 0  Warnings: 0

mysql [localhost] {msandbox} (db1) > SELECT * FROM performance_schema.setup_instruments WHERE NAME = 'transaction';
+-------------+---------+-------+------------+------------+---------------+
| NAME        | ENABLED | TIMED | PROPERTIES | VOLATILITY | DOCUMENTATION |
+-------------+---------+-------+------------+------------+---------------+
| transaction | YES     | YES   |            |          0 | NULL          |
+-------------+---------+-------+------------+------------+---------------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT * FROM performance_schema.setup_consumers WHERE NAME LIKE '%transactions%';
+----------------------------------+---------+
| NAME                             | ENABLED |
+----------------------------------+---------+
| events_transactions_current      | YES     |
| events_transactions_history      | YES     |
| events_transactions_history_long | YES     |
+----------------------------------+---------+
3 rows in set (0.01 sec)

mysql [localhost] {msandbox} (db1) > SELECT COUNT_STAR,COUNT_READ_WRITE,COUNT_READ_ONLY FROM performance_schema.events_transactions_summary_global_by_event_name\G
*************************** 1. row ***************************
      COUNT_STAR: 0
COUNT_READ_WRITE: 0
 COUNT_READ_ONLY: 0
1 row in set (0.00 sec)

And let’s do some simple tests:

mysql [localhost] {msandbox} (db1) > START TRANSACTION; COMMIT;
Query OK, 0 rows affected (0.01 sec)

Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT COUNT_STAR,COUNT_READ_WRITE,COUNT_READ_ONLY FROM performance_schema.events_transactions_summary_global_by_event_name\G
*************************** 1. row ***************************
      COUNT_STAR: 1
COUNT_READ_WRITE: 1
 COUNT_READ_ONLY: 0
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT name, comment, status, count FROM information_schema.innodb_metrics WHERE name like 'trx%comm%';
+---------------------------+--------------------------------------------------------------------+---------+-------+
| name                      | comment                                                            | status  | count |
+---------------------------+--------------------------------------------------------------------+---------+-------+
| trx_rw_commits            | Number of read-write transactions committed                        | enabled |     0 |
| trx_ro_commits            | Number of read-only transactions committed                         | enabled |     0 |
| trx_nl_ro_commits         | Number of non-locking auto-commit read-only transactions committed | enabled |     0 |
| trx_commits_insert_update | Number of transactions committed with inserts and updates          | enabled |     0 |
+---------------------------+--------------------------------------------------------------------+---------+-------+
4 rows in set (0.00 sec)

An empty transaction caused an increase of the RW counter in the Performance Schema view! Moreover, a simple autocommit select increases it too:

mysql [localhost] {msandbox} (db1) > SELECT count(*) FROM db1.t1;
+----------+
| count(*) |
+----------+
|        3 |
+----------+
1 row in set (0.01 sec)

mysql [localhost] {msandbox} (db1) > SELECT COUNT_STAR,COUNT_READ_WRITE,COUNT_READ_ONLY FROM performance_schema.events_transactions_summary_global_by_event_name\G
*************************** 1. row ***************************
      COUNT_STAR: 2
COUNT_READ_WRITE: 2
 COUNT_READ_ONLY: 0
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > START TRANSACTION READ ONLY; COMMIT;
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT COUNT_STAR,COUNT_READ_WRITE,COUNT_READ_ONLY FROM performance_schema.events_transactions_summary_global_by_event_name\G
*************************** 1. row ***************************
      COUNT_STAR: 3
COUNT_READ_WRITE: 2
 COUNT_READ_ONLY: 1
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (db1) > SELECT name, comment, status, count FROM information_schema.innodb_metrics WHERE name like 'trx%comm%';
+---------------------------+--------------------------------------------------------------------+---------+-------+
| name                      | comment                                                            | status  | count |
+---------------------------+--------------------------------------------------------------------+---------+-------+
| trx_rw_commits            | Number of read-write transactions committed                        | enabled |     0 |
| trx_ro_commits            | Number of read-only transactions committed                         | enabled |     0 |
| trx_nl_ro_commits         | Number of non-locking auto-commit read-only transactions committed | enabled |     1 |
| trx_commits_insert_update | Number of transactions committed with inserts and updates          | enabled |     0 |
+---------------------------+--------------------------------------------------------------------+---------+-------+
4 rows in set (0.01 sec)

As seen above, when it comes to monitoring transactions via Performance Schema, everything seems completely broken: empty transactions increase the counters, and the only way to increase the RO counter is to start a read-only transaction explicitly. But again, that should not be counted when no real read was done from a table. For this reason I filed another bug report: https://bugs.mysql.com/bug.php?id=92364

PMM Dashboard

We implemented a transactions information view in PMM, based on the information_schema.innodb_metrics table, which, as presented above, is reliable and shows the correct counters. Therefore, I encourage everyone to use the innodb_monitor_enable setting to enable the counters and have PMM graph them.

FOSDEM 2019

The FOSDEM organization just confirmed that, again this year, the ecosystem of your favorite database will have its Devroom!

More info to come soon, but save the dates: 2 & 3 February 2019 in Brussels!

It seems the MySQL & Friends Devroom (MariaDB, Percona, Oracle, and all the tools in the ecosystem) will be held on Saturday (to be confirmed).

Stay tuned!

Globalizing Player Accounts at Riot Games While Maintaining Availability

The Player Accounts team at Riot Games needed to consolidate the player account infrastructure and provide a single, global accounts system for the League of Legends player base. To do this, they migrated hundreds of millions of player accounts into a consolidated, globally replicated composite database cluster in AWS. This provided higher fault tolerance and lower latency access to account data. In this talk by Tyler Turk (Infrastructure Engineer, Riot Games), we discuss this effort to migrate eight disparate database clusters into AWS as a single composite database cluster replicated in four different AWS regions, provisioned with Terraform and managed and operated with Ansible.

Join us Tuesday, Nov 27, 1:45 – 2:45 PM

MySQL Books - 2018 has been a very good year

Someone once told me you can tell how healthy a software project is by the number of new books published about it each year. For the past few years, the MySQL community has been blessed with one or two books each year. Part of that was the major shift with the MySQL 8 changes, but part of it was that the vast majority of the changes were fairly minor and did not need detailed explanations. But this year we have been blessed with four new books. Four very good books on new facets of MySQL.

Introducing the MySQL 8 Document Store is the latest book from Dr. Charles Bell on MySQL. If you have read any of Dr. Chuck's other books, you know they are well written with lots of examples. This is more than a simple introduction, with many intermediate and advanced concepts covered in detail.

Introducing the MySQL 8 Document Store

MySQL & JSON - A Practical Programming Guide by yours truly is a guide for developers who want to get the most out of the JSON data type introduced in MySQL 5.7 and improved in MySQL 8. While I love MySQL's documentation, I wanted to provide detailed examples on how to use the various functions and features of the JSON data type.

MySQL and JSON A Practical Programming Guide
Jesper Wisborg Krogh is a busy man at work and somehow found the time to author and co-author two books.  The newest is MySQL Connector/Python Revealed: SQL and NoSQL Data Storage Using MySQL for Python Programmers which I have only just received.  If you are a Python Programmer (or want to be) then you need to order your copy today.  A few chapters in and I am already finding it a great, informative read.
MySQL Connector/Python Revealed
Jesper and Mikiya Okuno produced a definitive guide to the MySQL NDB cluster with Pro MySQL NDB Cluster.  NDB cluster is often confusing and just different enough from 'regular' MySQL to make you want to have a clear, concise guidebook by your side.  And this is that book.

Pro MySQL NDB Cluster
Recommendation

Each of these books has its own primary MySQL niche (Docstore, JSON, Python & Docstore, and NDB Cluster) but also greater depth, in that they cover material you either will not find in the documentation or would otherwise have to distill for yourself. They not only provide valuable tools for learning their primary facets of the technology but also do double duty as reference guides.



Build a Custom Toggle Switch with React

Building web applications usually involves making provisions for user interactions. One of the major ways of making provision for user interactions is through forms. Different form components exist for taking different kinds of input from the user. For example, a password component takes sensitive information from a user and masks the information so that it is not visible.

Most times, the information you need to get from a user is boolean-like - for example: yes or no, true or false, enable or disable, on or off, etc. Traditionally, the checkbox form component is used for getting these kinds of input. However, in modern interface designs, toggle switches are commonly used as checkbox replacements, although there are some accessibility concerns.

In this tutorial, we will see how to build a custom toggle switch component with React. By the end of the tutorial, we will have built a very simple demo React app that uses our custom toggle switch component.

Here is a demo of the final application we will be building in this tutorial.

Prerequisites

Before getting started, you need to ensure that you have Node already installed on your machine. I also recommend that you install the Yarn package manager, since we will be using it for package management instead of npm, which ships with Node. You can follow this Yarn installation guide to install Yarn on your machine.

We will create the boilerplate code for our React app using the create-react-app command-line package. You also need to ensure that it is installed globally on your machine. If you are using npm >= 5.2 then you may not need to install create-react-app as a global dependency since we can use the npx command.

Finally, this tutorial assumes that you are already familiar with React. If that is not the case, you can check the React Documentation to learn more about React.

Getting Started

Create a New Application

Start a new React application using the following command. You can name the application however you desire.

create-react-app react-toggle-switch

npm >= 5.2

If you are using npm version 5.2 or higher, it ships with an additional npx binary. Using the npx binary, you don't need to install create-react-app globally on your machine. You can start a new React application with this simple command:

npx create-react-app react-toggle-switch

Install Dependencies

Next, we will install the dependencies we need for our application. Run the following command to install the required dependencies.

yarn add lodash bootstrap prop-types classnames
yarn add -D npm-run-all node-sass-chokidar

We installed node-sass-chokidar as a development dependency for our application to enable us to use Sass. For more information about this, see this guide.

Modify the npm Scripts

Edit the package.json file and modify the scripts section to look like the following:

"scripts": { "start:js": "react-scripts start", "build:js": "react-scripts build", "start": "npm-run-all -p watch:css start:js", "build": "npm-run-all build:css build:js", "test": "react-scripts test --env=jsdom", "eject": "react-scripts eject", "build:css": "node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/", "watch:css": "npm run build:css && node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/ --watch --recursive" } Include Bootstrap CSS

We installed the bootstrap package as a dependency for our application since we will be needing some default styling. To include Bootstrap in the application, edit the src/index.js file and add the following line before every other import statement.

import "bootstrap/dist/css/bootstrap.min.css"; Start the Application

Start the application by running the following command with yarn:

yarn start

The application is now started and development can begin. Notice that a browser tab has been opened for you with live reloading functionality to keep in sync with changes in the application as you develop.

At this point, the application view should look like the following screenshot:

The ToggleSwitch Component

Create a new directory named components inside the src directory of your project. Next, create another new directory named ToggleSwitch inside the components directory. Next, create two new files inside src/components/ToggleSwitch, namely: index.js and index.scss.

Add the following content into the src/components/ToggleSwitch/index.js file.

/* src/components/ToggleSwitch/index.js */
import PropTypes from 'prop-types';
import classnames from 'classnames';
import isString from 'lodash/isString';
import React, { Component } from 'react';
import isBoolean from 'lodash/isBoolean';
import isFunction from 'lodash/isFunction';
import './index.css';

class ToggleSwitch extends Component {}

ToggleSwitch.propTypes = {
  theme: PropTypes.string,

  enabled: PropTypes.oneOfType([
    PropTypes.bool,
    PropTypes.func
  ]),

  onStateChanged: PropTypes.func
}

export default ToggleSwitch;

In this code snippet, we created the ToggleSwitch component and added typechecks for some of its props, which are described below (a brief usage sketch follows the list).

  • theme - is a string indicating the style and color to be used for the toggle switch.

  • enabled - can be either a boolean or a function that returns a boolean, and it determines the state of the toggle switch when it is rendered.

  • onStateChanged - is a callback function that will be called when the state of the toggle switch changes. This is useful for triggering actions on the parent component when the switch is toggled.
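
Based on these prop types alone, a parent component might use the switch along these lines (just a sketch; the inline handler is an arbitrary example, not part of the tutorial code):

<ToggleSwitch
  theme="default"
  enabled={true}
  onStateChanged={({ enabled }) => console.log('switch is now', enabled)}
/>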

Initializing the ToggleSwitch State

In the following code snippet, we initialize the state of the ToggleSwitch component and define some component methods for getting the state of the toggle switch.

/* src/components/ToggleSwitch/index.js */
class ToggleSwitch extends Component {

  state = { enabled: this.enabledFromProps() }

  isEnabled = () => this.state.enabled

  enabledFromProps() {
    let { enabled } = this.props;

    // If enabled is a function, invoke the function
    enabled = isFunction(enabled) ? enabled() : enabled;

    // Return enabled if it is a boolean, otherwise false
    return isBoolean(enabled) && enabled;
  }

}

Here, the enabledFromProps() method resolves the enabled prop that was passed in and returns a boolean indicating whether the switch should be enabled when it is rendered. If the enabled prop is a boolean, its value is returned as-is. If it is a function, the function is first invoked before determining whether the returned value is a boolean. Otherwise, it returns false.

Notice that we used the return value from enabledFromProps() to set the initial enabled state. Also, we have added the isEnabled() method to get the current enabled state.

Toggling the ToggleSwitch

Let's go ahead and add the method that actually toggles the switch when it is clicked.

/* src/components/ToggleSwitch/index.js */
class ToggleSwitch extends Component {

  // ...other class members here

  toggleSwitch = evt => {
    evt.persist();
    evt.preventDefault();

    const { onClick, onStateChanged } = this.props;

    this.setState({ enabled: !this.state.enabled }, () => {
      const state = this.state;

      // Augment the event object with SWITCH_STATE
      const switchEvent = Object.assign(evt, { SWITCH_STATE: state });

      // Execute the callback functions
      isFunction(onClick) && onClick(switchEvent);
      isFunction(onStateChanged) && onStateChanged(state);
    });
  }

}

Since this method will be triggered as a click event listener, we declared it with an evt parameter. First, the method toggles the current enabled state using the logical NOT (!) operator. When the state has been updated, it triggers the callback functions passed via the onClick and onStateChanged props.

Notice that since onClick expects an event as its first argument, we augmented the event with an additional SWITCH_STATE property containing the new state object. The onStateChanged callback, on the other hand, is called with just the new state object.
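
For instance, a parent could read the new state straight off the augmented event (a hedged sketch, not part of the tutorial's own code):

<ToggleSwitch
  theme="default"
  onClick={evt => console.log('new state:', evt.SWITCH_STATE.enabled)}
/>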

Rendering the ToggleSwitch

Finally, let's implement the render() method of the ToggleSwitch component.

/* src/components/ToggleSwitch/index.js */
class ToggleSwitch extends Component {

  // ...other class members here

  render() {
    const { enabled } = this.state;

    // Isolate special props and store the remaining as restProps
    const { enabled: _enabled, theme, onClick, className, onStateChanged, ...restProps } = this.props;

    // Use default as a fallback theme if a valid theme is not passed
    const switchTheme = (theme && isString(theme)) ? theme : 'default';

    const switchClasses = classnames(
      `switch switch--${switchTheme}`,
      className
    )

    const togglerClasses = classnames(
      'switch-toggle',
      `switch-toggle--${enabled ? 'on' : 'off'}`
    )

    return (
      <div className={switchClasses} onClick={this.toggleSwitch} {...restProps}>
        <div className={togglerClasses}></div>
      </div>
    )
  }

}

A lot is going on in this render() method - so let's try to break it down.

  1. First, the enabled state is destructured from the component state.

  2. Next, we destructure the component props and extract the restProps that will be passed down to the switch. This enables us to intercept and isolate the special props of the component.

  3. Next, we use classnames to construct the classes for the switch and the inner toggler, based on the theme and the enabled state of the component.

  4. Finally, we render the DOM elements with the appropriate props and classes. Notice that we passed in this.toggleSwitch as the click event listener on the switch.

Styling the ToggleSwitch

Now that we have the ToggleSwitch component and its required functionality, we will go ahead and write the styles for the toggle switch.

Add the following code snippet to the src/components/ToggleSwitch/index.scss file you created earlier:

/* src/components/ToggleSwitch/index.scss */

// DEFAULT COLOR VARIABLES
$ball-color: #ffffff;
$active-color: #62c28e;
$inactive-color: #cccccc;

// DEFAULT SIZING VARIABLES
$switch-size: 32px;
$ball-spacing: 2px;
$stretch-factor: 1.625;

// DEFAULT CLASS VARIABLE
$switch-class: 'switch-toggle';

/* SWITCH MIXIN */
@mixin switch($size: $switch-size, $spacing: $ball-spacing, $stretch: $stretch-factor, $color: $active-color, $class: $switch-class) {}

Here, we defined some default variables and created a switch mixin. In the next section, we will implement the mixin, but first, let's examine its parameters:

  • $size - The height of the switch element. It must have a length unit. It defaults to 32px.

  • $spacing - The space between the circular ball and the switch container. It must have a length unit. It defaults to 2px.

  • $stretch - A factor used to determine the extent to which the width of the switch element should be stretched. It must be a unitless number. It defaults to 1.625.

  • $color - The color of the switch when in active state. This must be a valid color value. Note that the circular ball is always white irrespective of this color.

  • $class - The base class for identifying the switch. This is used to dynamically create the state classes of the switch. It defaults to 'switch-toggle'. Hence, the default state classes are .switch-toggle--on and .switch-toggle--off.

Implementing the Switch Mixin

Here is the implementation of the switch mixin:

/* src/components/ToggleSwitch/index.scss */
@mixin switch($size: $switch-size, $spacing: $ball-spacing, $stretch: $stretch-factor, $color: $active-color, $class: $switch-class) {

  // SELECTOR VARIABLES
  $self: '.' + $class;
  $on: #{$self}--on;
  $off: #{$self}--off;

  // SWITCH VARIABLES
  $active-color: $color;
  $switch-size: $size;
  $ball-spacing: $spacing;
  $stretch-factor: $stretch;
  $ball-size: $switch-size - ($ball-spacing * 2);
  $ball-slide-size: ($switch-size * ($stretch-factor - 1) + $ball-spacing);

  // SWITCH STYLES
  height: $switch-size;
  width: $switch-size * $stretch-factor;
  cursor: pointer !important;
  user-select: none !important;
  position: relative !important;
  display: inline-block;

  &#{$on},
  &#{$off} {
    &::before,
    &::after {
      content: '';
      left: 0;
      position: absolute !important;
    }

    &::before {
      height: inherit;
      width: inherit;
      border-radius: $switch-size / 2;
      will-change: background;
      transition: background .4s .3s ease-out;
    }

    &::after {
      top: $ball-spacing;
      height: $ball-size;
      width: $ball-size;
      border-radius: $ball-size / 2;
      background: $ball-color !important;
      will-change: transform;
      transition: transform .4s ease-out;
    }
  }

  &#{$on} {
    &::before { background: $active-color !important; }
    &::after { transform: translateX($ball-slide-size); }
  }

  &#{$off} {
    &::before { background: $inactive-color !important; }
    &::after { transform: translateX($ball-spacing); }
  }

}

In this mixin, we start by setting some variables based on the parameters passed to the mixin. Then we go ahead and create the styles. Notice that we are using the ::after and ::before pseudo-elements to dynamically create the components of the switch: ::before creates the switch container, while ::after creates the circular ball.

Also notice how we constructed the state classes from the base class and assigned them to variables. The $on variable maps to the selector for the enabled state, while the $off variable maps to the selector for the disabled state.

We also ensured that the base class (.switch-toggle) must be used together with a state class (.switch-toggle--on or .switch-toggle--off) for the styles to be available. Hence, we used the &#{$on} and &#{$off} selectors.

Creating Themed Switches

Now that we have our switch mixin, we will continue to create some themed styles for the toggle switch. We will create two themes, namely: default and graphite-small.

Append the following code snippet to the src/components/ToggleSwitch/index.scss file.

/* src/components/ToggleSwitch/index.scss */
@function get-switch-class($selector) {
  // First parse the selector using `selector-parse`
  // Extract the first selector in the first list using `nth` twice
  // Extract the first simple selector using `simple-selectors` and `nth`
  // Extract the class name using `str-slice`
  @return str-slice(nth(simple-selectors(nth(nth(selector-parse($selector), 1), 1)), 1), 2);
}

.switch {
  $self: &;
  $toggle: #{$self}-toggle;
  $class: get-switch-class($toggle);

  // default theme
  &#{$self}--default > #{$toggle} {
    // Always pass the $class to the mixin
    @include switch($class: $class);
  }

  // graphite-small theme
  &#{$self}--graphite-small > #{$toggle} {
    // A smaller switch with a `gray` active color
    // Always pass the $class to the mixin
    @include switch($color: gray, $size: 20px, $class: $class);
  }
}

Here we first create a Sass function named get-switch-class that takes a $selector as its parameter. It runs the $selector through a chain of Sass functions and tries to extract the first class name. For example, if it receives:

  • .class-1 .class-2, .class-3 .class-4, it returns class-1.

  • .class-5.class-6 > .class-7.class-8, it returns class-5.

Next, we define styles for the .switch class. We dynamically set the toggle class to .switch-toggle and assign it to the $toggle variable. Notice that we assign the class name returned from the get-switch-class() function call to the $class variable. Finally, we include the switch mixin with the necessary parameters to create the theme classes.

Notice that the structure of the selector for the themed switch looks like this: &#{$self}--default > #{$toggle} (using the default theme as an example). Putting everything together, this means that the elements hierarchy should look like the following in order for the styles to be applied:

<!-- Use the default theme: switch--default -->
<element class="switch switch--default">
  <!-- The switch is in enabled state: switch-toggle--on -->
  <element class="switch-toggle switch-toggle--on"></element>
</element>

Here is a simple demo showing what the toggle switch themes look like:

Building the Sample App

Now that we have the ToggleSwitch React component with the required styling, let's go ahead and start creating the sample app we saw at the beginning.

Modify the src/App.js file to look like the following code snippet:

/* src/App.js */
import classnames from 'classnames';
import snakeCase from 'lodash/snakeCase';
import React, { Component } from 'react';
import Switch from './components/ToggleSwitch';
import './App.css';

// List of activities that can trigger notifications
const ACTIVITIES = [
  'News Feeds', 'Likes and Comments', 'Live Stream', 'Upcoming Events',
  'Friend Requests', 'Nearby Friends', 'Birthdays', 'Account Sign-In'
];

class App extends Component {

  // Initialize app state, all activities are enabled by default
  state = { enabled: false, only: ACTIVITIES.map(snakeCase) }

  toggleNotifications = ({ enabled }) => {
    const { only } = this.state;
    this.setState({ enabled, only: enabled ? only : ACTIVITIES.map(snakeCase) });
  }

  render() {
    const { enabled } = this.state;

    const headingClasses = classnames(
      'font-weight-light h2 mb-0 pl-4',
      enabled ? 'text-dark' : 'text-secondary'
    );

    return (
      <div className="App position-absolute text-left d-flex justify-content-center align-items-start pt-5 h-100 w-100">
        <div className="d-flex flex-wrap mt-5" style={{width: 600}}>
          <div className="d-flex p-4 border rounded align-items-center w-100">
            <Switch theme="default" className="d-flex" enabled={enabled} onStateChanged={this.toggleNotifications} />
            <span className={headingClasses}>Notifications</span>
          </div>
          {/* ...Notification options here... */}
        </div>
      </div>
    );
  }

}

export default App;

Here we initialize the ACTIVITIES constant with an array of activities that can trigger notifications. Next, we initialized the app state with two properties:

  • enabled - a boolean that indicates whether notifications are enabled.

  • only - an array that contains all the activities that are enabled to trigger notifications.

Notice that we used the snakeCase utility from Lodash to convert the activities to snakecase before updating the state. Hence, 'News Feeds' becomes 'news_feeds'.

Next, we defined the toggleNotifications() method that updates the app state based on the state it receives from the notification switch. This is used as the callback function passed to the onStateChanged prop of the toggle switch. Notice that when the app is enabled, all activities will be enabled by default, since the only state property is populated with all the activities.

Finally, we rendered the DOM elements for the app and left a slot for the notification options which will be added soon. At this point, the app should look like the following screenshot:

Next go ahead and look for the line that has this comment:

{/* ...Notification options here... */}

and replace it with the following content in order to render the notification options:

{
  enabled && (
    <div className="w-100 mt-5">
      <div className="container-fluid px-0">

        <div className="pt-5">
          <div className="d-flex justify-content-between align-items-center">
            <span className="d-block font-weight-bold text-secondary small">Email Address</span>
            <span className="text-secondary small mb-1 d-block">
              <small>Provide a valid email address with which to receive notifications.</small>
            </span>
          </div>
          <div className="mt-2">
            <input type="text" placeholder="mail@domain.com" className="form-control" style={{ fontSize: 14 }} />
          </div>
        </div>

        <div className="pt-5 mt-4">
          <div className="d-flex justify-content-between align-items-center border-bottom pb-2">
            <span className="d-block font-weight-bold text-secondary small">Filter Notifications</span>
            <span className="text-secondary small mb-1 d-block">
              <small>Select the account activities for which to receive notifications.</small>
            </span>
          </div>
          <div className="mt-5">
            <div className="row flex-column align-content-start" style={{ maxHeight: 180 }}>
              { this.renderNotifiableActivities() }
            </div>
          </div>
        </div>

      </div>
    </div>
  )
}

Notice here that we made a call to this.renderNotifiableActivities() to render the activities. Let's go ahead and implement this method and the other remaining methods.

Add the following methods to the App component.

/* src/App.js */
class App extends Component {

  toggleActivityEnabled = activity => ({ enabled }) => {
    let { only } = this.state;

    if (enabled && !only.includes(activity)) {
      only.push(activity);
      return this.setState({ only });
    }

    if (!enabled && only.includes(activity)) {
      only = only.filter(item => item !== activity);
      return this.setState({ only });
    }
  }

  renderNotifiableActivities() {
    const { only } = this.state;

    return ACTIVITIES.map((activity, index) => {
      const key = snakeCase(activity);
      const enabled = only.includes(key);

      const activityClasses = classnames(
        'small mb-0 pl-3',
        enabled ? 'text-dark' : 'text-secondary'
      );

      return (
        <div key={index} className="col-5 d-flex mb-3">
          <Switch theme="graphite-small" className="d-flex" enabled={enabled} onStateChanged={ this.toggleActivityEnabled(key) } />
          <span className={activityClasses}>{ activity }</span>
        </div>
      );
    })
  }

}

Here, we have implemented the renderNotifiableActivities method. We iterate through all the activities using ACTIVITIES.map() and render each one with its own toggle switch. Notice that these toggle switches use the graphite-small theme. We also detect the enabled state of each activity by checking whether it already exists in the only state variable.

Finally, we defined the toggleActivityEnabled method, which provides the callback function for the onStateChanged prop of each activity's toggle switch. We defined it as a higher-order function so that we can pass the activity as an argument and get back the callback function. It checks whether an activity is already enabled and updates the state accordingly.

Now the app should look like the following screenshot:

If you prefer to have all the activities disabled by default instead of enabled as shown in the initial screenshot, then you could make the following changes to the App component:

/* src/App.js */
class App extends Component {

  // Initialize app state, all activities are disabled by default
  state = { enabled: false, only: [] }

  toggleNotifications = ({ enabled }) => {
    const { only } = this.state;
    this.setState({ enabled, only: enabled ? only : [] });
  }

}

Accessibility Concerns

Using toggle switches in our applications instead of traditional checkboxes enables us to create neater interfaces, especially considering how difficult it is to style a traditional checkbox exactly the way we want.

However, using toggle switches instead of checkboxes raises some accessibility issues, since the user-agent may not be able to interpret the component's function correctly.

A few things can be done to improve the accessibility of the toggle switch and enable user-agents to understand the role correctly. For example, you can use the following ARIA attributes:

<switch-element tabindex="0" role="switch" aria-checked="true" aria-labelledby="#label-element"></switch-element>

You can also listen to more events on the toggle switch to create more ways the user can interact with the component.
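
For example, one way to support keyboard users would be to wire up a key handler alongside the click handler inside render(). The following is only a sketch of the idea, assuming the toggleSwitch method and the class variables shown earlier:

// A sketch: keyboard support for the switch (inside render())
const handleKeyDown = evt => {
  // Toggle on Space or Enter, mirroring native checkbox behaviour
  if (evt.key === ' ' || evt.key === 'Enter') {
    this.toggleSwitch(evt);
  }
};

return (
  <div
    className={switchClasses}
    role="switch"
    tabIndex={0}
    aria-checked={enabled}
    onClick={this.toggleSwitch}
    onKeyDown={handleKeyDown}
    {...restProps}
  >
    <div className={togglerClasses}></div>
  </div>
);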

Conclusion

In this tutorial, we created a custom toggle switch for our React applications with proper styling that supports different themes. We also saw how we can use it in our applications instead of traditional checkboxes, and the accessibility concerns involved.

For the complete source code of this tutorial, check out the react-toggle-switch-demo repository on GitHub. You can also see a live demo of this tutorial on Code Sandbox.

MySQL Replication Notes

MySQL Replication was my first project as a Database Administrator (DBA). I have been working with replication technologies for the last few years, and I am glad to contribute my little part to the development of this technology. MySQL supports different replication topologies, and a good understanding of the basic concepts will help you build and manage various complex topologies. I am writing here some of the key points to take care of when you are building MySQL replication. I consider this post a starting point for building high performance and consistent MySQL servers. Let me start with the key points below:
  • Hardware
  • MySQL Server Version
  • MySQL Server Configuration
  • Primary Key
  • Storage Engine

I will update this post with relevant points whenever I get time. I am trying to provide generic concepts that apply to all versions of MySQL; however, some of the concepts are newer and apply only to recent versions (> 5.0).
Hardware:
The slave must be resourced on par with (or better than) the master in order to keep up with it. The slave's resources include the following:
  • Disk I/O
  • Computation (vCPU)
  • InnoDB Buffer Pool (RAM)
MySQL 5.7 supports multi-threaded replication, but with the default settings applier threads are limited to one per database. In the case of heavy writes (multiple threads) on the master's databases, there is a chance that the slave will lag behind the master, since only one thread per database is applying the binary log on the slave and its writes are all serialized.

MySQL Version:
It is highly recommended that the master and slave servers run the same version. A different version of MySQL on the slave can affect SQL execution timings; for example, MySQL 8.0 is comparatively much faster than 5.5. It is also worth considering feature additions, removals, and modifications between versions.

MySQL Server Configuration:
The MySQL server configuration should be identical. We may have identical hardware resources and the same MySQL version, but if MySQL is not configured to utilize the available resources in the same way, there will be differences in execution plans. For example, the InnoDB buffer pool size should be configured to utilize the available memory: even with identical hardware, the buffer pool must be configured at each MySQL instance.

Primary Key:
The primary key plays an important role in row-based replication (when binlog_format is either ROW or MIXED). Most often, a slave lagging behind the master while applying RBR events is due to the lack of a primary key on the table involved. When no primary key is defined, for each affected row on the master, the entire row image has to be compared on a row-by-row basis against the matching table's data on the slave. This can be explained by how a transaction is performed on the master and slave depending on the availability of a primary key:

With a primary key:
  On the master: the row is uniquely identified.
  On the slave: each row is uniquely identified, and changes can be quickly applied to the appropriate row images.

Without a primary key:
  On the master: any available key is used, or a full table scan is performed.
  On the slave: the entire row image is compared on a row-by-row basis against the matching table's data. Row-by-row scans can be very expensive and time consuming, and cause the slave to lag behind the master.

When there is no primary key defined on a table, InnoDB internally generates a hidden clustered index named GEN_CLUST_INDEX containing row ID values. MySQL replication cannot use this hidden primary key for sort operations, because these hidden row IDs are unique to each MySQL instance and are not consistent between a master and a slave. The best solution is to ensure all tables have a primary key. When there is no unique NOT NULL key available on a table, at least create an auto-incrementing integer column (surrogate key) as the primary key. If it is not possible to create a primary key on all such tables immediately, there is a workaround to bridge the gap for a short period of time by changing the slave rows search algorithm. That is beyond the scope of this post; I will write a future post on this topic.

Mixing of Storage Engines:
MySQL replication supports different storage engines on the master and slave servers, but there are a few important considerations when mixing storage engines. It should be noted that InnoDB is a transactional storage engine and MyISAM is non-transactional.

On Rollback: If binlog_format is STATEMENT and a transaction updates InnoDB and MyISAM tables and then performs a ROLLBACK, only the InnoDB tables' data is removed. When the statement is written to the binary log and sent to a slave where both tables are MyISAM, the slave will not perform the ROLLBACK, since MyISAM does not support transactions. This leaves the table inconsistent with the master.

Auto-increment columns: It should be noted that auto-increment is implemented differently in MyISAM and InnoDB: MyISAM locks the entire table to generate the auto-increment value, and when the auto-increment column is part of a composite key, insert operations on a MyISAM table are marked as unsafe. Refer to this page for a better understanding: https://dev.mysql.com/doc/refman/8.0/en/replication-features-auto-increment.html

Referential Integrity Constraints: InnoDB supports foreign keys and MyISAM does not. Cascading update and delete operations on InnoDB tables on the master replicate to the slave only if the tables are InnoDB on both the master and the slave. This is true for both STATEMENT and ROW based replication. Refer to this page for an explanation: https://dev.mysql.com/doc/refman/5.7/en/innodb-and-mysql-replication.html

Locking: InnoDB performs row-level locking, MyISAM performs table-level locking, and all transactions on the slave are executed in a serialized manner. This will negatively impact slave performance and can end up with the slave lagging behind the master.

Logging: MyISAM is a non-transactional storage engine, and its statements are logged into the binary log by the client thread immediately after execution, but before the locks are released. If the query is part of a transaction that also involves an InnoDB table, and the InnoDB statement is executed before the MyISAM query, the MyISAM statement will not be written to the binary log immediately after execution; it will wait for either the commit or the rollback. This is done to ensure the order of execution is the same on the slave as on the master.
Transactions on InnoDB tables are written to the binary log only when the transaction is committed. It is highly advisable to use a transactional storage engine in MySQL replication; mixing storage engines may lead to inconsistency and performance issues between the master and slave servers. Though MySQL does not produce any warnings about this, it should be noted and taken care of on our end. Also, MySQL 8.0 continuing with InnoDB as the default storage engine (as it has been since 5.6) and deprecating older non-transactional features indicates the future of the MySQL database: it is going to be completely transactional, and it is recommended to use the InnoDB storage engine. There is discussion online about the removal of other storage engines and Oracle focusing development on the InnoDB engine; though that is not in the scope of this article, as a Database Administrator I prefer having different storage engines for different use cases, and that has been a unique feature of MySQL. I hope this post is useful; please share your thoughts and feedback in the comments section.

MySQL 8: Performance Schema Digests Improvements

Since MySQL 5.6, the digest feature of the MySQL Performance Schema has provided a convenient and effective way to obtain statistics of queries based on their normalized form. The feature works so well that it has almost completely (from my experience) replaced the connector extensions and proxy for collecting query statistics for the Query Analyzer (Quan) in MySQL Enterprise Monitor (MEM).

MySQL 8 adds further improvements to the digest feature in the Performance Schema including a sample query with statistics for each digest, percentile information, and a histogram summary. This blog will explore these new features.

MySQL Enterprise Monitor is one of the main users of the Performance Schema digests for its Query Analyzer.

Let’s start out looking at the good old summary by digest table.

Query Sample

The base table for digest summary information is the events_statements_summary_by_digest table, which has been around since MySQL 5.6. In MySQL 8.0 it has been extended with six columns, of which three contain data related to a sample query; these will be examined in this section.

The three sample columns are:

  • QUERY_SAMPLE_TEXT: An actual example of a query.
  • QUERY_SAMPLE_SEEN: When the sample query was seen.
  • QUERY_SAMPLE_TIMER_WAIT: How long the sample query took to execute (in picoseconds).

As an example consider the query SELECT * FROM world.city WHERE id = <value>. The sample information for that query as well as the digest and digest text (normalized query) may look like:

mysql> SELECT DIGEST, DIGEST_TEXT, QUERY_SAMPLE_TEXT, QUERY_SAMPLE_SEEN,
              sys.format_time(QUERY_SAMPLE_TIMER_WAIT) AS SampleTimerWait
         FROM performance_schema.events_statements_summary_by_digest
        WHERE DIGEST_TEXT LIKE '%`world` . `city`%'\G
*************************** 1. row ***************************
           DIGEST: 9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366
      DIGEST_TEXT: SELECT * FROM `world` . `city` WHERE `ID` = ?
QUERY_SAMPLE_TEXT: SELECT * FROM world.city WHERE ID = 130
QUERY_SAMPLE_SEEN: 2018-10-09 17:19:20.500944
  SampleTimerWait: 17.34 ms
1 row in set (0.00 sec)

There are a few things to note here:

  • The digest in MySQL 8 is a SHA-256 hash, whereas in 5.6 and 5.7 it was an MD5 hash.
  • The digest text is similar to the normalized query that the mysqldumpslow script can generate for queries in the slow query log; just that the Performance Schema uses a question mark as a placeholder.
  • The QUERY_SAMPLE_SEEN value is in the system time zone.
  • The sys.format_time() function is used in the query to convert the picoseconds into a human readable value.

The maximum length of the sample text is set with the performance_schema_max_sql_text_length option. The default is 1024 bytes. It is the same option that is used for the SQL_TEXT columns in the statement events tables. It requires a restart of MySQL to change the value. Since the query texts are stored in several contexts and some of the Performance Schema tables can have thousands of rows, do take care not to increase it beyond what you have memory for.
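
If you do decide to change it, it goes in the configuration file; a minimal sketch (2048 is just an example value, not a recommendation):

[mysqld]
performance_schema_max_sql_text_length = 2048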

How is the sample query chosen? The sample is the slowest example of a query with the given digest. However, if the performance_schema_max_digest_sample_age option is set to a non-zero value (the default is 60 seconds) and the existing sample is older than the specified value, it will always be replaced.
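
The option is a regular system variable, so it can be inspected with SHOW GLOBAL VARIABLES and, as it is documented as dynamic, changed with SET GLOBAL (300 here is just an example value):

SHOW GLOBAL VARIABLES LIKE 'performance_schema_max_digest_sample_age';
SET GLOBAL performance_schema_max_digest_sample_age = 300;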

The events_statements_summary_by_digest table also has another set of new columns: percentile information.

Percentile Information

Since the beginning, the events_statements_summary_by_digest table has included some statistical information about the query times for a given digest: the minimum, average, maximum, and total query time. In MySQL 8 this has been extended to include information about the 95th, 99th, and 99.9th percentiles. The information is available in the QUANTILE_95, QUANTILE_99, and QUANTILE_999 columns respectively. All of the values are in picoseconds.

What do the new columns mean? Based on the histogram information for the query (see the next section), MySQL calculates a high estimate of the query time. For a given digest, 95% of the executed queries are expected to be faster than the query time given by QUANTILE_95; similarly for the two other columns.

As an example consider the same digest as before:

mysql> SELECT DIGEST, DIGEST_TEXT, QUERY_SAMPLE_TEXT,
              sys.format_time(SUM_TIMER_WAIT) AS SumTimerWait,
              sys.format_time(MIN_TIMER_WAIT) AS MinTimerWait,
              sys.format_time(AVG_TIMER_WAIT) AS AvgTimerWait,
              sys.format_time(MAX_TIMER_WAIT) AS MaxTimerWait,
              sys.format_time(QUANTILE_95) AS Quantile95,
              sys.format_time(QUANTILE_99) AS Quantile99,
              sys.format_time(QUANTILE_999) AS Quantile999
         FROM performance_schema.events_statements_summary_by_digest
        WHERE DIGEST_TEXT LIKE '%`world` . `city`%'\G
*************************** 1. row ***************************
           DIGEST: 9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366
      DIGEST_TEXT: SELECT * FROM `world` . `city` WHERE `ID` = ?
QUERY_SAMPLE_TEXT: SELECT * FROM world.city WHERE ID = 130
     SumTimerWait: 692.77 ms
     MinTimerWait: 50.32 us
     AvgTimerWait: 68.92 us
     MaxTimerWait: 17.34 ms
       Quantile95: 104.71 us
       Quantile99: 165.96 us
      Quantile999: 363.08 us
1 row in set (0.00 sec)

Having the 95th, 99th, and 99.9th percentiles helps predict the performance of a query and shows the spread of the query times. Even more information about the spread can be found using the new family member: histograms.

Histograms

Histograms are a way to put the query execution times into buckets, so that it is possible to see how the query execution times are spread. This can, for example, be useful to see how consistent the query times are. The average query time may look fine, but if that average is based on some queries executing super fast and others very slowly, it will still result in unhappy users and customers.

The MAX_TIMER_WAIT column of the events_statements_summary_by_digest table discussed so far shows the high watermark, but it does not say whether it is a single outlier or the result of generally varying query times. The histograms give the answer to this.

Using the query digest from earlier in the blog, the histogram information for the query can be found in the events_statements_histogram_by_digest table like:

mysql> SELECT BUCKET_NUMBER,
              sys.format_time(BUCKET_TIMER_LOW) AS TimerLow,
              sys.format_time(BUCKET_TIMER_HIGH) AS TimerHigh,
              COUNT_BUCKET, COUNT_BUCKET_AND_LOWER, BUCKET_QUANTILE
         FROM performance_schema.events_statements_histogram_by_digest
        WHERE DIGEST = '9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366'
          AND COUNT_BUCKET > 0
        ORDER BY BUCKET_NUMBER;
+---------------+-----------+-----------+--------------+------------------------+-----------------+
| BUCKET_NUMBER | TimerLow  | TimerHigh | COUNT_BUCKET | COUNT_BUCKET_AND_LOWER | BUCKET_QUANTILE |
+---------------+-----------+-----------+--------------+------------------------+-----------------+
|            36 | 50.12 us  | 52.48 us  |          524 |                    524 |        0.052400 |
|            37 | 52.48 us  | 54.95 us  |         2641 |                   3165 |        0.316500 |
|            38 | 54.95 us  | 57.54 us  |          310 |                   3475 |        0.347500 |
|            39 | 57.54 us  | 60.26 us  |          105 |                   3580 |        0.358000 |
|            40 | 60.26 us  | 63.10 us  |           48 |                   3628 |        0.362800 |
|            41 | 63.10 us  | 66.07 us  |         3694 |                   7322 |        0.732200 |
|            42 | 66.07 us  | 69.18 us  |          611 |                   7933 |        0.793300 |
|            43 | 69.18 us  | 72.44 us  |          236 |                   8169 |        0.816900 |
|            44 | 72.44 us  | 75.86 us  |          207 |                   8376 |        0.837600 |
|            45 | 75.86 us  | 79.43 us  |          177 |                   8553 |        0.855300 |
|            46 | 79.43 us  | 83.18 us  |          236 |                   8789 |        0.878900 |
|            47 | 83.18 us  | 87.10 us  |          186 |                   8975 |        0.897500 |
|            48 | 87.10 us  | 91.20 us  |          203 |                   9178 |        0.917800 |
|            49 | 91.20 us  | 95.50 us  |          116 |                   9294 |        0.929400 |
|            50 | 95.50 us  | 100.00 us |          135 |                   9429 |        0.942900 |
|            51 | 100.00 us | 104.71 us |          105 |                   9534 |        0.953400 |
|            52 | 104.71 us | 109.65 us |           65 |                   9599 |        0.959900 |
|            53 | 109.65 us | 114.82 us |           65 |                   9664 |        0.966400 |
|            54 | 114.82 us | 120.23 us |           59 |                   9723 |        0.972300 |
|            55 | 120.23 us | 125.89 us |           40 |                   9763 |        0.976300 |
|            56 | 125.89 us | 131.83 us |           34 |                   9797 |        0.979700 |
|            57 | 131.83 us | 138.04 us |           33 |                   9830 |        0.983000 |
|            58 | 138.04 us | 144.54 us |           27 |                   9857 |        0.985700 |
|            59 | 144.54 us | 151.36 us |           16 |                   9873 |        0.987300 |
|            60 | 151.36 us | 158.49 us |           25 |                   9898 |        0.989800 |
|            61 | 158.49 us | 165.96 us |           20 |                   9918 |        0.991800 |
|            62 | 165.96 us | 173.78 us |            9 |                   9927 |        0.992700 |
|            63 | 173.78 us | 181.97 us |           11 |                   9938 |        0.993800 |
|            64 | 181.97 us | 190.55 us |           11 |                   9949 |        0.994900 |
|            65 | 190.55 us | 199.53 us |            4 |                   9953 |        0.995300 |
|            66 | 199.53 us | 208.93 us |            6 |                   9959 |        0.995900 |
|            67 | 208.93 us | 218.78 us |            6 |                   9965 |        0.996500 |
|            68 | 218.78 us | 229.09 us |            6 |                   9971 |        0.997100 |
|            69 | 229.09 us | 239.88 us |            3 |                   9974 |        0.997400 |
|            70 | 239.88 us | 251.19 us |            2 |                   9976 |        0.997600 |
|            71 | 251.19 us | 263.03 us |            2 |                   9978 |        0.997800 |
|            72 | 263.03 us | 275.42 us |            2 |                   9980 |        0.998000 |
|            73 | 275.42 us | 288.40 us |            4 |                   9984 |        0.998400 |
|            74 | 288.40 us | 302.00 us |            2 |                   9986 |        0.998600 |
|            75 | 302.00 us | 316.23 us |            2 |                   9988 |        0.998800 |
|            76 | 316.23 us | 331.13 us |            1 |                   9989 |        0.998900 |
|            78 | 346.74 us | 363.08 us |            3 |                   9992 |        0.999200 |
|            79 | 363.08 us | 380.19 us |            2 |                   9994 |        0.999400 |
|            80 | 380.19 us | 398.11 us |            1 |                   9995 |        0.999500 |
|            83 | 436.52 us | 457.09 us |            1 |                   9996 |        0.999600 |
|           100 | 954.99 us | 1.00 ms   |            1 |                   9997 |        0.999700 |
|           101 | 1.00 ms   | 1.05 ms   |            1 |                   9998 |        0.999800 |
|           121 | 2.51 ms   | 2.63 ms   |            1 |                   9999 |        0.999900 |
|           162 | 16.60 ms  | 17.38 ms  |            1 |                  10000 |        1.000000 |
+---------------+-----------+-----------+--------------+------------------------+-----------------+
49 rows in set (0.02 sec)

In this example, for 3694 of the query's executions (the COUNT_BUCKET column) the query time was between 63.10 microseconds and 66.07 microseconds, so the execution time matched the interval of bucket number 41. There has been a total of 7322 executions (the COUNT_BUCKET_AND_LOWER column) of the query with a query time of 66.07 microseconds or less. This means that 73.22% (the BUCKET_QUANTILE column) of the queries have a query time of 66.07 microseconds or less.
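Because BUCKET_QUANTILE is cumulative, the histogram can also be used to approximate latency percentiles directly. A small sketch that finds the first bucket covering the 95th percentile for this digest (with the data above it would return bucket 51, i.e. roughly 104.71 us):

SELECT BUCKET_NUMBER,
       sys.format_time(BUCKET_TIMER_HIGH) AS TimerHigh,
       BUCKET_QUANTILE
  FROM performance_schema.events_statements_histogram_by_digest
 WHERE DIGEST = '9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366'
   AND BUCKET_QUANTILE >= 0.95
 ORDER BY BUCKET_NUMBER
 LIMIT 1;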

In addition to the columns shown, there are SCHEMA_NAME and DIGEST columns (which together with BUCKET_NUMBER form a unique key). For each digest there are 450 buckets, with the width of each bucket (the difference between the low and high timers) gradually becoming larger. The first, middle, and last five buckets are:

mysql> SELECT BUCKET_NUMBER, sys.format_time(BUCKET_TIMER_LOW) AS TimerLow, sys.format_time(BUCKET_TIMER_HIGH) AS TimerHigh FROM performance_schema.events_statements_histogram_by_digest WHERE DIGEST = '9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366' AND (BUCKET_NUMBER < 5 OR BUCKET_NUMBER > 444 OR BUCKET_NUMBER BETWEEN 223 AND 227); +---------------+-----------+-----------+ | BUCKET_NUMBER | TimerLow | TimerHigh | +---------------+-----------+-----------+ | 0 | 0 ps | 10.00 us | | 1 | 10.00 us | 10.47 us | | 2 | 10.47 us | 10.96 us | | 3 | 10.96 us | 11.48 us | | 4 | 11.48 us | 12.02 us | | 223 | 275.42 ms | 288.40 ms | | 224 | 288.40 ms | 302.00 ms | | 225 | 302.00 ms | 316.23 ms | | 226 | 316.23 ms | 331.13 ms | | 227 | 331.13 ms | 346.74 ms | | 445 | 2.11 h | 2.21 h | | 446 | 2.21 h | 2.31 h | | 447 | 2.31 h | 2.42 h | | 448 | 2.42 h | 2.53 h | | 449 | 2.53 h | 30.50 w | +---------------+-----------+-----------+ 15 rows in set (0.02 sec)

The bucket thresholds are fixed and thus the same for all digests. There is also a global histogram in the events_statements_histogram_global table.
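The global table has the same bucket layout, just without the SCHEMA_NAME and DIGEST columns, so the same approach gives a server-wide percentile estimate; a sketch for the 99th percentile:

SELECT BUCKET_NUMBER,
       sys.format_time(BUCKET_TIMER_HIGH) AS TimerHigh,
       BUCKET_QUANTILE
  FROM performance_schema.events_statements_histogram_global
 WHERE BUCKET_QUANTILE >= 0.99
 ORDER BY BUCKET_NUMBER
 LIMIT 1;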

This concludes the introduction to the new Performance Schema digest features. As monitoring tools start to use this information, it will help create a better monitoring experience. The histograms in particular will benefit from being shown as graphs.

Announcement: Second Alpha Build of Percona XtraBackup 8.0 Is Available

The second alpha build of Percona XtraBackup 8.0.2 is now available in the Percona experimental software repositories.

Note that, due to the new MySQL redo log and data dictionary formats, the Percona XtraBackup 8.0.x versions will only be compatible with MySQL 8.0.x and Percona Server for MySQL 8.0.x. This release supports backing up Percona Server 8.0 Alpha.

For experimental migrations from earlier database server versions, you will need to back up and restore using XtraBackup 2.4 and then use mysql_upgrade from MySQL 8.0.x.

PXB 8.0.2 alpha is available for the following platforms:

  • RHEL/Centos 6.x
  • RHEL/Centos 7.x
  • Ubuntu 14.04 Trusty*
  • Ubuntu 16.04 Xenial
  • Ubuntu 18.04 Bionic
  • Debian 8 Jessie*
  • Debian 9 Stretch

Information on how to configure the Percona repositories for apt and yum systems and access the Percona experimental software is here.

* We might drop these platforms before GA release.

Improvements
  • PXB-1658: Import keyring vault plugin from Percona Server 8
  • PXB-1609: Make version_check optional at build time
  • PXB-1626: Support encrypted redo logs
  • PXB-1627: Support obtaining binary log coordinates from performance_schema.log_status
Fixed Bugs
  • PXB-1634: The CREATE TABLE statement could fail with the DUPLICATE KEY error
  • PXB-1643: Memory issues reported by ASAN in PXB 8
  • PXB-1651: Buffer pool dump could create a (null) file during the prepare stage of MySQL 8.0.12 data
  • PXB-1671: A backup could fail when the MySQL user was not specified
  • PXB-1660: InnoDB: Log block N at lsn M has valid header, but checksum field contains Q, should be P

Other bugs fixed: PXB-1623, PXB-1648, PXB-1669, PXB-1639, and PXB-1661.

You’re not storing sensitive data in your database. Seriously?

At technology events, I often ask attendees if they’re storing sensitive data in MySQL. Only a few hands go up. Then, I rephrase and ask, “how many of you would be comfortable if your database tables were exposed on the Internet?” Imagine how it would be perceived by your customers, your manager, your employees or your board of directors. Once again, “how many of you are storing sensitive data in MySQL?” Everyone.

TWO MAXIMS:

1.) You are storing sensitive data.

Even if it’s truly meaningless data, you can’t afford for your company to be perceived as loose with data security. If you look closely at your data, however, you’ll likely realize that it could be exploited. Does it include any employee info, server IP addresses or internal routing information?

A recent article by Lisa Vaas from Naked Security highlights a spate of data leaks from poorly configured MongoDB instances.

Here we Mongo again! Millions of records exposed by insecure database

What’s striking is that these leaks didn’t include credit cards, social security numbers or so-called sensitive data. Nevertheless, companies are vulnerable to ransomware and diminished customer trust.

2.) Your data will be misplaced, eventually.

Employees quit, servers get decommissioned, but database tables persist. Your tables are passed among developers, DBAs and support engineers. They are moved between bare metal, VMs and public cloud providers. Given enough time, your data will end up in a place it shouldn’t be.

Often people don’t realize that their binary data is easily exposed. Take any binary data file, for example, and run the Linux strings utility against it. On a Linux command line, just type “strings filename”. You’ll see your data scroll across the screen in readable text.

ENCRYPT MYSQL DATA

Two years ago, MySQL developers had to change their applications to encrypt data. Now, transparent data encryption in MySQL 5.7 and 8.0 requires no application changes. With Oracle’s version of MySQL, there’s little performance overhead once the data is encrypted.

Below are a few simple steps to encrypt your data in MySQL 8.0. This process relies on a keyring file. This won’t meet compliance requirements (see KEY MANAGEMENT SYSTEMS below), but it’s a good first step.

  1. Check your version of MySQL. It should be MySQL 5.7 or 8.0.
  2. Pre-load the plugin in your my.cnf: early-plugin-load = keyring_file.so
  3. Execute the following queries:
  • INSTALL PLUGIN keyring_udf SONAME 'keyring_udf.so';
  • CREATE FUNCTION keyring_key_generate RETURNS INTEGER SONAME 'keyring_udf.so';
  • SELECT keyring_key_generate('alongpassword', 'DSA', 256);
  • ALTER TABLE titles ENCRYPTION = 'Y';
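To verify which tables ended up encrypted, one option is to check the table options in information_schema (a sketch; the ENCRYPTION flag shows up in CREATE_OPTIONS):

SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS
FROM information_schema.TABLES
WHERE CREATE_OPTIONS LIKE '%ENCRYPTION%';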

Per the documentation warning: The keyring_file and keyring_encrypted_file plugins are not intended as regulatory compliance solutions. Security standards such as PCI, FIPS, and others require use of key management systems to secure, manage, and protect encryption keys in key vaults or hardware security modules (HSMs).

KEY MANAGEMENT SYSTEMS (KMS)

Credit card and data privacy regulations require that keys are restricted and rotated. If your company collects payment information, it’s likely that your organization already has a key management system (KMS). These systems are usually software or hardware appliances used strictly for managing your corporate encryption keys. MySQL Enterprise Edition includes a plugin for communicating directly with the KMS. MySQL is compatible with Oracle Key Vault, SafeNet KeySecure, Thales Vormetric Key Management and Fornetix Key Orchestration.

Introduction to Oracle Key Vault

In summary, reconsider if you believe that you’re not storing sensitive data. If using MySQL, capabilities in the latest releases make it possible to encrypt data without changing your application. At the very least, encrypt your data with the key file method (above). Ideally, however, investigate a key management system to also meet regulatory requirements.

Persistence of autoinc fixed in MySQL 8.0

The release of MySQL 8.0 has brought a lot of bold implementations that touched on things that have been avoided before, such as added support for common table expressions and window functions. Another example is the change in how AUTO_INCREMENT (autoinc) sequences are persisted, and thus replicated.

This new implementation carries the fix for bug #73563 (Replace result in auto_increment value less or equal than max value in row-based), which we only found out about recently. The surprising part is that the use case we were analyzing is a somewhat common one; this must be affecting a good number of people out there.

Understanding the bug

The business logic of the use case is such that the UNIQUE column of a table whose id is managed by an AUTO_INCREMENT sequence needs to be updated, and this is done with a REPLACE operation:

“REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted.”

So, what happens in practice in this particular case is a DELETE followed by an INSERT of the target row.

We will explore this scenario here in the context of an oversimplified currency converter application that uses USD as base reference:

CREATE TABLE exchange_rate ( id INT PRIMARY KEY AUTO_INCREMENT, currency VARCHAR(3) UNIQUE, rate FLOAT(5,3) ) ENGINE=InnoDB;

Let’s add a trio of rows to this new table:

INSERT INTO exchange_rate (currency,rate) VALUES ('EUR',0.854), ('GBP',0.767), ('BRL',4.107);

which gives us the following initial set:

master (test) > select * from exchange_rate; +----+----------+-------+ | id | currency | rate  | +----+----------+-------+ |  1 | EUR      | 0.854 | |  2 | GBP      | 0.767 | |  3 | BRL      | 4.107 | +----+----------+-------+ 3 rows in set (0.00 sec)

Now we update the rate for Brazilian Reais using a REPLACE operation:

REPLACE INTO exchange_rate SET currency='BRL', rate=4.500;

With currency being a UNIQUE field, the row is fully replaced:

master (test) > select * from exchange_rate; +----+----------+-------+ | id | currency | rate  | +----+----------+-------+ |  1 | EUR      | 0.854 | |  2 | GBP      | 0.767 | |  4 | BRL      | 4.500 | +----+----------+-------+ 3 rows in set (0.00 sec)

and thus the autoinc sequence is updated:

master (test) > SHOW CREATE TABLE exchange_rate\G *************************** 1. row ***************************      Table: exchange_rate Create Table: CREATE TABLE `exchange_rate` ( `id` int(11) NOT NULL AUTO_INCREMENT, `currency` varchar(3) DEFAULT NULL, `rate` float(5,3) DEFAULT NULL, PRIMARY KEY (`id`), UNIQUE KEY `currency` (`currency`) ) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1 1 row in set (0.00 sec)

The problem is that the autoinc sequence is not updated on the replica as well:

slave1 (test) > select * from exchange_rate;show create table exchange_rate\G +----+----------+-------+ | id | currency | rate  | +----+----------+-------+ |  1 | EUR      | 0.854 | |  2 | GBP      | 0.767 | |  4 | BRL      | 4.500 | +----+----------+-------+ 3 rows in set (0.00 sec) *************************** 1. row ***************************      Table: exchange_rate Create Table: CREATE TABLE `exchange_rate` ( `id` int(11) NOT NULL AUTO_INCREMENT, `currency` varchar(3) DEFAULT NULL, `rate` float(5,3) DEFAULT NULL, PRIMARY KEY (`id`), UNIQUE KEY `currency` (`currency`) ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1 1 row in set (0.00 sec)

Now, the moment we promote that replica to master and start writing to this table, we’ll hit a duplicate key error:

slave1 (test) > REPLACE INTO exchange_rate SET currency='BRL', rate=4.600; ERROR 1062 (23000): Duplicate entry '4' for key 'PRIMARY'

Note that:

a) the transaction fails and the row is not replaced; however, the autoinc sequence is incremented:

slave1 (test) > SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE table_schema='test' AND table_name='exchange_rate'; +----------------+ | AUTO_INCREMENT | +----------------+ |              5 | +----------------+ 1 row in set (0.00 sec)

b) this problem only happens with row-based replication (binlog_format=ROW), where REPLACE in this case is logged as a row UPDATE:

# at 6129 #180829 18:29:55 server id 100  end_log_pos 5978 CRC32 0x88da50ba Update_rows: table id 117 flags: STMT_END_F ### UPDATE `test`.`exchange_rate` ### WHERE ###   @1=3 /* INT meta=0 nullable=0 is_null=0 */ ###   @2='BRL' /* VARSTRING(3) meta=3 nullable=1 is_null=0 */ ###   @3=4.107                /* FLOAT meta=4 nullable=1 is_null=0 */ ### SET ###   @1=4 /* INT meta=0 nullable=0 is_null=0 */ ###   @2='BRL' /* VARSTRING(3) meta=3 nullable=1 is_null=0 */ ###   @3=4.5                  /* FLOAT meta=4 nullable=1 is_null=0 */

With statement-based replication—or even mixed format—the REPLACE statement is replicated as is: it will trigger a DELETE+INSERT in the background on the replica and thus update the autoinc sequence in the same way it did on the master.

This example (tested with Percona Server versions 5.5.61, 5.6.36 and 5.7.22) helps illustrate the issue with autoinc sequences not being persisted as they should be with row-based replication. However, MySQL’s Worklog #6204 includes a couple of scarier scenarios involving the master itself, such as when the server crashes while a transaction is writing to a table similar to the one used in the example above. MySQL 8.0 remedies this bug.

Workarounds

There are a few possible workarounds to consider if this problem is impacting you and neither upgrading to the 8 series nor resorting to statement-based or mixed replication format is a viable option.

We’ll be discussing three of them here: one that revolves around executing checks before a failover (to detect and fix autoinc inconsistencies in replicas); another that requires reviewing all REPLACE statements like the one from our example and adapting them to include the id field, thus avoiding the bug; and finally one that requires changing the schema of affected tables so that the target field is made the Primary Key of the table while id (autoinc) is converted into a UNIQUE key.

a) Detect and fix

The least intrusive of the workarounds we conceived for the problem at hand, in terms of query and schema changes, is to run a check on each of the tables that might be facing this issue on a replica before we promote it to master in a failover scenario:

slave1 (test) > SELECT ((SELECT MAX(id) FROM exchange_rate)>=(SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE table_schema='test' AND table_name='exchange_rate')) as `check`; +-------+ | check | +-------+ |     1 | +-------+ 1 row in set (0.00 sec)

If the table does not pass the test, like ours didn’t at first (just before we attempted a REPLACE after we failed over to the replica), then update autoinc accordingly. The full routine (check + update of autoinc) could be made into a single stored procedure:

DELIMITER //
CREATE PROCEDURE CheckAndFixAutoinc()
BEGIN
  DECLARE done TINYINT UNSIGNED DEFAULT 0;
  DECLARE tableschema VARCHAR(64);
  DECLARE tablename VARCHAR(64);
  DECLARE columnname VARCHAR(64);
  DECLARE cursor1 CURSOR FOR SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME FROM information_schema.COLUMNS WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys') AND EXTRA LIKE '%auto_increment%';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1;
  OPEN cursor1;
  start_loop: LOOP
    IF done THEN
      LEAVE start_loop;
    END IF;
    FETCH cursor1 INTO tableschema, tablename, columnname;
    SET @get_autoinc = CONCAT('SELECT @check1 := ((SELECT MAX(', columnname, ') FROM ', tableschema, '.', tablename, ')>=(SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA=\'', tableschema, '\' AND TABLE_NAME=\'', tablename, '\')) as `check`');
    PREPARE stm FROM @get_autoinc;
    EXECUTE stm;
    DEALLOCATE PREPARE stm;
    IF @check1>0 THEN
      BEGIN
        SET @select_max_id = CONCAT('SELECT @max_id := MAX(', columnname, ')+1 FROM ', tableschema, '.', tablename);
        PREPARE select_max_id FROM @select_max_id;
        EXECUTE select_max_id;
        DEALLOCATE PREPARE select_max_id;
        SET @update_autoinc = CONCAT('ALTER TABLE ', tableschema, '.', tablename, ' AUTO_INCREMENT=', @max_id);
        PREPARE update_autoinc FROM @update_autoinc;
        EXECUTE update_autoinc;
        DEALLOCATE PREPARE update_autoinc;
      END;
    END IF;
  END LOOP start_loop;
  CLOSE cursor1;
END//
DELIMITER ;
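Usage is then a single call on the promoted replica, before it starts taking writes:

CALL CheckAndFixAutoinc();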

It doesn’t allow for as clean a failover as we would like, but it can be helpful if you’re stuck with MySQL<8.0 and binlog_format=ROW and cannot make changes to your queries or schema.

b) Include Primary Key in REPLACE statements

If we had explicitly included the id (Primary Key) in the REPLACE operation from our example it would have also been replicated as a DELETE+INSERT even when binlog_format=ROW:

master (test) > REPLACE INTO exchange_rate SET currency='BRL', rate=4.500, id=3; # at 16151 #180905 13:32:17 server id 100  end_log_pos 15986 CRC32 0x1d819ae9  Write_rows: table id 117 flags: STMT_END_F ### DELETE FROM `test`.`exchange_rate` ### WHERE ###   @1=3 /* INT meta=0 nullable=0 is_null=0 */ ###   @2='BRL' /* VARSTRING(3) meta=3 nullable=1 is_null=0 */ ###   @3=4.107                /* FLOAT meta=4 nullable=1 is_null=0 */ ### INSERT INTO `test`.`exchange_rate` ### SET ###   @1=3 /* INT meta=0 nullable=0 is_null=0 */ ###   @2='BRL' /* VARSTRING(3) meta=3 nullable=1 is_null=0 */ ###   @3=4.5                  /* FLOAT meta=4 nullable=1 is_null=0 */ # at 16199 #180905 13:32:17 server id 100  end_log_pos 16017 CRC32 0xf11fed56  Xid = 184 COMMIT/*!*/;

We could point out that we were doing it wrong by not having the id included in the REPLACE statement in the first place; the reason for not doing so would mostly be to avoid an extra lookup for each replace (to obtain the id for the currency we want to update). On the other hand, what if your business logic does expect the id to change with each REPLACE? You should keep such a requirement in mind when considering this workaround, as it is effectively a functional change to what we had initially.

c) Make the target field the Primary Key and keep autoinc as a UNIQUE key

If we make currency the Primary Key of our table and id a UNIQUE key instead:

CREATE TABLE exchange_rate ( id INT UNIQUE AUTO_INCREMENT, currency VARCHAR(3) PRIMARY KEY, rate FLOAT(5,3) ) ENGINE=InnoDB;

the same REPLACE operation will be replicated as a DELETE+INSERT too:

# at 19390 #180905 14:03:56 server id 100  end_log_pos 19225 CRC32 0x7042dcd5  Write_rows: table id 131 flags: STMT_END_F ### DELETE FROM `test`.`exchange_rate` ### WHERE ###   @1=3 /* INT meta=0 nullable=0 is_null=0 */ ###   @2='BRL' /* VARSTRING(3) meta=3 nullable=0 is_null=0 */ ###   @3=4.107                /* FLOAT meta=4 nullable=1 is_null=0 */ ### INSERT INTO `test`.`exchange_rate` ### SET ###   @1=4 /* INT meta=0 nullable=0 is_null=0 */ ###   @2='BRL' /* VARSTRING(3) meta=3 nullable=0 is_null=0 */ ###   @3=4.5                  /* FLOAT meta=4 nullable=1 is_null=0 */ # at 19438 #180905 14:03:56 server id 100  end_log_pos 19256 CRC32 0x79efc619  Xid = 218 COMMIT/*!*/;

Of course, the same would be true if we had just removed id entirely from the table and kept currency as the Primary Key. This would work in our particular test example, but that won’t always be the case. Please note, though, that if you do keep id in the table you must make it a UNIQUE key: this workaround is based on the fact that this key becomes a second unique constraint, which triggers a different code path to log a replace operation. Had we made it a simple, non-unique key instead, that wouldn’t be the case.

If you have any comments or suggestions about the issue addressed in this post, the workarounds we propose, or even a different view of the problem you would like to share, please leave a comment in the section below.

Co-Author: Trey Raymond

Trey Raymond is a Sr. Database Engineer for Oath Inc. (née Yahoo!), specializing in MySQL. Since 2010, he has worked to build the company’s database platform and supporting team into industry leaders.

While a performance guru at heart, his experience and responsibilities range from hardware and capacity planning all through the stack to database tool and utility development.

He has a reputation for breaking things to learn something new.

Co-Author: Fernando Laudares

Fernando is a Senior Support Engineer with Percona. Fernando’s work experience includes the architecture, deployment and maintenance of IT infrastructures based on Linux, open source software and a layer of server virtualization. He’s now focusing on the universe of MySQL, MongoDB and PostgreSQL with a particular interest in understanding the intricacies of database systems, and contributes regularly to this blog. You can read his other articles here.

Reduce MySQL Memory Utilization With ProxySQL Multiplexing

MySQL Adventures: Reduce MySQL Memory Utilization With ProxySQL Multiplexing

In our previous post, we explained how max_prepared_stmt_count can bring production down. This blog is a continuation of that post. You can read that blog at the link below.

How max_prepared_stmt_count bring down the production MySQL system

We had set max_prepared_stmt_count to 20000. But after that, we kept facing the below error continuously.

Can't create more than max_prepared_stmt_count statements (current value: 20000)

We tried increasing it to 25000, 30000 and finally 50000. But unfortunately that didn’t fix it, and increasing this value further leads to the memory leak which we explained in our previous blog.

We are using ProxySQL to access the database servers, and the architecture looks like the diagram below.

Multiplexing in ProxySQL: The main purpose of multiplexing is to reduce the number of connections to the MySQL servers, so thousands of frontend connections can be served by only a hundred backend connections.

We enabled multiplexing while setting up the database environment. But multiplexing does not work in all cases. ProxySQL tracks the transactions executing on each connection. If a transaction has not been committed or rolled back, ProxySQL will not use that connection for the next request; it will pick another free connection from the connection pool.
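One way to see multiplexing at work is to compare frontend traffic with the backend connection pool from the ProxySQL admin interface; a sketch using ProxySQL's stats schema:

SELECT hostgroup, srv_host, srv_port, status, ConnUsed, ConnFree
FROM stats_mysql_connection_pool;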

From the ProxySQL doc, there are a few scenarios where multiplexing is disabled:

  • An active transaction
  • Tables are locked
  • SET queries (like SET FOREIGN_KEY_CHECKS)

In our case, most of the errors were due to the prepared statement count. Believe it or not, this issue also led us to reduce memory utilization.

Get the currently active prepared statements:

Run the below query, which tells us the number of active prepared statements in the backend.

SELECT
*
FROM
stats_mysql_global
WHERE
variable_name LIKE '%stmt%';
+---------------------------+----------------+
| Variable_Name | Variable_Value |
+---------------------------+----------------+
| Com_backend_stmt_prepare | 168318911 |
| Com_backend_stmt_execute | 143165882 |
| Com_backend_stmt_close | 0 |
| Com_frontend_stmt_prepare | 171609010 |
| Com_frontend_stmt_execute | 171234713 |
| Com_frontend_stmt_close | 171234006 |
| Stmt_Client_Active_Total | 19983 |
| Stmt_Client_Active_Unique | 4 |
| Stmt_Server_Active_Total | 84 |
| Stmt_Server_Active_Unique | 23 |
| Stmt_Max_Stmt_id | 10002 |
| Stmt_Cached | 2186 |
+---------------------------+----------------+
# You can get the same results in MySQL also
mysql> show status like 'Prepared_stmt_count';
+---------------------+-------+
| Variable_name | Value |
+---------------------+-------+
| Prepared_stmt_count | 19983 |
+---------------------+-------+

You can see the number of active prepared statements is 19983. Running the query again and again, I saw a varying count, but it was always above 19000. And you can see that Com_backend_stmt_close is 0.

Yes, ProxySQL never closes prepared statements in the backend right away. Instead, there is a mechanism in ProxySQL that allocates 20 prepared statements (20 is the default value) to each connection. Once all 20 statements have been executed, the connection goes back to the connection pool and all 20 statements are closed in one shot.

Run the below query to get the default statement count for a connection.

proxysql> show variables like "%stmt%";
+--------------------------------+-------+
| Variable_name | Value |
+--------------------------------+-------+
| mysql-max_stmts_per_connection | 20 |
| mysql-max_stmts_cache | 10000 |
+--------------------------------+-------+

There is a great explanation of this variable by René Cannaò, the founder of ProxySQL. You can read about it here.
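If the default of 20 turns out to be too low for a workload, the variable can be changed like any other ProxySQL variable from the admin interface; a sketch (the value 50 is just an example, tune with care):

UPDATE global_variables
SET variable_value = '50'
WHERE variable_name = 'mysql-max_stmts_per_connection';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;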

Why are so many prepared statements running?

As mentioned earlier, ProxySQL never closes the prepared statements in the backend right away. We realized that we were getting heavy traffic on both ProxySQL servers, and both send it to one master node. Also, Laravel issues all of its queries in prepared statement format. That's why we were getting this many prepared statements.

Find where most of the prepared statements are used:

Run the below query in ProxySQL; it gives you the total count of executed prepared statements per database and username.

SELECT
username, schemaname, count(*)
FROM
stats_mysql_prepared_statements_info
GROUP BY
1, 2
ORDER BY
3 DESC;
+----------+---------------+----------+
| username | schemaname | count(*) |
+----------+---------------+----------+
| DBname | user1 | 2850 |
| DBname | user2 | 257 |
| DBname | user3 | 108 |
| DBname | user1 | 33 |
| DBname | user2 | 33 |
| DBname | user1 | 16 |
| DBname | user1 | 15 |
+----------+---------------+----------+

# There is a bug in this view: the username column is actually showing the schemaname and the schemaname column is showing usernames. I have reported this bug in ProxySQL's GitHub repo: https://github.com/sysown/proxysql/issues/1720

Force ProxySQL to use multiplexing:

There are a few cases where ProxySQL will disable multiplexing. In particular, all queries that have @ in their query digest will disable multiplexing.

Collect the queries which have @:

SELECT
DISTINCT digest, digest_text
FROM
stats_mysql_query_digest
WHERE
digest_text LIKE '%@%'\G
*************************** 1. row ***************************
digest: 0xA8F2FFB14312850C
digest_text: SELECT @@max_allowed_packet *************************** 2. row ***************************
digest: 0x7CDEEF2FF695B7F8
digest_text: SELECT @@session.tx_isolation *************************** 3. row ***************************
digest: 0x2B838C3B5DE79958
digest_text: SELECT @@session.auto_increment_increment AS auto_increment_increment, @@character_set_client AS character_set_client, @@character_set_connection AS character_set_connection, @@character_set_results AS character_set_results, @@character_set_server AS character_set_server, @@init_connect AS init_connect, @@interactive_timeout AS interactive_timeout, @@license AS license, @@lower_case_table_names AS lower_case_table_names, @@max_allowed_packet AS max_allowed_packet, @@net_buffer_length AS net_buffer_length, @@net_write_timeout AS net_write_timeout, @@query_cache_size AS query_cache_size, @@query_cache_type AS query_cache_type, @@sql_mode AS sql_mode, @@system_time_zone AS system_time_zone, @@time_zone AS time_zone, @@tx_isolation AS tx_isolation, @@wait_timeout AS wait_timeout

Finally, the above statements are executed as prepared statements, and by default these queries disable multiplexing. But ProxySQL has another cool feature: we can allow these queries (which have @) to keep using multiplexing.

Run the below queries to make ProxySQL keep multiplexing enabled for these statements. We can insert a rule by query pattern or by query digest.

# Add multiplexing to a query using query text
INSERT INTO mysql_query_rules
(active, match_digest, multiplex)
VALUES
('1', '^SELECT @@session.tx_isolation', 2);

INSERT INTO mysql_query_rules
(active, match_digest, multiplex)
VALUES
('1', '^SELECT @@max_allowed_packet', 2);

# Add multiplexing to a query using query digest
INSERT INTO mysql_query_rules
(active, digest, multiplex)
VALUES
('1', '0x2B838C3B5DE79958', 2);

# Save it to runtime and disk
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

Restart ProxySQL:

After enabling multiplexing rules in ProxySQL, it is necessary to restart the ProxySQL service, or to close all the existing connections so that new ones are opened.

service proxysql restart
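Once it is back up, the rules actually in effect can be confirmed from the admin interface; a sketch:

SELECT rule_id, active, match_digest, digest, multiplex
FROM runtime_mysql_query_rules;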

Let's check the active prepared statements in both ProxySQL and MySQL.

# ProxySQL
SELECT
*
FROM
stats_mysql_global
WHERE
variable_name LIKE '%stmt%';
+--------------------------+----------------+
| Variable_Name | Variable_Value |
+--------------------------+----------------+
| Stmt_Client_Active_Total | 6 |
+--------------------------+----------------+

# MySQL
show status like 'Prepared_stmt_count';
+---------------------+-------+
| Variable_name | Value |
+---------------------+-------+
| Prepared_stmt_count | 166 |
+---------------------+-------+

The Prepared_stmt_count has dramatically dropped from 20000 to around 200. But why is there a different count between ProxySQL and MySQL?

Again, refer to ProxySQL’s docs: only once a connection has executed 20 statements will it close those prepared statements. ProxySQL is showing the currently active statements, while MySQL is showing the count of active plus executed (not yet closed) statements.

Number of Backend connections:

After this change, there was a sudden drop in the number of backend connections in ProxySQL. This proves that statements which disable multiplexing create more backend connections.

MySQL’s Memory:

Now we come to the explanation for this blog's title. See the below graph, which shows the big memory drop.

The two main parts where MySQL used more memory:

  1. MySQL connections.
  2. Prepared statements.

We all already know that each MySQL connection requires some amount of memory. And for prepared statements, I explained their memory consumption in my previous blog.

Conclusion:

ProxySQL has a lot of great features. Multiplexing is good. But only after this incident did we realize just how much multiplexing helps reduce the number of backend connections, and MySQL's memory utilization as well.

If you are a DBA, I hope you will read this and implement it in your environment as well. If it helped you, feel free to give your claps.

Reduce MySQL Memory Utilization With ProxySQL Multiplexing was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.

How max_prepared_stmt_count bring down the production MySQL system

MySQL Adventures: How max_prepared_stmt_count can bring down production

We recently moved an on-prem environment to GCP for better scalability and availability. The customer’s main database is MySQL. Due to the nature of the customer’s business, it’s a highly transactional workload (one of the hot startups in APAC). To deal with the scale and meet availability requirements, we have deployed MySQL behind ProxySQL, which takes care of routing some of the resource-intensive SELECTs to chosen replicas. The setup consists of:

  • One Master
  • Two slaves
  • One Archive database server

Post migration to GCP, everything was nice and calm for a couple of weeks, until MySQL decided to start misbehaving, leading to an outage. We were able to quickly resolve it and bring the system back online; what follows are the lessons from this experience.

The configuration of the Database:
  • CentOS 7.
  • MySQL 5.6
  • 32 Core CPU
  • 120GB Memory
  • 1 TB SSD for MySQL data volume.
  • The total database size is 40GB. (yeah, it is small in size, but highly transactional)
  • my.cnf is configured using Percona’s configuration wizard.
  • All tables are InnoDB Engine
  • No SWAP partitions.
The Problem

It all started with an alert that said MySQL process was killed by Linux’s OOM Killer. Apparently MySQL was rapidly consuming all the memory (about 120G) and OOM killer perceived it as a threat to the stability of the system and killed the process. We were perplexed and started investigating.

Sep 11 06:56:39 mysql-master-node kernel: Out of memory: Kill process 4234 (mysqld) score 980 or sacrifice child Sep 11 06:56:39 mysql-master-node kernel: Killed process 4234 (mysqld) total-vm:199923400kB, anon-rss:120910528kB, file-rss:0kB, shmem-rss:0kB Sep 11 06:57:00 mysql-master-node mysqld: /usr/bin/mysqld_safe: line 183: 4234 Killed nohup /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock < /dev/null > /dev/null 2>&1

Naturally, we started looking at mysql configuration to see if something is off.

InnoDB Parameters:

innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 512M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 100G

Other Caching Parameters:

tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 4096
table-open-cache = 50

We are not really using the query cache, and one of the heavy frontend services is PHP Laravel.

Here is the memory utilization graph.

The three highlighted areas are the points at which we had issues in production. The second issue happened very soon after, so we reduced innodb-buffer-pool-size to 90GB. But even then, memory utilization never came down. So we scheduled a cronjob to flush the OS cache, at least to give some additional memory to the operating system, using the following command. This was a temporary measure until we found the actual problem.

sync; echo 3 > /proc/sys/vm/drop_caches

But this didn’t really help. The memory was still growing, and we had to look at what was really inside the OS cache.

Fincore:

There is a tool called fincore that helped me find out what the OS cache actually held. It uses Perl modules; use the below commands to install it.

yum install perl-Inline
rpm -ivh http://fr2.rpmfind.net/linux/dag/redhat/el6/en/x86_64/dag/RPMS/fincore-1.9-1.el6.rf.x86_64.rpm

It doesn’t directly show which files are inside the buffer/cache. Instead, we have to give it a path and it will check which files from that location are in the cache. I wanted to check the cached files for the MySQL data directory.

cd /mysql-data-directory
fincore -summary * > /tmp/cache_results

Here is the sample output of the cached files results.

page size: 4096 bytes
auto.cnf: 1 incore page: 0
dbadmin: no incore pages.
Eztaxi: no incore pages.
ibdata1: no incore pages.
ib_logfile0: 131072 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
ib_logfile1: 131072 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
mysql: no incore pages.
mysql-bin.000599: 8 incore pages: 0 1 2 3 4 5 6 7
mysql-bin.000600: no incore pages.
mysql-bin.000601: no incore pages.
mysql-bin.000602: no incore pages.
mysql-bin.000858: 232336 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
mysqld-relay-bin.000001: no incore pages.
mysqld-relay-bin.index: no incore pages.
mysql-error.log: 4 incore pages: 0 1 2 3
mysql-general.log: no incore pages.
mysql.pid: no incore pages.
mysql-slow.log: no incore pages.
mysql.sock: no incore pages.
ON: no incore pages.
performance_schema: no incore pages.
mysql-production.pid: 1 incore page: 0

6621994 pages, 25.3 Gbytes in core for 305 files; 21711.46 pages, 4.8 Mbytes per file.

The highlighted points in the graph show when the OS cache was cleared.

How we investigated this issue:

The first document everyone refers to is How MySQL Uses Memory from MySQL's documentation. So we started with all the places where MySQL needs memory. I'll explain this in a different blog. Let's continue with the steps we took.

Make sure MySQL is the culprit:

Run the below command; it will give you the exact memory consumption of MySQL.

ps --no-headers -o "rss,cmd" -C mysqld | awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024,"M") }'
119808M

Additional Tips:

If you want to know each MySQL thread's memory utilization, run the below commands.

# Get the PID of MySQL:
ps aux | grep mysqld
mysql 4378 41.1 76.7 56670968 47314448 ? Sl Sep12 6955:40 /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock

# Get all threads memory usage:
pmap -x 4378
4378: /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock
Address Kbytes RSS Dirty Mode Mapping
0000000000400000 11828 4712 0 r-x-- mysqld
000000000118d000 1720 760 476 rw--- mysqld
000000000133b000 336 312 312 rw--- [ anon ]
0000000002b62000 1282388 1282384 1282384 rw--- [ anon ]
00007fd4b4000000 47816 37348 37348 rw--- [ anon ]
00007fd4b6eb2000 17720 0 0 ----- [ anon ]
00007fd4bc000000 48612 35364 35364 rw--- [ anon ]
.........
.........
.........
00007fe1f0075000 2044 0 0 ----- libpthread-2.17.so
00007fe1f0274000 4 4 4 r---- libpthread-2.17.so
00007fe1f0275000 4 4 4 rw--- libpthread-2.17.so
00007fe1f0276000 16 4 4 rw--- [ anon ]
00007fe1f027a000 136 120 0 r-x-- ld-2.17.so
00007fe1f029c000 2012 2008 2008 rw--- [ anon ]
00007fe1f0493000 12 4 0 rw-s- [aio] (deleted)
00007fe1f0496000 12 4 0 rw-s- [aio] (deleted)
00007fe1f0499000 4 0 0 rw-s- [aio] (deleted)
00007fe1f049a000 4 4 4 rw--- [ anon ]
00007fe1f049b000 4 4 4 r---- ld-2.17.so
00007fe1f049c000 4 4 4 rw--- ld-2.17.so
00007fe1f049d000 4 4 4 rw--- [ anon ]
00007ffecc0f1000 132 72 72 rw--- [ stack ]
00007ffecc163000 8 4 0 r-x-- [ anon ]
ffffffffff600000 4 0 0 r-x-- [ anon ]
---------------- ------- ------- -------
total kB 122683392 47326976 47320388

InnoDB Buffer Pool:

Initially we suspected InnoDB. We checked the InnoDB usage from the monitoring system, but the result was negative: it never utilized more than 40GB. That thickens the plot. If the buffer pool only holds 40GB, what is eating all that memory?

Is this correct? Does the buffer pool really hold only 40GB?

What's inside the buffer pool, and what is its size?

SELECT
page_type AS page_type,
sum(data_size) / 1024 / 1024 AS size_in_mb
FROM
information_schema.innodb_buffer_page
GROUP BY
page_type
ORDER BY
size_in_mb DESC;
+-------------------+----------------+
| Page_Type | Size_in_MB |
+-------------------+----------------+
| INDEX | 39147.63660717 |
| IBUF_INDEX | 0.74043560 |
| UNDO_LOG | 0.00000000 |
| TRX_SYSTEM | 0.00000000 |
| ALLOCATED | 0.00000000 |
| INODE | 0.00000000 |
| BLOB | 0.00000000 |
| IBUF_BITMAP | 0.00000000 |
| EXTENT_DESCRIPTOR | 0.00000000 |
| SYSTEM | 0.00000000 |
| UNKNOWN | 0.00000000 |
| FILE_SPACE_HEADER | 0.00000000 |
+-------------------+----------------+

A quick guide to this query:

  • INDEX: B-Tree index
  • IBUF_INDEX: Insert buffer index
  • UNKNOWN: not allocated / unknown state
  • TRX_SYSTEM: transaction system data
Bonus:

To get the buffer pool usage by index

SELECT
table_name AS table_name,
index_name AS index_name,
count(*) AS page_count,
sum(data_size) / 1024 / 1024 AS size_in_mb
FROM
information_schema.innodb_buffer_page
GROUP BY
table_name, index_name
ORDER BY
size_in_mb DESC;

Then where was MySQL holding the memory?

We checked all the parts of MySQL where it utilizes memory. Here is a rough calculation of the memory utilization at the time of the MySQL crash.

BufferPool: 40GB
Cache/Buffer: 8GB
Performance_schema: 2GB
tmp_table_size: 32M
Open tables cache for 50 tables: 5GB
Connections, thread_cache and others: 10GB

It reached almost 65GB; let's round it up to 70GB out of 120GB. But that's still only an approximation. Something is wrong, right? My DBA mind started to wonder: where is the rest?

Till now,
  1. MySQL is the culprit consuming all of the memory.
  2. Clearing the OS cache never helped. That's fine.
  3. The buffer pool is in a healthy state.
  4. The other memory-consuming parameters look good.
It's time to dive into MySQL.

Let's see what kind of queries are running in MySQL.

show global status like 'Com_%';
+---------------------------+-----------+
| Variable_name | Value |
+---------------------------+-----------+
| Com_admin_commands | 531242406 |
| Com_stmt_execute | 324240859 |
| Com_stmt_prepare | 308163476 |
| Com_select | 689078298 |
| Com_set_option | 108828057 |
| Com_begin | 4457256 |
| Com_change_db | 600 |
| Com_commit | 8170518 |
| Com_delete | 1164939 |
| Com_flush | 80 |
| Com_insert | 73050955 |
| Com_insert_select | 571272 |
| Com_kill | 660 |
| Com_rollback | 136654 |
| Com_show_binlogs | 2604 |
| Com_show_slave_status | 31245 |
| Com_show_status | 162247 |
| Com_show_tables | 1105 |
| Com_show_variables | 10428 |
| Com_update | 74384469 |
+---------------------------+-----------+

The Select, Insert and Update counters look fine. But a huge number of prepared statements were running in MySQL.

One more Tip: Valgrind

Valgrind is a powerful open source tool to profile any process’s memory consumption by threads and child processes.

Install Valgrind:

# You need C compilers, so install gcc
wget ftp://sourceware.org/pub/valgrind/valgrind-3.13.0.tar.bz2
tar -xf valgrind-3.13.0.tar.bz2
cd valgrind-3.13.0
./configure
make
make install

Note: This is for troubleshooting purposes; you should stop MySQL and run it under Valgrind instead.
  • Create a log file to capture the output:
touch /tmp/massif.out
chown mysql:mysql /tmp/massif.out
chmod 777 /tmp/massif.out
  • Run MySQL with Valgrind:
/usr/local/bin/valgrind --tool=massif --massif-out-file=/tmp/massif.out /usr/sbin/mysqld --defaults-file=/etc/my.cnf
  • Wait for 30 minutes (or until MySQL takes up the whole memory). Then kill Valgrind and start MySQL as normal.

Analyze the log:

/usr/local/bin/ms_print /tmp/massif.out

We’ll explain MySQL memory debugging using Valgrind in another blog.

Memory Leak:

We had verified all the MySQL parameters and OS-level factors for memory consumption, but with no luck. So I started to think about and search for MySQL's memory leak areas. Then I found this awesome blog by Todd.

Yes, the only parameter I didn’t check is max_prepared_stmt_count.

What is this?

From MySQL’s Doc,

This variable limits the total number of prepared statements in the server. It can be used in environments where there is the potential for denial-of-service attacks based on running the server out of memory by preparing huge numbers of statements.
  • Whenever we prepare a statement, we should close it at the end; otherwise it will not release the memory allocated to it (see the sketch after this list).
  • Executing a single query this way involves three round trips (prepare, execute and close).
  • There is no visibility into how much memory is consumed by a prepared statement.
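A minimal sketch of that lifecycle in plain SQL (the table name is hypothetical):

PREPARE stmt1 FROM 'SELECT * FROM orders WHERE id = ?';
SET @id = 42;
EXECUTE stmt1 USING @id;
-- Without this step the statement stays allocated on the server
DEALLOCATE PREPARE stmt1;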
Is this the real root cause?

Run this query to check how many prepared statements are running in the MySQL server.

mysql> show global status like 'com_stmt%';
+-------------------------+-----------+
| Variable_name | Value |
+-------------------------+-----------+
| Com_stmt_close | 0 |
| Com_stmt_execute | 210741581 |
| Com_stmt_fetch | 0 |
| Com_stmt_prepare | 199559955 |
| Com_stmt_reprepare | 1045729 |
| Com_stmt_reset | 0 |
| Com_stmt_send_long_data | 0 |
+-------------------------+-----------+

You can see there are 1045729 prepared statements running, and the Com_stmt_close variable shows that none of the statements have been closed.

This query returns the maximum allowed number of prepared statements.

mysql> show variables like 'max_prepared_stmt_count';
+-------------------------+---------+
| Variable_name | Value |
+-------------------------+---------+
| max_prepared_stmt_count | 1048576 |
+-------------------------+---------+

Oh, it was at the maximum value for this parameter. So we immediately reduced it to 2000.

mysql> set global max_prepared_stmt_count=2000;

-- Add this to my.cnf as well
vi /etc/my.cnf

[mysqld]
max_prepared_stmt_count = 2000

Now MySQL is running fine and the memory leak is fixed; the memory utilization has stayed normal since. The Laravel framework uses prepared statements almost everywhere; we can see so many Laravel + prepared statement questions on StackOverflow.

SYS Schema in 5.7:

In MySQL 5.7, the sys schema provides all the information about MySQL's memory utilization.
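Besides the per-code-area summary below, the per-thread consumers can be listed too (a sketch, assuming the standard sys views):

SELECT thread_id, user, current_allocated
FROM sys.memory_by_thread_by_current_bytes
LIMIT 10;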

# Credits: lefred.be
SELECT SUBSTRING_INDEX(event_name,'/',2) AS code_area,
sys.format_bytes(SUM(current_alloc)) AS current_alloc
FROM sys.x$memory_global_by_current_bytes
GROUP BY SUBSTRING_INDEX(event_name,'/',2)
ORDER BY SUM(current_alloc) DESC;

Conclusion:

The very important lesson I learned as a DBA is: before setting any parameter value, check the consequences of modifying it and make sure it will not hurt production. Now the MySQL side was fine, but the application was throwing the below error.

Can't create more than max_prepared_stmt_count statements (current value: 20000)

To continue this series, the next blog post explains how we fixed the above error using multiplexing, and how that helped dramatically reduce MySQL's memory utilization.

How max_prepared_stmt_count bring down the production MySQL system was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.

On MySQL XA Transactions

One of the features I missed in my blog post on problematic MySQL features back in July is XA transactions. Probably I was too much in a hurry, as this feature is known to be somewhat buggy, limited and not widely used outside of Java applications. My first related feature request, Bug #40445 - "Provide C functions library implementing X/OPEN XA interface for Bea Tuxedo", was created almost 10 years ago, based on an issue from one of the MySQL/Sun customers of that time. I remember some internal debates on how much time and effort the implementation might require, but the decision was never made, and one still cannot directly integrate MySQL with the Tuxedo transaction manager (where the idea of XA transactions originally came from). It's even funnier to see that feature request still just "Verified" when taking into account the fact that BEA Tuxedo has been Oracle's software since... 2008.

XA Transactions support is a useful MySQL feature, but I wonder if one day it may just become abandoned like that West Pier in Brighton, or overwhelmed with many small bugs in the same way as these stairs to the beach in Hove...
But maybe XA transactions are not widely used and nobody cares much about them?
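For readers who have never touched the feature, the statements under discussion form a small, fixed lifecycle; a minimal sketch (the table t1 is hypothetical):

XA START 'xid1';
INSERT INTO t1 VALUES (1);
XA END 'xid1';
XA PREPARE 'xid1';
-- after XA PREPARE the transaction persists until explicitly resolved
XA COMMIT 'xid1'; -- or XA ROLLBACK 'xid1'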

Let me try to do a quick review of related active bug reports and feature requests before making any conclusions:
  • Bug #91702 - "Feature Request: JOIN support for XA START command". This feature request was added less than 3 months ago and is still "Open". It means there are users interested in this feature, but Oracle engineers do not care much even to verify related requests, even less to give them some priority.
    See also Bug #78498 - "XA issue or limitation with 5.6.19 engine", reported 3 years ago, that is essentially about the same limitation. As bug reporter explained:
    "... it prevents us to use MySQL with Weblogic on 2 phase commit scenarii..."
  • Yet another example of a request ignored for a long time is Bug #90659 - "implicit commit and unlocking with xa start", which is about an inconsistency in the current implementation. They care even less (as we already know) about XA support outside of Java, as one can conclude from the fact that the Connector/Net related request, Bug #70587 - "Dot Net Distributed transaction not supported in MySql Server 5.6", has not gotten any attention since July 2015...
  • Bug #91646 - "xa command still operation when super_read_only is true". This bug was reported in July by Zhenghu Wen. It seems nobody cares much about XA transactions integration when new features are added to MySQL server.
  • Bug #89860 - "XA may lost prepared transaction and cause different between master and slave". This bug reported by Michael Yang (See also Bug #88534) sounds really serious and was previously reported by Andrei Elkin (who works for MariaDB now) as Bug #76233 - "XA prepare is logged ahead of engine prepare". See also Bug #87560 - "XA PREPARE log order error in replication and binlog recovery" by Wei Zhao, who also contributed a patch. See also Bug #83983 - "innodb fail to recover the prepared xa transaction" (the bug reported by Dennis Gao is still "Open", while it's clearly related to or is a duplicate of "Verified" bugs mentioned above).
    So many related/duplicate problem reports, but no fix so far!
  • Bug #88748 - "InnoDB: Failing assertion: trx->conc_state == 1". This assertion failure was reported by Roel Van de Paar back in December, 2017. See also his Bug #84754 - "oid String::chop(): Assertion `strlen(m_ptr) == m_length' failed."
    I noted that Oracle recently invented new "low" severity levels, and this bug is S6 (Debug Builds). I do not really agree that assertions in debug builds are of such low severity; they are in the code for a reason, to prevent crashes in non-debug builds and all kinds of inconsistencies.
  • Bug #87526 - "The output of 'XA recover convert xid' is not useful". This bug reported by Sveta Smirnova caused a lot of troubles to poor users with prepared transactions hanging around for weeks after crash, as it prevented any easy way to get rid of them (and related locks) in some cases. The bug is still "Verified" in MySQL and "On hold" in Percona Server, while MariaDB fixed it in 10.3, see MDEV-14593.
  • Bug #87130 - "XA COMMIT not taken as transaction boundary". Yet another bug report with a patch from Wei Zhao.
  • Bug #75205 - "Master should write a LOST_EVENTS entry on xa commit after recovery." Daniël van Eeden reported this at an early 5.7 pre-GA stage, and the manual now explains that:
    "In MySQL 5.7.7 and later, there is a change in behavior and an XA transaction is written to the binary log in two parts. When XA PREPARE is issued, the first part of the transaction up to XA PREPARE is written using an initial GTID. A XA_prepare_log_event is used to identify such transactions in the binary log. When XA COMMIT or XA ROLLBACK is issued, a second part of the transaction containing only the XA COMMIT or XA ROLLBACK statement is written using a second GTID. Note that the initial part of the transaction, identified by XA_prepare_log_event, is not necessarily followed by its XA COMMIT or XA ROLLBACK, which can cause interleaved binary logging of any two XA transactions. The two parts of the XA transaction can even appear in different binary log files. This means that an XA transaction in PREPARED state is now persistent until an explicit XA COMMIT or XA ROLLBACK statement is issued, ensuring that XA transactions are compatible with replication."but the bug report is still "Verified".
    By the way, the need to deal with such prepared transactions recovered from the binary log caused problems like those listed above (with XA RECOVER CONVERT and the order of preparing in the binary log vs. engines that support XA)...
  • Bug #71351 - "select hit query cache after xa commit, no result return". This bug probably affects only MySQL 5.5, so no wonder it's ignored now. Nobody tried to fix it while MySQL 5.5 was still supported, though.
There are some more bugs originally filed in other categories, but still related to XA:
  • Bug #72036 - "XA isSameRM() shouldn't take database into account". This Connector/J bug was reported in 2014 by Jess Balint.
  • Bug #78050 - "Crash on when XA functions activated by a storage engine". It happens when the binary log is not enabled. This bug was reported by Zhenye Xie, who also contributed a patch later. Still, this crashing bug remains "Verified".
  • Bug #87385 - "Partial external XA transactions are not rolled back correctly". Yet another bug report with a patch from Wei Zhao. See also his Bug #87389 - "Replication position not persisted correctly for XA transactions".
  • Bug #91633 - "Replication failure (errno 1399) on update in XA tx after deadlock". This bug reported by Lukas Sydorowski got a recent comment from another community member yesterday. So the feature is still used these days.
Now time for conclusions:
  1. Take extra care while using XA transactions in replication environments or with point-in-time recovery: you may easily end up with slaves out of sync with the master, and data lost.
  2. Feature requests related to XA transactions are mostly ignored, sometimes for a decade... 
  3. Patches contributed do not seem to speed up XA bug fixing.
  4. I'd say that Oracle has not cared much about XA transactions since the MySQL 5.7 GA release in 2015.
  5. The MySQL Community still uses XA transactions with MySQL (and they will be used even more as corporate users migrate from Oracle RDBMS), finds bugs and even tries to fix them. But it will probably have to use forks rather than MySQL itself if the current attitude towards processing and fixing XA bugs remains.

New Certified Docker Images + Kubernetes Scripts Simplify MariaDB Cloud Deployments


In the last few years, enterprise development teams have been focused on reducing the cost of production-grade applications while improving the velocity and agility of development. That’s led to massive public and private cloud adoption – and deployment of databases in containers. To address this growing need, we’ve released new Docker images and Kubernetes scripts that make it easy to deploy and manage MariaDB databases. Now organizations can focus on building their applications rather than on managing and optimizing container infrastructure.

On-demand webinar: The Future of MariaDB on Containers
Watch this recorded webinar to get a look at official Docker images, learn how to run stateful MariaDB clusters on Kubernetes and more.
Watch Now

New – MariaDB Docker Images, Kubernetes Scripts & Sandbox Environments

We’ve released certified Docker images and enabled customers to seamlessly deploy MariaDB servers in Kubernetes and Docker environments. We are delivering three standalone Docker images (one each for MariaDB Server, ColumnStore and MaxScale) and two sandboxes (one for MariaDB AX and one for MariaDB TX). Customers can deploy the standalone Docker images in a standard Docker environment or create complex topologies in Kubernetes environment using YAML scripts.

For someone just getting started with MariaDB, or wanting to learn the capabilities offered, the sandboxes make it easy to experiment with MariaDB AX and TX. Sandboxes are self-contained, with all the documentation needed to quickly bring up TX or AX and experiment with sample apps (a bookstore in the case of TX, and a Zeppelin notebook in the case of AX). You have the flexibility to quickly deploy them on a laptop or desktop using Docker Compose to immediately run sample applications.

We’ve also released a Kubernetes script to create a master/slave (one master + two slaves) cluster with MaxScale at the front end. (This script is showcased in the on-demand webinar.) Now customers can deploy MariaDB TX with MaxScale easily in high availability mode. The script deploys the master/slave cluster in such a way that when the master fails one of the slaves will be automatically promoted as the new master. Kubernetes will try to bring up the pod and maintain the configuration integrity. When the old master comes back it will automatically become a slave. The script also supports expanding the number of slave nodes easily, using a simple command. 

 


Replication Monitoring with the Performance Schema

The traditional way to monitor replication in MySQL is the SHOW SLAVE STATUS command. However, as will be shown, it has its limitations, and in MySQL 5.7 and 8.0 the MySQL developers have started to expose the same information as Performance Schema tables. This has several advantages, including better monitoring of the replication delay in MySQL 8.0. This blog discusses why SHOW SLAVE STATUS should be replaced with the Performance Schema tables.

The Setup

The replication setup that will be used for the examples in this blog can be seen in the following figure.

Replication Setup with Multi-Source and Chained Replication

There are two source instances (replication masters). Source 1 replicates to the Relay instance (i.e. it acts both as a replica and a source in the setup). The Relay instance replicates to the Replica instance, which also replicates from Source 2. That is, the Replica instance uses multi-source replication to replicate from the Source 1 → Relay chain as well as directly from Source 2.

This blog will use the Replica instance to demonstrate SHOW SLAVE STATUS and Performance Schema replication tables.

SHOW SLAVE STATUS

The SHOW SLAVE STATUS command has been around since the addition of replication to MySQL. Thus it is familiar to all database administrators who have been working with replication. It is also very simple to use, and it is easy to remember the syntax. So far so good. However, it also has some limitations.

Let’s take a look at how the output of SHOW SLAVE STATUS looks in the setup described above:

*************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: 127.0.0.1 Master_User: replication Master_Port: 3308 Connect_Retry: 60 Master_Log_File: binlog.000001 Read_Master_Log_Pos: 61422958 Relay_Log_File: relaylog-relay.000005 Relay_Log_Pos: 59921651 Relay_Master_Log_File: binlog.000001 Slave_IO_Running: Yes Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: sakila.% Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 59921443 Relay_Log_Space: 61423581 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: Yes Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: 49 Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 11 Master_UUID: 7616e9d1-c868-11e8-92f0-080027effed8 Master_Info_File: mysql.slave_master_info SQL_Delay: 0 SQL_Remaining_Delay: NULL Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates Master_Retry_Count: 86400 Master_Bind: Last_IO_Error_Timestamp: Last_SQL_Error_Timestamp: Master_SSL_Crl: Master_SSL_Crlpath: Retrieved_Gtid_Set: 81c777c0-c86f-11e8-9031-080027effed8:28-81 Executed_Gtid_Set: 2165e6e2-c870-11e8-8818-080027effed8:1-39, 81c777c0-c86f-11e8-9031-080027effed8:1-80 Auto_Position: 1 Replicate_Rewrite_DB: Channel_Name: relay Master_TLS_Version: Master_public_key_path: Get_master_public_key: 0 *************************** 2. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: 127.0.0.1 Master_User: replication Master_Port: 3307 Connect_Retry: 60 Master_Log_File: binlog.000002 Read_Master_Log_Pos: 366288 Relay_Log_File: relaylog-source_2.000005 Relay_Log_Pos: 181612 Relay_Master_Log_File: binlog.000002 Slave_IO_Running: Yes Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: sakila.% Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 366288 Relay_Log_Space: 182074 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: Yes Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: 0 Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 2 Master_UUID: 2165e6e2-c870-11e8-8818-080027effed8 Master_Info_File: mysql.slave_master_info SQL_Delay: 0 SQL_Remaining_Delay: NULL Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates Master_Retry_Count: 86400 Master_Bind: Last_IO_Error_Timestamp: Last_SQL_Error_Timestamp: Master_SSL_Crl: Master_SSL_Crlpath: Retrieved_Gtid_Set: 2165e6e2-c870-11e8-8818-080027effed8:28-39 Executed_Gtid_Set: 2165e6e2-c870-11e8-8818-080027effed8:1-39, 81c777c0-c86f-11e8-9031-080027effed8:1-80 Auto_Position: 1 Replicate_Rewrite_DB: Channel_Name: source_2 Master_TLS_Version: Master_public_key_path: Get_master_public_key: 0 2 rows in set (0.22 sec)

The first thought: that’s a lot of lines of output. That is one of the issues – there is no way to limit the output. To summarize some of the issues with SHOW SLAVE STATUS:

  • There is no support for filter conditions or for choosing which columns to include, so the output is very verbose. In this case, with two replication channels, all available data for both channels is always included.
  • The order of the columns reflects the order in which they were added rather than how they logically belong together. Over the years many new columns have been added, as new features were introduced or a feature was changed from an option configured in my.cnf to an option configured with CHANGE MASTER TO. For example, the channel name is the fourth column from the end even though it would be more natural to have it as the first column (as it is the “primary key” of the output).
  • For multi-threaded appliers (slave_parallel_workers > 1) there is no information for the individual workers.
  • Information related to the connection (I/O thread) and the applier (SQL thread), as well as configuration and status, are all mixed together.
  • What does Seconds_Behind_Master mean? For the source_2 channel (the direct replication from Source 2) it is relatively easy to understand, but for the relay channel, is it relative to Source 1 (yes) or to Relay (no)? More about this later.

To look at what can be done to solve these limitations, let’s look at the Performance Schema.

Performance Schema Replication Tables

To overcome these limitations, MySQL 5.7 introduced a series of replication tables in the Performance Schema. These have been extended in MySQL 8.0 to make them even more useful. One of the advantages of using the Performance Schema is that queries are executed as regular SELECT statements, with the usual support for choosing and manipulating columns and applying a WHERE clause. First, let’s take a look at which replication-related tables are available.
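For example, to restrict the output to a single channel and a few columns – something SHOW SLAVE STATUS cannot do – a query along the following lines can be used. This is just a sketch; the channel name 'relay' is the one used in this setup, and the columns are from the replication_applier_status table discussed below:

mysql> SELECT CHANNEL_NAME, SERVICE_STATE, COUNT_TRANSACTIONS_RETRIES
         FROM performance_schema.replication_applier_status
        WHERE CHANNEL_NAME = 'relay';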

Overview of Tables

As of MySQL 8.0.12 there are 11 replication related tables. The tables are:

  • log_status: This table is new in MySQL 8 and provides information about the binary log, relay log, and InnoDB redo log in a consistent manner (i.e. all values are for the same point in time).
  • Applier:
    • replication_applier_configuration: This table shows the configuration of the applier threads for each replication channel. Currently the only setting is the configured replication delay.
    • replication_applier_filters: The channel specific replication filters including where and when they were configured.
    • replication_applier_global_filters: The global replication filters including how and when they were configured.
    • replication_applier_status: This table shows the overall status of the applier threads for each replication channel. The information includes the service state, remaining delay, and number of transaction retries.
    • replication_applier_status_by_coordinator: For multi-threaded replicas this table shows the status of the thread that manages the actual worker threads. In MySQL 8 this includes several timestamps to give detailed knowledge of the replication delay.
    • replication_applier_status_by_worker: The status of each worker thread (for single-threaded replication there is one per channel). In MySQL 8 this includes several timestamps to give detailed knowledge of the replication delay.
  • Connection:
    • replication_connection_configuration: The configuration of each of the replication channels.
    • replication_connection_status: The status of the replication channels. In MySQL 8 this includes information about the timestamps showing when the currently queuing transaction was originally committed, when it was committed on the immediate source instance, and when it was written to the relay log. This makes it possible to describe much more accurately what the replication delay is.
  • Group Replication:
    • replication_group_members: Information about the servers that are members of the group.
    • replication_group_member_stats: Statistics for the members of the group.

The Group Replication tables will not be discussed further.

Since the information from SHOW SLAVE STATUS has been split up into several tables, it can be useful to take a look at how the information maps between the old and the new outputs.

Old Versus New

The following table shows how to get from a column in the SHOW SLAVE STATUS output to a table and column in the Performance Schema. The channel name is present in all of the Performance Schema replication tables (it is the primary key or part of it). The replication filters and rewrite rules are split into two tables. The I/O and SQL thread states can be found in the performance_schema.threads table by joining on the THREAD_ID column for the corresponding threads.
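As a sketch of such a join (the table and column names are the ones listed below; the aliases are just for illustration), the connection thread’s state can be fetched with:

mysql> SELECT rcs.CHANNEL_NAME, t.PROCESSLIST_STATE
         FROM performance_schema.replication_connection_status rcs
         JOIN performance_schema.threads t
           ON t.THREAD_ID = rcs.THREAD_ID;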

SHOW SLAVE STATUS column → Performance Schema table.column

Slave_IO_State → threads.PROCESSLIST_STATE
Master_Host → replication_connection_configuration.HOST
Master_User → replication_connection_configuration.USER
Master_Port → replication_connection_configuration.PORT
Connect_Retry → replication_connection_configuration.CONNECTION_RETRY_INTERVAL
Master_Log_File → (no equivalent)
Read_Master_Log_Pos → (no equivalent)
Relay_Log_File → log_status.REPLICATION->>'$.channels[*].relay_log_file'
Relay_Log_Pos → log_status.REPLICATION->>'$.channels[*].relay_log_position'
Relay_Master_Log_File → (no equivalent)
Slave_IO_Running → replication_connection_status.SERVICE_STATE
Slave_SQL_Running → replication_applier_status_by_coordinator.SERVICE_STATE
Replicate_Do_DB → replication_applier_filters, replication_applier_global_filters
Replicate_Ignore_DB → replication_applier_filters, replication_applier_global_filters
Replicate_Do_Table → replication_applier_filters, replication_applier_global_filters
Replicate_Ignore_Table → replication_applier_filters, replication_applier_global_filters
Replicate_Wild_Do_Table → replication_applier_filters, replication_applier_global_filters
Replicate_Wild_Ignore_Table → replication_applier_filters, replication_applier_global_filters
Last_Errno → (no equivalent)
Last_Error → (no equivalent)
Skip_Counter → (no equivalent)
Exec_Master_Log_Pos → (no equivalent)
Relay_Log_Space → (no equivalent)
Until_Condition → (no equivalent)
Until_Log_File → (no equivalent)
Until_Log_Pos → (no equivalent)
Master_SSL_Allowed → replication_connection_configuration.SSL_ALLOWED
Master_SSL_CA_File → replication_connection_configuration.SSL_CA_FILE
Master_SSL_CA_Path → replication_connection_configuration.SSL_CA_PATH
Master_SSL_Cert → replication_connection_configuration.SSL_CERTIFICATE
Master_SSL_Cipher → replication_connection_configuration.SSL_CIPHER
Master_SSL_Key → replication_connection_configuration.SSL_KEY
Seconds_Behind_Master → (no equivalent; use the timestamp columns instead)
Master_SSL_Verify_Server_Cert → replication_connection_configuration.SSL_VERIFY_SERVER_CERTIFICATE
Last_IO_Errno → replication_connection_status.LAST_ERROR_NUMBER
Last_IO_Error → replication_connection_status.LAST_ERROR_MESSAGE
Last_SQL_Errno → replication_applier_status_by_worker.LAST_ERROR_NUMBER, replication_applier_status_by_coordinator.LAST_ERROR_NUMBER
Last_SQL_Error → replication_applier_status_by_worker.LAST_ERROR_MESSAGE, replication_applier_status_by_coordinator.LAST_ERROR_MESSAGE
Replicate_Ignore_Server_Ids → (no equivalent)
Master_Server_Id → (no equivalent)
Master_UUID → replication_connection_status.SOURCE_UUID
Master_Info_File → (no equivalent)
SQL_Delay → replication_applier_configuration.DESIRED_DELAY
SQL_Remaining_Delay → replication_applier_status.REMAINING_DELAY
Slave_SQL_Running_State → threads.PROCESSLIST_STATE
Master_Retry_Count → replication_connection_configuration.CONNECTION_RETRY_COUNT
Master_Bind → replication_connection_configuration.NETWORK_INTERFACE
Last_IO_Error_Timestamp → replication_connection_status.LAST_ERROR_TIMESTAMP
Last_SQL_Error_Timestamp → replication_applier_status_by_worker.LAST_ERROR_TIMESTAMP, replication_applier_status_by_coordinator.LAST_ERROR_TIMESTAMP
Master_SSL_Crl → replication_connection_configuration.SSL_CRL_FILE
Master_SSL_Crlpath → replication_connection_configuration.SSL_CRL_PATH
Retrieved_Gtid_Set → replication_connection_status.RECEIVED_TRANSACTION_SET
Executed_Gtid_Set → (no equivalent)
Auto_Position → replication_connection_configuration.AUTO_POSITION
Replicate_Rewrite_DB → replication_applier_filters, replication_applier_global_filters
Channel_Name → CHANNEL_NAME (present in all replication tables)
Master_TLS_Version → replication_connection_configuration.TLS_VERSION
Master_public_key_path → replication_connection_configuration.PUBLIC_KEY_PATH
Get_master_public_key → replication_connection_configuration.GET_PUBLIC_KEY

As can be seen, there are a few columns from SHOW SLAVE STATUS that do not have any corresponding tables and columns in the Performance Schema yet. One that is probably familiar to many as the main means of monitoring the replication lag is the Seconds_Behind_Master column. However, it is no longer needed: it is now possible to get a better value using the timestamp columns in the replication_applier_status_by_coordinator, replication_applier_status_by_worker, and replication_connection_status tables. Talking about that, it is time to see the Performance Schema replication tables in action.

Examples

The rest of the blog shows example outputs for each of the replication tables (except the ones related to Group Replication) in the Performance Schema. For some of the tables there is a short discussion following the output. The queries were executed in rapid succession after the above SHOW SLAVE STATUS output was generated. As the outputs were generated using separate queries, they do not correspond to the exact same point in time.

log_status

The log_status table shows the replication and engine log data in a consistent manner (all values are for the same point in time):

mysql> SELECT SERVER_UUID, JSON_PRETTY(LOCAL) AS LOCAL, JSON_PRETTY(REPLICATION) AS REPLICATION, STORAGE_ENGINES FROM log_status\G *************************** 1. row *************************** SERVER_UUID: 302f0073-c869-11e8-95bb-080027effed8 LOCAL: { "gtid_executed": "2165e6e2-c870-11e8-8818-080027effed8:1-39,\n81c777c0-c86f-11e8-9031-080027effed8:1-80", "binary_log_file": "binlog.000002", "binary_log_position": 60102708 } REPLICATION: { "channels": [ { "channel_name": "relay", "relay_log_file": "relaylog-relay.000005", "relay_log_position": 61423166 }, { "channel_name": "source_2", "relay_log_file": "relaylog-source_2.000005", "relay_log_position": 181612 } ] } STORAGE_ENGINES: {"InnoDB": {"LSN": 120287720, "LSN_checkpoint": 118919036}} 1 row in set (0.03 sec)

replication_applier_configuration

The replication_applier_configuration table shows the configuration of the applier threads:

mysql> SELECT * FROM replication_applier_configuration; +--------------+---------------+ | CHANNEL_NAME | DESIRED_DELAY | +--------------+---------------+ | relay | 0 | | source_2 | 0 | +--------------+---------------+ 2 rows in set (0.04 sec)

replication_applier_filters

The replication_applier_filters table shows the channel specific replication filters:

mysql> SELECT * FROM replication_applier_filters\G *************************** 1. row *************************** CHANNEL_NAME: relay FILTER_NAME: REPLICATE_WILD_IGNORE_TABLE FILTER_RULE: world.% CONFIGURED_BY: STARTUP_OPTIONS_FOR_CHANNEL ACTIVE_SINCE: 2018-10-05 20:49:48.185078 COUNTER: 0 1 rows in set (0.18 sec)

There is one filter specific to the relay channel: the channel will ignore changes to tables in the world schema, and the filter was set using the replicate_wild_ignore_table option in the MySQL configuration file.

replication_applier_global_filters

The replication_applier_global_filters table shows the replication filters that are shared for all channels:

mysql> SELECT * FROM replication_applier_global_filters\G *************************** 1. row *************************** FILTER_NAME: REPLICATE_WILD_IGNORE_TABLE FILTER_RULE: sakila.% CONFIGURED_BY: CHANGE_REPLICATION_FILTER ACTIVE_SINCE: 2018-10-05 20:48:54.341004 1 row in set (0.02 sec)

There is also one global replication filter. This has been set using the CHANGE REPLICATION FILTER statement.
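For reference, a global filter like the one shown can be configured with a statement of the following form (a sketch matching the rule from the output above):

mysql> CHANGE REPLICATION FILTER REPLICATE_WILD_IGNORE_TABLE = ('sakila.%');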

replication_applier_status

The replication_applier_status table shows the overall status for the applier threads:

mysql> SELECT * FROM replication_applier_status; +--------------+---------------+-----------------+----------------------------+ | CHANNEL_NAME | SERVICE_STATE | REMAINING_DELAY | COUNT_TRANSACTIONS_RETRIES | +--------------+---------------+-----------------+----------------------------+ | relay | ON | NULL | 0 | | source_2 | ON | NULL | 0 | +--------------+---------------+-----------------+----------------------------+ 2 rows in set (0.05 sec)

replication_applier_status_by_coordinator

The replication_applier_status_by_coordinator table shows the status for the coordinator when a multi-threaded applier has been configured (slave_parallel_workers > 1):

mysql> SELECT * FROM replication_applier_status_by_coordinator\G *************************** 1. row *************************** CHANNEL_NAME: relay THREAD_ID: 55 SERVICE_STATE: ON LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_PROCESSED_TRANSACTION: 81c777c0-c86f-11e8-9031-080027effed8:81 LAST_PROCESSED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2018-10-05 21:07:52.286116 LAST_PROCESSED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 2018-10-05 21:08:10.692561 LAST_PROCESSED_TRANSACTION_START_BUFFER_TIMESTAMP: 2018-10-05 21:08:10.843893 LAST_PROCESSED_TRANSACTION_END_BUFFER_TIMESTAMP: 2018-10-05 21:08:11.142150 PROCESSING_TRANSACTION: PROCESSING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 PROCESSING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 PROCESSING_TRANSACTION_START_BUFFER_TIMESTAMP: 0000-00-00 00:00:00.000000 *************************** 2. row *************************** CHANNEL_NAME: source_2 THREAD_ID: 59 SERVICE_STATE: ON LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_PROCESSED_TRANSACTION: 2165e6e2-c870-11e8-8818-080027effed8:39 LAST_PROCESSED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2018-10-05 20:52:12.422129 LAST_PROCESSED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 2018-10-05 20:52:12.422129 LAST_PROCESSED_TRANSACTION_START_BUFFER_TIMESTAMP: 2018-10-05 20:52:13.010969 LAST_PROCESSED_TRANSACTION_END_BUFFER_TIMESTAMP: 2018-10-05 20:52:13.519174 PROCESSING_TRANSACTION: PROCESSING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 PROCESSING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 PROCESSING_TRANSACTION_START_BUFFER_TIMESTAMP: 0000-00-00 00:00:00.000000 2 rows in set (0.06 sec)

This is an example of how MySQL 8 has detailed information about the timings for the various stages of the applied events. For example, for the relay channel it can be seen that for the last processed transaction, it took 18 seconds from when the transaction was committed on Source 1 (original commit) until it was committed on Relay (immediate commit), but then only around half a second until the coordinator was done processing the transaction (i.e. sending it to a worker). Which brings us to the status by worker.

replication_applier_status_by_worker

The replication_applier_status_by_worker table shows the status for each worker thread:

mysql> SELECT * FROM replication_applier_status_by_worker\G *************************** 1. row *************************** CHANNEL_NAME: relay WORKER_ID: 1 THREAD_ID: 56 SERVICE_STATE: ON LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_APPLIED_TRANSACTION: 81c777c0-c86f-11e8-9031-080027effed8:80 LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2018-10-05 21:07:27.721738 LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 2018-10-05 21:07:50.387138 LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP: 2018-10-05 21:07:50.526808 LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP: 2018-10-05 21:08:09.296713 APPLYING_TRANSACTION: 81c777c0-c86f-11e8-9031-080027effed8:81 APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2018-10-05 21:07:52.286116 APPLYING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 2018-10-05 21:08:10.692561 APPLYING_TRANSACTION_START_APPLY_TIMESTAMP: 2018-10-05 21:08:10.855825 *************************** 2. row *************************** CHANNEL_NAME: relay WORKER_ID: 2 THREAD_ID: 57 SERVICE_STATE: ON LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_APPLIED_TRANSACTION: LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP: 0000-00-00 00:00:00.000000 APPLYING_TRANSACTION: APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 APPLYING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 APPLYING_TRANSACTION_START_APPLY_TIMESTAMP: 0000-00-00 00:00:00.000000 *************************** 3. row *************************** CHANNEL_NAME: source_2 WORKER_ID: 1 THREAD_ID: 60 SERVICE_STATE: ON LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_APPLIED_TRANSACTION: 2165e6e2-c870-11e8-8818-080027effed8:39 LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2018-10-05 20:52:12.422129 LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 2018-10-05 20:52:12.422129 LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP: 2018-10-05 20:52:13.520916 LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP: 2018-10-05 20:52:14.302957 APPLYING_TRANSACTION: APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 APPLYING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 APPLYING_TRANSACTION_START_APPLY_TIMESTAMP: 0000-00-00 00:00:00.000000 *************************** 4. row *************************** CHANNEL_NAME: source_2 WORKER_ID: 2 THREAD_ID: 61 SERVICE_STATE: ON LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_APPLIED_TRANSACTION: LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP: 0000-00-00 00:00:00.000000 APPLYING_TRANSACTION: APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 APPLYING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 APPLYING_TRANSACTION_START_APPLY_TIMESTAMP: 0000-00-00 00:00:00.000000 4 rows in set (0.16 sec)

The timestamps for the relay channel’s workers (as can be seen, only one has been active) show that the last transaction took around 19 seconds to apply, and that it was committed on Replica around 19 seconds after it was committed on the immediate source (the Relay instance).

You can compare this delay of 19 seconds with the 49 seconds claimed by Seconds_Behind_Master in the SHOW SLAVE STATUS output. Why the difference? Seconds_Behind_Master is really the time from when the original source started to execute the current transaction until now. So it includes the time it took to execute the transaction not only on Source 1 but also on Relay, plus the time used so far on Replica.
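A more precise per-worker lag can be derived directly from the timestamp columns. The following is just a sketch (one of several possible lag definitions; workers that have not applied anything yet, with zeroed timestamps, are filtered out via the empty GTID check):

mysql> SELECT CHANNEL_NAME, WORKER_ID,
              TIMESTAMPDIFF(MICROSECOND,
                            LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP,
                            LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP
              ) / 1000000 AS APPLY_LAG_SECONDS
         FROM performance_schema.replication_applier_status_by_worker
        WHERE LAST_APPLIED_TRANSACTION <> '';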

replication_connection_configuration

The replication_connection_configuration table shows the configuration for each connection to the source of the replication:

mysql> SELECT * FROM replication_connection_configuration\G *************************** 1. row *************************** CHANNEL_NAME: relay HOST: 127.0.0.1 PORT: 3308 USER: replication NETWORK_INTERFACE: AUTO_POSITION: 1 SSL_ALLOWED: YES SSL_CA_FILE: SSL_CA_PATH: SSL_CERTIFICATE: SSL_CIPHER: SSL_KEY: SSL_VERIFY_SERVER_CERTIFICATE: NO SSL_CRL_FILE: SSL_CRL_PATH: CONNECTION_RETRY_INTERVAL: 60 CONNECTION_RETRY_COUNT: 86400 HEARTBEAT_INTERVAL: 30.000 TLS_VERSION: PUBLIC_KEY_PATH: GET_PUBLIC_KEY: NO *************************** 2. row *************************** CHANNEL_NAME: source_2 HOST: 127.0.0.1 PORT: 3307 USER: replication NETWORK_INTERFACE: AUTO_POSITION: 1 SSL_ALLOWED: YES SSL_CA_FILE: SSL_CA_PATH: SSL_CERTIFICATE: SSL_CIPHER: SSL_KEY: SSL_VERIFY_SERVER_CERTIFICATE: NO SSL_CRL_FILE: SSL_CRL_PATH: CONNECTION_RETRY_INTERVAL: 60 CONNECTION_RETRY_COUNT: 86400 HEARTBEAT_INTERVAL: 30.000 TLS_VERSION: PUBLIC_KEY_PATH: GET_PUBLIC_KEY: NO 2 rows in set (0.01 sec)

replication_connection_status

The replication_connection_status table shows the status of each connection:

mysql> SELECT * FROM replication_connection_status\G *************************** 1. row *************************** CHANNEL_NAME: relay GROUP_NAME: SOURCE_UUID: 7616e9d1-c868-11e8-92f0-080027effed8 THREAD_ID: 54 SERVICE_STATE: ON COUNT_RECEIVED_HEARTBEATS: 28 LAST_HEARTBEAT_TIMESTAMP: 2018-10-05 21:03:49.684895 RECEIVED_TRANSACTION_SET: 81c777c0-c86f-11e8-9031-080027effed8:28-81 LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_QUEUED_TRANSACTION: 81c777c0-c86f-11e8-9031-080027effed8:81 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2018-10-05 21:07:52.286116 LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 2018-10-05 21:08:10.692561 LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2018-10-05 21:08:10.835992 LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2018-10-05 21:08:10.842133 QUEUEING_TRANSACTION: QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000 *************************** 2. row *************************** CHANNEL_NAME: source_2 GROUP_NAME: SOURCE_UUID: 2165e6e2-c870-11e8-8818-080027effed8 THREAD_ID: 58 SERVICE_STATE: ON COUNT_RECEIVED_HEARTBEATS: 47 LAST_HEARTBEAT_TIMESTAMP: 2018-10-05 21:08:12.589150 RECEIVED_TRANSACTION_SET: 2165e6e2-c870-11e8-8818-080027effed8:28-39 LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000 LAST_QUEUED_TRANSACTION: 2165e6e2-c870-11e8-8818-080027effed8:39 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2018-10-05 20:52:12.422129 LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 2018-10-05 20:52:12.422129 LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2018-10-05 20:52:12.486156 LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2018-10-05 20:52:12.486263 QUEUEING_TRANSACTION: QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000 QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000 2 rows in set (0.02 sec)

Similar to the applier timestamps, the connection timestamps can be used to check the lag caused by the connection thread fetching the transactions from its source’s binary log and writing them to its own relay log. For the relay channel, it took around 0.14 seconds from when the transaction was committed on the immediate source (the Relay instance) until the connection thread started to write the transaction to the relay log (LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP), and then a further 0.006 seconds to complete the write.
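Again as a sketch, the same kind of computation can be expressed as a query against the columns shown above:

mysql> SELECT CHANNEL_NAME,
              TIMESTAMPDIFF(MICROSECOND,
                            LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP,
                            LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP
              ) / 1000000 AS QUEUE_LAG_SECONDS
         FROM performance_schema.replication_connection_status;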

Conclusion

The replication related tables in the Performance Schema, particularly the MySQL 8 versions of them, provide a very useful way to get information about the replication configuration and status. As a support engineer myself, I look forward to having access to the new timestamp values when I investigate replication delays.

There is still some work remaining. For example, it could be useful to know when the transaction started executing on the original source. That way it would be possible to compare the execution time between a source and a replica. However, the information that is available now is already a great improvement.

Shinguz: MariaDB/MySQL Training Dates for 2019

The last MariaDB and MySQL training dates for 2019 have now also been set and published.

With our three training partners in Essen, Cologne and Berlin, FromDual currently offers a total of 12 public training courses on MariaDB and MySQL.

They are:


These training courses are held in German. On request, courses can also be offered in English.

What is left for 2018?

Five more training courses will take place in 2018. All of them are guaranteed to run!

They are:


A few places are still available in these courses. Have you booked your MariaDB/MySQL training for 2018 yet?

If you have any questions about the MariaDB or MySQL training courses, the FromDual training team will be happy to help!


The Importance of mysqlbinlog –version

When deciding on your backup strategy, one of the key components for Point In Time Recovery (PITR) will be the binary logs. Thankfully, the mysqlbinlog command allows you to easily take binary log backups, including those that would otherwise be encrypted on disk using encrypt_binlog=ON.

When mysqlbinlog is used with --raw --read-from-remote-server --stop-never --verify-binlog-checksum, it will retrieve binary logs from whichever master it is pointed to, and store them locally on disk in the same format as they were written on the master. Here is an example with the extra arguments that would normally be used:

/usr/bin/mysqlbinlog --raw --read-from-remote-server \
  --stop-never --connection-server-id=1234 \
  --verify-binlog-checksum \
  --host=localhost --port=3306 mysql-bin.000024

This would retrieve the localhost binary logs (starting from mysql-bin.000024) reporting as server_id 1234, verify the checksum and then write each of them to disk.

Changes to the mysqlbinlog source code are relatively infrequent, except for when developing for a new major version, so you may be fooled into thinking that the specific version that you use is not so important, a little like the client version. However, the mysqlbinlog version is more likely to differ from the server version when you are taking remote backups.

Here is the result from the 5.7 branch of mysql-server to show the history of commits by year:

$ git blame --line-porcelain client/mysqlbinlog.cc | \
    egrep "^(committer-time|committer-tz)" | cut -f2 -d' ' | \
    while read ct; do
      read ctz
      date --date "Jan 1, 1970 00:00:00 ${ctz} + ${ct} seconds" --utc +%Y
    done | sort -n | uniq -c
    105 2000
     52 2001
     52 2002
    103 2003
    390 2004
    113 2005
     58 2006
    129 2007
    595 2008
     53 2009
    349 2010
    151 2011
    382 2012
    191 2013
    308 2014
    404 2015
     27 2016
     42 2017
     15 2018

Since the first GA release of 5.7 (October 2015), there haven’t been too many bugs and so if you aren’t using new features then you may think that it is OK to keep using the same version as before:

$ git log --regexp-ignore-case --grep bug --since="2015-10-19" --oneline client/mysqlbinlog.cc
1ffd7965a5e Bug#27558169 BACKPORT TO 5.7 BUG #26826272: REMOVE GCC 8 WARNINGS [noclose]
17c92835bb3 Bug #24674276 MYSQLBINLOG -R --HEXDUMP CRASHES FOR INTVAR, USER_VAR, OR RAND EVENTS
052dbd7b079 BUG#26878022 MYSQLBINLOG: ASSERTION `(OLD_MH->M_KEY == KEY) || (OLD_MH->M_KEY == 0)' FAILED
543129a577c BUG#26878022 MYSQLBINLOG: ASSERTION `(OLD_MH->M_KEY == KEY) || (OLD_MH->M_KEY == 0)' FAILED
ba1a99c5cd7 Bug#26825211 BACKPORT FIX FOR #25643811 TO 5.7
1f0f4476b28 Bug#26117735: MYSQLBINLOG READ-FROM-REMOTE-SERVER NOT HONORING REWRITE_DB FILTERING
12508f21b63 Bug #24323288: MAIN.MYSQLBINLOG_DEBUG FAILS WITH A LEAKSANITIZER ERROR
e8e5ddbb707 Bug#24609402 MYSQLBINLOG --RAW DOES NOT GET ALL LATEST EVENTS
22eec68941f Bug#23540182:MYSQLBINLOG DOES NOT FREE THE EXISTING CONNECTION BEFORE OPENING NEW REMOTE ONE
567bb732bc0 Bug#22932576 MYSQL5.6 DOES NOT BUILD ON SOLARIS12
efc42d99469 Bug#22932576 MYSQL5.6 DOES NOT BUILD ON SOLARIS12
6772eb52d66 Bug#21697461 MEMORY LEAK IN MYSQLBINLOG

However, this is not always the case and some issues are more obvious than others! To help show this, here are a couple of the issues that you might happen to notice.

Warning: option ‘stop-never-slave-server-id’: unsigned value <xxxxxxxx> adjusted to <yyyyy>

The server_id that is used by a server in a replication topology should always be unique within the topology. One of the easy ways to ensure this is to use a conversion of the external IPv4 address to an integer, such as INET_ATON, which provides you with an unsigned integer.
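As a quick illustration (the IP address here is just a documentation-range example):

mysql> SELECT INET_ATON('192.0.2.123') AS server_id;
+------------+
| server_id  |
+------------+
| 3221226107 |
+------------+
1 row in set (0.00 sec)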

The introduction of --connection-server-id (which deprecates --stop-never-slave-server-id) changes the behaviour here (for the better). Prior to this you may experience warnings where your server_id was cast to the equivalent of an UNSIGNED SMALLINT. This didn’t seem to be a reported bug, just fixed as a by-product of the change.

ERROR: Could not find server version: Master reported unrecognized MySQL version ‘xxx’

When running mysqlbinlog, the version of MySQL is checked so that the event format is set correctly. Here is the code from MySQL 5.7:

switch (*version) {
  case '3':
    glob_description_event= new Format_description_log_event(1);
    break;
  case '4':
    glob_description_event= new Format_description_log_event(3);
    break;
  case '5':
    /*
      The server is soon going to send us its Format_description log
      event, unless it is a 5.0 server with 3.23 or 4.0 binlogs.
      So we first assume that this is 4.0 (which is enough to read the
      Format_desc event if one comes).
    */
    glob_description_event= new Format_description_log_event(3);
    break;
  default:
    glob_description_event= NULL;
    error("Could not find server version: "
          "Master reported unrecognized MySQL version '%s'.", version);
    goto err;
}

This section of the code last changed in 2008, but of course there is another vendor that no longer uses a 5-prefixed-version number: MariaDB. With MariaDB, it is impossible to take a backup without using a MariaDB version of the program, as you are told that the version is unrecognised. The MariaDB source code contains a change to this section to resolve the issue when the version was bumped to 10:

83c02f32375b client/mysqlbinlog.cc (Michael Widenius    2012-05-31 22:39:11 +0300 1900) case 5:
83c02f32375b client/mysqlbinlog.cc (Michael Widenius    2012-05-31 22:39:11 +0300 1901) case 10:

Interestingly, MySQL 8.0 gets a little closer to not producing an error (although it still does), but finally sees off those rather old ancestral relatives:

switch (*version) {
  case '5':
  case '8':
  case '9':
    /*
      The server is soon going to send us its Format_description log
      event.
    */
    glob_description_event = new Format_description_log_event;
    break;
  default:
    glob_description_event = NULL;
    error(
        "Could not find server version: "
        "Master reported unrecognized MySQL version '%s'.",
        version);
    goto err;
}

These are somewhat trivial examples. In fact, you are more likely to suffer from more serious differences, perhaps ones that do not become immediately apparent, if you do not match the mysqlbinlog version to the version of the server producing the binary logs.

Sadly, it is not so easy to check the versions as the reported version was seemingly left unloved for quite a while (Ver 3.4), so you should check the binary package versions (e.g. using Percona-Server-client-57-5.7.23 with Percona-Server-server-57-5.7.23). Thankfully, the good news is that MySQL 8.0 fixes it!

So reduce the risk and match your package versions!

Log Buffer #555: A Carnival of the Vanities for DBAs

This Log Buffer Edition covers Cloud, Oracle and MySQL.

Cloud:

Cloud Directory is a neglected serverless data store that deserves much more attention.

Many of our large enterprise customers have this worry hanging over their heads … what are they going to do about that mainframe when they migrate?

Miners Extract More Value from the Cloud When They Transform Their Cultures

7 steps to keep your Windows Servers Patched with AWS Systems Manager

No application is an island—each one has dependencies and exists alongside other services, sharing resources and data.

Oracle:

An example gitignore file for Nodejs projects to ensure that local environment variables, build-related output and modules are not committed to the git repository.

Jonathan Lewis writes about Column Group Catalog.

Franck Pachot blogs about Oracle write-consistency bug and multi-thread de-queuing.

Mohammed Azar discusses the issues which may arise when we try to rebuild the forms or reports.

MySQL:

Change management is hard. In everyday production, there are numerous factors working against embracing change. Limited preparation time and a whole new show = a whole new crew, innumerable planning variables, and the challenge of driving an operational plan based on creative instincts.

Kathy Forte shows you how to connect to the same Docker instance using the MySQL Shell, a tool for using the Document Store and for creating InnoDB Clusters.

Percona XtraDB Cluster 5.7.23-31.31 Is now available.

A query plan uses a loose index scan if “Using index for group-by” appears in the “Extra” column of the EXPLAIN output. In some plans though, “Using index for group-by (scanning)” appears. What does “(scanning)” mean and how is it different from the regular loose index scan?

Chien Tran on debugging a slow WordPress site.

Setting up MySQL Group Replication with MySQL Docker images

MySQL Group Replication (GR) is a MySQL Server plugin that enables you to create elastic, highly-available, fault-tolerant replication topologies. Groups can operate in a single-primary mode with automatic primary election, where only one server accepts updates at a time. Alternatively, groups can be deployed in multi-primary mode, where all servers can accept updates, even if they are issued concurrently.…
