Planet MySQL

Mydbops delighted to be part of Open Source India 2017

Open Source India (OSI) is the premier Open Source conference in Asia targeted at nurturing and promoting the Open Source ecosystem in the subcontinent. Started as Linux Asia in 2004, OSI has been at the helm of bringing together the Open Source industry and the community since its inception.

The objective of OSI is to make the case for adopting Open Source software on its merits, not merely because it is free of cost. OSI emphasizes quantitative measures (such as use case experiments and case studies) to show why using OSS is, in many circumstances, a reasonable or even superior approach.

It is one of the most prominent open source conferences in Asia. In the past we have seen strong participation from the open source database community, including Oracle MySQL, MongoDB, PostgreSQL and others.

Mydbops:

Mydbops is an associate sponsor this year (2017). Mydbops founder Karthik P R is also presenting on the database day (13-10-2017), on the topic “Percona XtraDB Cluster”. His talk focuses on its architecture, its latest improvements in MySQL 5.7, and seamless integrations.

Talks From Oracle MySQL:

  1. State of Dolphin – MySQL 8.0 and beyond (Sanjay Manwani, MySQL India Director, Oracle)
  2. MySQL InnoDB Cluster and Group Replication (Sujatha Sivakumar & Madhusudhan Joshi, Oracle)

Talks From MongoDB:

  1. Accelerating the path to Digital with Cloud Data Strategy (Gagan Mehra, Director, Information Strategy, MongoDB)



MySQL backup, recovery and provisioning within a Continuent Tungsten Cluster

Join us for this training session where we discuss tools for backing up a MySQL database within Tungsten, and how Tungsten makes it easy to re-provision databases and recover a cluster. Tuesday, 10/17 at 9:00 am PT / 12:00 pm ET. Sign up today. In this session we will cover:

  • Methods and tools for taking a backup
  • Verifying that the backup contains the last binary log position

Percona Monitoring and Management 1.3.2 Is Now Available

Percona announces the release of Percona Monitoring and Management 1.3.2. This release only contains bug fixes related to usability.

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

Bug fixes
  • PMM-1529: When the user selected “Today”, “This week”, “This month” or “This year” range in Metrics Monitor and clicked the Query Analytics button, the QAN page opened reporting no data for the selected range even if the data were available.
  • PMM-1528: In some cases, the page not found error could appear instead of the QAN page after upgrading by using the Upgrade button.
  • PMM-1498 : In some cases, it was not possible to shut down the virtual machine containing the PMM Server imported as an OVA image.

Other bug fixes in this release: PMM-913, PMM-1215, PMM-1481, PMM-1483, PMM-1507


Instant ADD COLUMN for InnoDB

Marko Mäkelä, Wed, 10/11/2017 - 11:02

MariaDB Server 10.3.2 alpha was released this week. For InnoDB, the new features coming in MariaDB Server 10.3 include CREATE SEQUENCE which is a logical continuation of the Persistent AUTO_INCREMENT that appeared in MariaDB Server 10.2.

Perhaps one of the most important InnoDB changes coming in MariaDB Server 10.3 is Instant ADD COLUMN for InnoDB tables. The design was brainstormed in April by engineers from MariaDB Corporation, Alibaba and Tencent. A prototype was developed by Vin Chen (陈福荣) from the Tencent Game DBA Team.

What is so special about this? Normally, adding a column to a table requires the full table to be rebuilt. The complexity of the operation is proportional to the size of the table, or O(n·m) where n is the number of rows in the table and m is the number of indexes. Sure, with my online ALTER TABLE for InnoDB tables in MySQL 5.6 and MariaDB Server 10.0, you would be able to modify the table while it is being rebuilt, but it would significantly increase the I/O and memory consumption and cause a replication lag. With instant ADD COLUMN, all that is needed is an O(log n) operation to insert a special hidden record into the table, and an update of the data dictionary. For a large table, instead of taking several hours, the operation would be completed in the blink of an eye. The ADD COLUMN operation is only slightly more expensive than a regular INSERT, due to locking constraints.

In the past, some developers may have implemented a kind of instant add column in the application by encoding multiple columns in a single TEXT or BLOB column. MariaDB Dynamic Columns was an early example of that. A more recent example is JSON and related string manipulation functions.

Adding real columns has the following advantages over encoding columns into a single expandable column:

  • Efficient storage in a native binary format
  • Data type safety
  • Indexes can be built natively
  • Constraints are available: UNIQUE, CHECK, FOREIGN KEY
  • DEFAULT values can be specified
  • Triggers can be written more easily

With instant ADD COLUMN, you can enjoy all the benefits of structured storage without the drawback of having to rebuild the table.

Instant ADD COLUMN is available for both old and new InnoDB tables. Basically you can just upgrade from MySQL 5.x or MariaDB and start adding columns instantly.


Instant ADD COLUMN only applies when the added columns appear last in the table. The place specifier LAST is the default. If AFTER col is specified, then col must be the last column, or the operation will require the table to be rebuilt.
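As a sketch of the placement rule (table and column names here are made up for illustration):

```sql
-- Hypothetical table for illustration only.
CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;

-- Instant: the new column is appended last (LAST is the default place specifier).
ALTER TABLE t1 ADD COLUMN c INT;

-- Also instant: AFTER names what is currently the last column.
ALTER TABLE t1 ADD COLUMN d INT AFTER c;

-- Not instant: placing the new column before existing ones rebuilds the table.
ALTER TABLE t1 ADD COLUMN e INT AFTER a;
```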

If the table contains a hidden FTS_DOC_ID column due to FULLTEXT INDEX, then instant ADD COLUMN will not be possible.

If the table becomes empty (either via TRUNCATE or DELETE), the table will be converted to the old non-instant format.

Naturally, the operation is crash-safe. If the server is killed while instant ADD COLUMN was executing, the operation will be rolled back. If it was the very first instant ADD COLUMN operation, the table will be restored to the old non-instant format.

InnoDB data files after instant ADD COLUMN cannot be imported to older versions of MariaDB or MySQL. Any table-rebuilding operation such as ALTER TABLE…FORCE will convert the table to the old format.

For technical reasons, instant ADD COLUMN is not available for ROW_FORMAT=COMPRESSED, which is one of the formats that I created before MySQL 5.5.

A simple example of instant ADD COLUMN:

CREATE TABLE t(id INT PRIMARY KEY, u INT UNSIGNED NOT NULL UNIQUE) ENGINE=InnoDB;
INSERT INTO t(id,u) VALUES(1,1),(2,2),(3,3);
ALTER TABLE t ADD COLUMN
(d DATETIME DEFAULT current_timestamp(),
 p POINT NOT NULL DEFAULT ST_GeomFromText('POINT(0 0)'),
 t TEXT CHARSET utf8 DEFAULT 'The quick brown fox jumps over the lazy dog');
UPDATE t SET t=NULL WHERE id=3;
SELECT id,u,d,ST_AsText(p),t FROM t;
SELECT variable_value FROM information_schema.global_status
WHERE variable_name = 'innodb_instant_alter_column';

The above example illustrates that when the added columns are declared NOT NULL, a DEFAULT value must be available, either implied by the data type or set explicitly by the user. The expression need not be constant, but it must not refer to the columns of the table, such as DEFAULT u+1 (a MariaDB extension). The DEFAULT current_timestamp() would be evaluated at the time of the ALTER TABLE and apply to each row, like it does for non-instant ALTER TABLE. If a subsequent ALTER TABLE changes the DEFAULT value for subsequent INSERT, the values of the columns in existing records will naturally be unaffected.

You can download MariaDB Server 10.3.2 here. Note that MariaDB Server 10.3.2 is an alpha release. Please do not use it in production, but feel free to test it.

For now, ALTER TABLE…DROP COLUMN will require the table to be rebuilt. We are proud of the exciting contributions from the robust MariaDB community. Stay tuned for new improvements coming to MariaDB!


Alexey Kopytov

Sat, 10/14/2017 - 00:33


Hi Marko,

I wonder if there is a way to control the instant ADD COLUMN feature. Would ALGORITHM=COPY force the old ADD COLUMN behavior?


How to Install LibreNMS Monitoring Tool with Nginx on CentOS 7

LibreNMS is an open source monitoring tool based on PHP/MySQL/SNMP. It's a fully featured network monitoring system that supports a wide range of network hardware and operating systems, including FreeBSD, Cisco, Linux, HP, and more. In this tutorial, we will show you how to install and configure LibreNMS using Nginx as the web server and MariaDB as the database.

MySQL Enterprise Monitor 3.3.6 has been released

We are pleased to announce that MySQL Enterprise Monitor 3.3.6 is now available for download on the My Oracle Support (MOS) web site. This is a maintenance release that includes a few new features and fixes a number of bugs. You can find more information on the contents of this release in the change log.

You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet.

Important: MySQL Enterprise Monitor (MEM) 4.0 offers many significant improvements over MEM 3.3 and 3.4, and we highly recommend that you consider upgrading. More information on MEM 4.0 is available here:

Please open a bug or a ticket on My Oracle Support to report problems, request features, or give us general feedback about how this release meets your needs.

If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to, or contact Sales at

Thanks and Happy Monitoring!

- The MySQL Enterprise Tools Development Team

Useful URLs

MySQL Enterprise Monitor 3.2.10 has been released

We are pleased to announce that MySQL Enterprise Monitor 3.2.10 is now available for download on the My Oracle Support (MOS) web site. This is a maintenance release that includes a few new features and fixes a number of bugs. You can find more information on the contents of this release in the change log.

You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet.

Important: MySQL Enterprise Monitor (MEM) 4.0 offers many significant improvements over MEM 3.2, 3.3, and 3.4, and we highly recommend that you consider upgrading. More information on MEM 4.0 is available here:

Please open a bug or a ticket on My Oracle Support to report problems, request features, or give us general feedback about how this release meets your needs.

If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to, or contact Sales at

Thanks and Happy Monitoring!

- The MySQL Enterprise Tools Development Team

Useful URLs

Webinar Wednesday, October 11, 2017: Percona Monitoring and Management (PMM) Demonstration

Join Percona’s Product Manager Michael Coburn as he presents a Percona Monitoring and Management (PMM) Demonstration on Wednesday, October 11, 2017, at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Register Now

How can you optimize database performance if you can’t see what’s happening? Percona Monitoring and Management (PMM) is a free, open source database troubleshooting and performance optimization platform for MySQL and MongoDB. PMM uses Metrics Monitor (Grafana + Prometheus) for visualization of data points. It also has Query Analytics (QAN), to help identify and quantify non-performant queries and provide thorough time-based analysis to ensure that your data works as efficiently as possible.

Michael Coburn will provide a brief demo of PMM. He will also cover newly released features in PMM such as QAN for MongoDB, new MyRocks dashboards and tooltips for metrics monitoring.

By the end of the webinar you will have a better understanding of how to:

  • Observe database performance from a system and service metrics perspective
  • See database performance from the queries executing in MySQL and MongoDB
  • Leverage the metrics and queries from PMM to make informed decisions about crucial database resources: scaling your database tier, database resource utilization and management, and having your team focus on the most critical database events

Register for the webinar here.

Michael Coburn, Principal Technical Account Manager

Michael joined Percona as a Consultant in 2012. He progressed through various roles including Managing Consultant, Principal Architect, Technical Account Manager, and Technical Support Engineer. He is now leading Product Management for Percona Monitoring and Management.


Expert Insights To Slay Problems And Be More Awesome Faster

We’ve improved VividCortex to help you focus on what matters most in your data tier’s performance, configuration, and behavior. Now it’s easier for you to be more awesome for your team and your customers, without needing to actually be an expert!

Navigate to the Health section of the app (clicky clicky), and the first thing you’ll see is a comprehensive set of recommendations for anything amiss in your configuration, queries, and more. This applies both to the database and to the operating system. Our guiding principle is: we’ll show you what you need to know, what you need to do, and how to do it (as much as possible).

You can filter, mute, and drill into any of the recommendations to learn more. These expert insights are based on our own years of combined experience, as well as what you, our customers, have taught us. The result is instant focus on what’s most important to know and do.

In our testing with customers, it’s caught tons and tons of problems they didn’t know about, or thought they’d already solved. “Hey, we still have the wrong filesystem on db-28? We just fixed that on all 50 clusters, how did we miss that one? And why does mongo-prod-12 have completely open auth settings?”

The recommendations are based on a comprehensive and flexible set of heuristics and pattern-matching systems that inspect configuration, status, metrics, query-level behavior, network packet contents, EXPLAIN plans, security settings, and much more. They’re specific and nuanced for each technology we support: MySQL, PostgreSQL, MongoDB, and the Linux OS.

These features are generally available to all customers immediately. Don’t see what you expect? Found a problem we didn’t show you? Tell us. We’ll make the system smarter and everyone will benefit automatically!

[Updated] Monitoring Galera Cluster for MySQL or MariaDB - Understanding metrics and their meaning

To operate any database efficiently, you need to have insight into database performance. This might not be obvious when everything is going well, but as soon as something goes wrong, access to information can be instrumental in quickly and correctly diagnosing the problem.

All databases make some of their internal status data available to users. In MySQL, you can get this data mostly by running 'SHOW STATUS' and 'SHOW GLOBAL STATUS', by executing 'SHOW ENGINE INNODB STATUS', checking information_schema tables and, in newer versions, by querying performance_schema tables.
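For reference, a few of the statements mentioned above (the information_schema.GLOBAL_STATUS form works on MariaDB and older MySQL; MySQL 5.7 moved this data to performance_schema.global_status):

```sql
SHOW GLOBAL STATUS LIKE 'wsrep%';   -- Galera counters among the server status
SHOW ENGINE INNODB STATUS\G         -- InnoDB internals dump

-- The same counters via a queryable table:
SELECT variable_name, variable_value
FROM information_schema.global_status
WHERE variable_name LIKE 'wsrep%';
```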

These methods are far from convenient in day-to-day operations, hence the popularity of different monitoring and trending solutions. Tools like Nagios/Icinga are designed to watch hosts/services, and alert when a service falls outside an acceptable range. Other tools such as Cacti and Munin provide a graphical look at host/service information, and give historical context to performance and usage. ClusterControl combines these two types of monitoring, so we’ll have a look at the information it presents, and how we should interpret it.

If you’re using Galera Cluster (MySQL Galera Cluster by Codership or MariaDB Cluster or Percona XtraDB Cluster), you may have noticed the following section in ClusterControl’s "Overview" tab:

Let’s see, step by step, what kind of data we have here.

The first column contains the list of nodes with their IP addresses - there’s not much else to say about it.

The second column is more interesting - it describes the node status (the wsrep_local_state_comment status variable). A node can be in different states:

  • Initialized - The node is up and running, but it’s not a part of a cluster. It can be caused, for example, by network issues;
  • Joining - The node is in the process of joining the cluster and it’s either receiving or requesting a state transfer from one of other nodes;
  • Donor/Desynced - The node serves as a donor to some other node which is joining the cluster;
  • Joined - The node has joined the cluster but is busy catching up on committed write sets;
  • Synced - The node is working normally.

In the same column, within brackets, is the cluster status (the wsrep_cluster_status status variable). It can have three distinct states:

  • Primary - The communication between nodes is working and quorum is present (majority of nodes is available)
  • Non-Primary - The node was a part of the cluster but, for some reason, it lost contact with the rest of the cluster. As a result, this node is considered inactive and it won’t accept queries
  • Disconnected - The node could not establish group communication.

"WSREP Cluster Size / Ready" tells us the cluster size as the node sees it, and whether the node is ready to accept queries. Non-Primary components form a cluster with a size of 1, and wsrep readiness is OFF.

Let’s take a look at the screenshot above, and see what it is telling us about Galera. We can see three nodes. Two of them are perfectly fine: they are both "Synced" and the cluster is in "Primary" state. The cluster currently consists of two nodes. The third node is "Initialized" and forms a "non-Primary" component. This means the node lost connection with the cluster - most likely due to some kind of network issue (in fact, we used iptables to block traffic to this node from the other two).

At this moment we have to stop for a bit and describe how Galera Cluster works internally. We won't go into too much detail, as that is not within the scope of this blog post, but some knowledge is required to understand the importance of the data presented in the next columns.

Galera is a "virtually" synchronous, multi-master cluster. This means that you should expect data to be transferred across nodes "virtually" at the same time (no more annoying issues with lagging slaves) and that you can write to any node in the cluster (no more annoying issues with promoting a slave to master). To accomplish that, Galera uses writesets - atomic sets of changes that are replicated across the cluster. A writeset can contain several row changes and additional needed information, such as data regarding locking.

Once a client issues COMMIT, but before MySQL actually commits anything, a writeset is created and sent to all nodes in the cluster for certification. All nodes check whether it’s possible to commit the changes or not (as changes may interfere with other writes executed, in the meantime, directly on another node). If yes, data is actually committed by MySQL, if not, rollback is executed.

What’s important to remember is the fact that nodes, similar to slaves in regular replication, may perform differently - some may have better hardware than others, some may be more loaded than others. Yet Galera requires them to process the writesets in a short and quick manner, in order to maintain "virtual" synchronization. There has to be a mechanism which can throttle the replication and allow slower nodes to keep up with the rest of the cluster.

Let's take a look at the "Local Send Q [now/avg]" and "Local Receive Q [now/avg]" columns. Each node has a local queue for sending and receiving writesets. These queues make it possible to parallelize some of the writes, and to queue data which couldn't be processed at once if a node cannot keep up with the traffic. In SHOW GLOBAL STATUS we can find eight counters describing both queues, four counters per queue:

  • wsrep_local_send_queue - current state of the send queue
  • wsrep_local_send_queue_min - minimum since FLUSH STATUS
  • wsrep_local_send_queue_max - maximum since FLUSH STATUS
  • wsrep_local_send_queue_avg - average since FLUSH STATUS
  • wsrep_local_recv_queue - current state of the receive queue
  • wsrep_local_recv_queue_min - minimum since FLUSH STATUS
  • wsrep_local_recv_queue_max - maximum since FLUSH STATUS
  • wsrep_local_recv_queue_avg - average since FLUSH STATUS
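On any node, these counters can be pulled directly (the min/max/avg values accumulate until they are reset):

```sql
SHOW GLOBAL STATUS LIKE 'wsrep_local_send_queue%';
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue%';

-- Start a fresh measurement window for the min/max/avg counters:
FLUSH STATUS;
```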

The above metrics are unified across nodes under ClusterControl -> Performance -> DB Status:

ClusterControl displays "now" and "average" counters, as they are the most meaningful as a single number (you can also create custom graphs based on variables describing the current state of the queues). When we see that one of the queues is rising, it means the node can't keep up with the replication, and other nodes will have to slow down to allow it to catch up. We'd recommend investigating the workload of that node - check the process list for long running queries, and check OS statistics like CPU utilization and I/O workload. Maybe it's also possible to redistribute some of the traffic from that node to the rest of the cluster.

"Flow Control Paused" shows the fraction of time a given node had to pause its replication because of too heavy a load. When a node can't keep up with the workload, it sends Flow Control packets to other nodes, informing them that they should throttle down on sending writesets. In our screenshot, the slow node shows a value of ‘0.30’: almost 30% of the time this node had to pause replication because it wasn't able to keep up with the writeset certification rate required by the other nodes (or, put simply, too many writes hit it!). As we can see, its "Local Receive Q [avg]" also points to this fact.

The next column, "Flow Control Sent", gives us information about how many Flow Control packets a given node sent to the cluster. Again, it confirms which node is slowing down the cluster.
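Both flow-control counters are ordinary Galera status variables, so they can be checked per node; a quick sketch:

```sql
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('wsrep_flow_control_paused', 'wsrep_flow_control_sent', 'wsrep_flow_control_recv');
```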

What can we do with this information? Mostly, we should investigate what’s going on in the slow node. Check CPU utilization, check I/O performance and network stats. This first step helps to assess what kind of problem we are facing.

In this case, once we switch to the CPU Usage tab, it becomes clear that excessive CPU utilization is causing our issues. The next step would be to identify the culprit by looking into the PROCESSLIST (Query Monitor -> Running Queries -> filter by the slow node) to check for offending queries:

Or, check processes on the node from the operating system’s side (Nodes -> Top) to see if the load is not caused by something outside of Galera/MySQL.

In this case, we ran the mysqld command through cpulimit to simulate slow CPU specifically for the mysqld process, limiting it to 30% out of the 400% available CPU (the server has 4 cores).

The "Cert Deps Distance" column gives us information about how many writesets, on average, can be applied in parallel. Writesets can sometimes be executed at the same time - Galera takes advantage of this by using multiple wsrep_slave_threads to apply writesets. This column gives you some idea of how many slave threads you could use for your workload. It's worth noting that there's no point in setting the wsrep_slave_threads variable higher than the value you see in this column, or in the wsrep_cert_deps_distance status variable on which the "Cert Deps Distance" column is based. Another important note: there is no point in setting wsrep_slave_threads to more than the number of cores your CPU has, either.
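In practice, that tuning can be sketched as follows (wsrep_slave_threads is dynamic in Galera, so no restart is needed; the value 4 is just an example for a 4-core box):

```sql
-- Observed average parallelism for this workload:
SHOW GLOBAL STATUS LIKE 'wsrep_cert_deps_distance';

-- Cap the applier threads at min(cert deps distance, CPU cores):
SET GLOBAL wsrep_slave_threads = 4;
```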

"Segment ID" - this column will require some more explanation. Segments are a new feature added in Galera 3.0. Before this version, writesets were exchanged between all nodes. Let’s say we have two datacenters:

This kind of chatter works ok on local networks but WAN is a different story - certification slows down due to increased latency, additional costs are generated because of network bandwidth used for transferring writesets between every member of the cluster.

With the introduction of "Segments", things changed. You can assign a node to a segment by modifying the wsrep_provider_options variable and adding "gmcast.segment=x" (0, 1, 2, ...) to it. Nodes with the same segment number are treated as if they were in the same datacenter, connected by a local network. Our graph then becomes different:
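For example, nodes in a second datacenter might carry (segment numbers are arbitrary small integers; 0 is the default):

```ini
[mysqld]
wsrep_provider_options = "gmcast.segment=1"
```

Note that wsrep_provider_options typically carries several options separated by semicolons, so in a real configuration the segment setting is appended to the existing value.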

The main difference is that it is no longer everyone-to-everyone communication. Within each segment, yes - it's still the same mechanism - but the two segments communicate only through a single connection between two chosen nodes. In case of downtime, this connection will fail over automatically. As a result, we get less network chatter and less bandwidth usage between remote datacenters. So, basically, the "Segment ID" column tells us which segment a node is assigned to.

Related resources:
  • Galera Cluster for MySQL - Tutorial
  • How To Monitor MySQL or MariaDB Galera Cluster / Percona XtraDB Cluster
  • All our Galera Cluster blogs

"Last Committed" column gives us information about the sequence number of the writeset that was last executed on a given node. It can be useful in determining which node is the most current one if there’s a need to bootstrap the cluster.

The rest of the columns are self-explanatory: server version, node uptime, and when the status was last updated.

As you can see, the "Galera Nodes" section of the "Nodes/Hosts Stats" in the "Overview" tab gives you a pretty good understanding of the cluster’s health - whether it forms a "Primary" component, how many nodes are healthy, are there any performance issues with some nodes and if yes, which node is slowing down the cluster.

This set of data comes in very handy when you operate your Galera cluster, so hopefully, no more flying blind :-)

Tags:  galera cluster MariaDB MySQL mariadb cluster mysql cluster monitoring metrics

MySQL, Percona Server for MySQL and MariaDB Default Configuration Differences

In this blog post, I’ll discuss some of the MySQL and MariaDB default configuration differences, focusing on MySQL 5.7 and MariaDB 10.2. Percona Server for MySQL uses the same defaults as MySQL, so I will not list them separately.

MariaDB Server is a general purpose open source database, created by the founders of MySQL. MariaDB Server (referred to as MariaDB for brevity) has similar roots as Percona Server for MySQL, but is quickly diverging from MySQL compatibility and growing on its own. MariaDB has become the default installation for several operating systems (such as Red Hat Enterprise Linux/CentOS/Fedora). Changes in the default variables can make a large difference in the out-of-box performance of the database, so knowing what is different is important.

As MariaDB grows on its own and doesn’t remain 100% compatible with MySQL, the default configuration settings might not mean everything or behave the way they used to. It might use different variable names, or implement the same variables in new ways. You also need to take into account that MariaDB uses its own Aria storage engine, which has many configuration options that do not exist in MySQL.

Note: In this blog, I am looking at variables common to both MySQL and MariaDB but with different defaults, not variables that are specific to either MySQL or MariaDB (except for the different switches inside optimizer_switch).

Binary Logs

  Variable        MariaDB Default   MySQL Default
  sync_binlog     0                 1
  binlog_format   Mixed             Row


MySQL has taken a more conservative stance when it comes to the binary log. In the newest versions of MySQL 5.7, they have updated two variables to help ensure all committed data remains intact and identical. binlog_format was updated to ROW in MySQL in order to prevent non-deterministic statements from having different results on the slave. Row-based replication also helps when performing a lot of smaller updates. MariaDB defaults to the Mixed format, which uses statement-based replication unless certain criteria are met; in that case, it uses the row format. You can see the detailed criteria for when the row format is used here:

The other difference that can have a significant impact on performance is sync_binlog. sync_binlog controls the number of commit groups to collect before synchronizing the binary log to disk. MySQL has changed this to 1, which means that every transaction is flushed to disk before it is committed. This guarantees that there can never be a committed transaction that is not recorded (even during a system failure). This can have a big impact on performance, as shown by Roel Van de Paar in his blog:

MariaDB utilizes a value of 0 for sync_binlog, which allows the operating system to determine when the binlog needs to be flushed. This provides better performance, but adds the risk that if MariaDB crashes (or power is lost) that some data may be lost.
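The binary-log differences above, written out as an explicit my.cnf fragment (the values are the defaults described in this post, stated explicitly; adjust to your own durability needs):

```ini
[mysqld]
# MySQL 5.7 defaults; MariaDB 10.2 defaults shown in comments
binlog_format = ROW    # MariaDB 10.2: MIXED
sync_binlog   = 1      # MariaDB 10.2: 0 (leave flushing to the OS)
```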

MyISAM

  Variable                 MariaDB Default   MySQL Default
  myisam_recover_options   BACKUP,QUICK      OFF
  key_buffer_size          134217728         8388608


InnoDB replaced MyISAM as the default storage engine some time ago, but MyISAM is still used for many system tables. MySQL has tuned down the MyISAM settings, since it is no longer heavily used.

When mysqld opens a table, it checks whether the table is marked as crashed, or was not closed properly, and runs a check on it based on the myisam_recover_options settings. MySQL disables this by default, preventing recovery. MariaDB has enabled the BACKUP and QUICK recovery options. BACKUP causes a table_name-datetime.bak file to be created whenever a data file is changed during recovery. QUICK causes mysqld to not check the rows in a table if there are no delete blocks, ensuring recovery can occur faster.
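To get the MariaDB-style MyISAM crash recovery on a server where it is off, the setting can be written out explicitly (a sketch; the values are the MariaDB 10.2 defaults quoted above):

```ini
[mysqld]
# BACKUP keeps a table_name-datetime.bak copy of changed data files;
# QUICK skips row checks when there are no delete blocks.
myisam_recover_options = BACKUP,QUICK
```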

MariaDB 10.2 increased the key_buffer_size. This allows for more index blocks to be stored in memory. All threads use this buffer, so a small buffer can cause information to get moved in and out of it more quickly. MariaDB 10.2 uses a buffer 16 times the size of MySQL 5.7: 134217728 in MariaDB 10.2 vs. 8388608 in MySQL 5.7.

InnoDB

  Variable                   MariaDB Default     MySQL Default
  innodb_max_undo_log_size   10485760 (10 MiB)   1073741824 (1024 MiB)


InnoDB variables have remained primarily unchanged between MariaDB 10.2 and MySQL 5.7. MariaDB has reduced the innodb_max_undo_log_size starting in 10.2.6. This was reduced from MySQL’s default of 1073741824(1024 MiB) to 10485760(10 MiB). These sizes reflect the maximum size an undo tablespace can become before it is marked for truncation. The tablespace doesn’t get truncated unless innodb_undo_log_truncate is enabled, and it is disabled in MySQL 5.7 and MariaDB 10.2 by default.

Logging

  Variable                    MariaDB Default   MySQL Default
  log_error                   (empty)           /var/log/mysqld.log
  log_slow_admin_statements   ON                OFF
  log_slow_slave_statements   ON                OFF
  lc_messages_dir             (empty)           /usr/share/mysql


Logs are extremely important for troubleshooting any issues, so the different logging choices in MySQL 5.7 and MariaDB 10.2 are very interesting.

The log_error variable allows you to control where errors get logged. MariaDB 10.2 leaves this variable blank, writing all errors to stderr. MySQL 5.7 uses an explicitly created file at: /var/log/mysqld.log.

MariaDB 10.2 also enables additional slow statement logging. log_slow_admin_statements creates a record for administrative statements that are not typically written to the binlog, and log_slow_slave_statements logs replicated statements sent from the master if they are slow to complete. MySQL 5.7 does not enable either of these by default.
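To match MariaDB's behavior on MySQL 5.7, the slow log options can be enabled in my.cnf. A sketch (the log file path is an example):

```ini
[mysqld]
slow_query_log = ON
slow_query_log_file = /var/log/mysql-slow.log   # hypothetical path
# Also log slow administrative statements (e.g. ALTER TABLE, ANALYZE TABLE)
log_slow_admin_statements = ON
# Also log slow statements executed by the replication SQL thread
log_slow_slave_statements = ON
```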

lc_messages_dir is the directory that contains the error message files for various languages. The defaults might be a little misleading in MariaDB 10.2: lc_messages_dir is left empty by default, although it still uses the same path as MySQL 5.7. The files are located in /usr/share/mysql by default for both databases.

Performance Schema

Variable                               MariaDB Default   MySQL Default
performance_schema                     OFF               ON
performance_schema_setup_actors_size   100               -1 (auto-adjusted)
performance_schema_setup_objects_size  100               -1 (auto-adjusted)


The performance schema is an instrumentation tool that is designed to help troubleshoot various performance concerns. MySQL 5.7 enables the performance schema, and many of its instruments, by default. MySQL even goes so far as to detect the appropriate value for many Performance Schema variables instead of setting a static default. The Performance Schema does come with some overhead, and there are many blogs regarding how much this can impact performance. I think Sveta Smirnova said it best in her blog  Performance Schema Benchmarks OLTP RW: “…test on your system! No generic benchmark can exactly repeat a workload on your site.”

MariaDB has disabled the Performance Schema by default, as well as adjusted a couple of the dynamic variables. Note that enabling or disabling the Performance Schema requires a restart of the server, since the variable is not dynamic. performance_schema_setup_actors_size and performance_schema_setup_objects_size have both been set to a static 100, instead of the autosized -1 used in MySQL 5.7. These limit the number of rows that can be stored in the respective setup tables, putting a hard cap on how large those tables can grow and helping to reduce their memory footprint.
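As a sketch, pinning this in my.cnf looks like the following (remember these take effect only after a restart):

```ini
[mysqld]
# Not dynamic: changing this requires a server restart
performance_schema = ON
# Fixed row limits (MariaDB-style) instead of MySQL's autosized -1
performance_schema_setup_actors_size = 100
performance_schema_setup_objects_size = 100
```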

SSL/TLS

Variable   MariaDB Default   MySQL Default
ssl_ca     (empty)           ca.pem
ssl_cert   (empty)           server-cert.pem
ssl_key    (empty)           server-key.pem


Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols that allow for secure communication. SSL is actually the predecessor of TLS, although both are often referred to as SSL. MySQL 5.7 and MariaDB 10.2 support both yaSSL and OpenSSL. The default configurations for SSL/TLS differ only slightly between MySQL 5.7 and MariaDB 10.2. MySQL 5.7 sets a specific file name for ssl_ca, ssl_cert, and ssl_key. These files are created in the base directory, identified by the variable basedir. Each of these variables is left blank in MariaDB 10.2, so you need to set them before using secure connections. These variables are not dynamic, so be sure to set the values before starting your database.
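Since MariaDB 10.2 leaves the SSL variables blank, you must point them at your own files before startup. A sketch with hypothetical paths:

```ini
[mysqld]
# Paths are examples; generate your own CA and server certificates first.
# These variables are not dynamic, so set them before starting the server.
ssl_ca   = /etc/mysql/ssl/ca.pem
ssl_cert = /etc/mysql/ssl/server-cert.pem
ssl_key  = /etc/mysql/ssl/server-key.pem
```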

Query Optimizer

For each switch, the defaults are listed as MariaDB 10.2 / MySQL 5.7; N/A means the switch does not exist in that server.

- batched_key_access (Batched Key Access): N/A / OFF. Controls use of the BKA join algorithm.
- block_nested_loop (Block Nested-Loop): N/A / ON. Controls use of the BNL join algorithm.
- condition_fanout_filter (Condition Filtering): N/A / ON. Controls use of condition filtering.
- engine_condition_pushdown (Engine Condition Pushdown): Deprecated / ON. Controls engine condition pushdown.
- condition_pushdown_for_derived (Engine Condition Pushdown): ON / N/A. Controls the ability to push conditions down into non-mergeable views and derived tables.
- exists_to_in (Exists Subquery): ON / N/A. Allows conversion of EXISTS statements to IN statements.
- in_to_exists (Exists Subquery): ON / N/A. Allows conversion of IN statements to EXISTS statements.
- use_index_extensions (Index Extensions): N/A / ON. Controls use of index extensions.
- index_merge_sort_intersection (Index Merge): OFF / N/A. Allows index_merge for non-equality conditions.
- join_cache_bka (Join Algorithms): ON / N/A. Performs index lookups for a batch of records from the join buffer.
- join_cache_hashed (Join Algorithms): ON / N/A. Controls use of the BNLH and BKAH algorithms.
- join_cache_incremental (Join Algorithms): ON / N/A. Controls use of incremental algorithms.
- outer_join_with_cache (Join Algorithms): ON / N/A. Controls use of block-based algorithms for outer joins.
- semijoin_with_cache (Join Algorithms): ON / N/A. Controls block-based algorithms for use with semi-join operations.
- optimize_join_buffer_size (Join Buffer): OFF / N/A. Creates the join buffer with an estimated size based on the estimated number of rows in the result.
- derived_keys (Materialized Temporary Tables): ON / N/A. Allows index creation on derived temporary tables.
- partial_match_rowid_merge (Materialized Temporary Tables): ON / N/A. Controls use of the rowid-merge strategy.
- partial_match_table_scan (Materialized Temporary Tables): ON / N/A. Controls use of the partial_match_table-scan strategy.
- mrr (Multi-Range Read): OFF / ON. Controls use of the multi-range read strategy.
- mrr_cost_based (Multi-Range Read): OFF / ON. Controls use of cost-based MRR, if mrr=on.
- mrr_sort_keys (Multi-Range Read): OFF / N/A. Enables key-ordered scans if mrr=on.
- orderby_uses_equalities (Order By): ON / N/A. Considers multiple equalities when ordering results.
- extended_keys (Query Plan): ON / N/A. Allows the optimizer to use hidden components of InnoDB keys.
- table_elimination (Query Plan): ON / N/A. Controls the removal of irrelevant tables from the execution plan.
- subquery_cache (Subquery): ON / N/A. Stores subquery results and correlation parameters for reuse.
- subquery_materialization_cost_based (Subquery Materialization): N/A / ON. Controls use of cost-based materialization.
- duplicateweedout (Semi-join): N/A / ON. Controls the semi-join duplicate weedout strategy.


The query optimizer differs in several ways that affect not only query performance but also how you write SQL statements. The optimizer is substantially different between MariaDB and MySQL, so even with identical configurations you are likely to see varying performance.

The sql_mode puts restrictions on how you can write queries. MySQL 5.7 enables several additional restrictions compared to MariaDB 10.2. only_full_group_by requires that every field in a SELECT ... GROUP BY statement is either aggregated or listed in the GROUP BY clause. The optimizer makes no assumptions about the grouping, so you must specify it explicitly.
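For example, with ONLY_FULL_GROUP_BY enabled the first statement below is rejected while the second is accepted (the orders table and its columns are hypothetical):

```sql
-- Rejected under ONLY_FULL_GROUP_BY: customer_name is neither aggregated
-- nor listed in the GROUP BY clause
SELECT customer_name, COUNT(*) FROM orders GROUP BY customer_id;

-- Accepted: every selected column is either aggregated or grouped
SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id;
```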

no_zero_date and no_zero_in_date both affect how the server interprets zeros in dates. When no_zero_date is enabled, values of '0000-00-00' are permitted but produce a warning; with strict mode enabled as well, the value is not permitted and produces an error. no_zero_in_date is similar, except it applies to the individual parts of the date (month, day or year). With it disabled, dates with zero parts, such as '2017-00-16', are allowed as-is. When it is enabled, the date is changed to '0000-00-00' with a warning, and strict mode prevents the date from being inserted unless IGNORE is given as well: INSERT IGNORE and UPDATE IGNORE insert such dates as '0000-00-00'. MySQL 5.7.4 changed this: no_zero_in_date was consolidated into strict mode, and the explicit option is deprecated.

The query_prealloc_size determines the size of the persistent buffer used for statement parsing and execution. If you regularly use complex queries, it can be useful to increase the size of this buffer, as it does not need to allocate additional memory during the query parsing. MySQL 5.7 has set this buffer to 8192, with a block size of 1024. MariaDB increased this value in 10.1.2 up to 24576.

Query_alloc_block_size dictates the size in bytes of any extra blocks allocated during query parsing. If memory fragmentation is a common problem, you might want to look at increasing this value. MySQL 5.7 uses 8192, while MariaDB 10.2 uses 16384 (twice that). Be careful when adjusting the block sizes: going too high consumes more than the needed amount of memory, and too low causes significant fragmentation.

The optimizer_switch variable contains many different switches that impact how the query optimizer plans and performs different queries. MariaDB 10.2 and MySQL 5.7 have many differences in their enabled options, and even in the available options. You can see a brief breakdown of each option above; any option marked N/A is not supported in that server.
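You can inspect and adjust individual switches at runtime without touching the rest; the mrr switches are used as examples here:

```sql
-- Show the full list of switches and their current state
SELECT @@optimizer_switch;

-- Toggle individual switches for the current session, leaving all others unchanged
SET SESSION optimizer_switch = 'mrr=on,mrr_cost_based=off';
```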

Miscellaneous

Variable                    MariaDB Default     MySQL Default
default_tmp_storage_engine  NULL                InnoDB
group_concat_max_len        1048576 (1M)        1024 (1K)
lock_wait_timeout           86400 (1 day)       31536000 (1 year)
max_allowed_packet          16777216 (16MB)     4194304 (4MB)
max_write_lock_count        4294967295          18446744073709551615
old_passwords               OFF                 0
open_files_limit            0                   dependent on OS
pid_file                    /var/lib/mysql/     /var/run/mysqld/
secure_file_priv            (varies by installation)
sort_buffer_size            2097152             262144
table_definition_cache      400                 autosized
table_open_cache_instances  8                   16
thread_cache_size           autosized           autosized
thread_stack                292KB               192KB/256KB


There are many variables that do not fit well into a group. I will go over those here.

When creating temporary tables, if you do not specify a storage engine then a default is used. In MySQL 5.7 this is set to InnoDB, the same as the default_storage_engine. MariaDB 10.2 also uses InnoDB, but it is not explicitly set: MariaDB sets default_tmp_storage_engine to NULL, which causes it to fall back to the default_storage_engine. This is important to remember if you change your default storage engine, as it would also change the default for temporary tables. An important note: in MariaDB this is only relevant for tables created with CREATE TEMPORARY TABLE. Internal in-memory temporary tables use the MEMORY storage engine, and internal on-disk temporary tables use the Aria engine by default.

The Group_concat function can cause some very large results if left unchecked. You can restrict the maximum size of results from this function with group_concat_max_len. MySQL 5.7 limits this to 1024(1K). MariaDB increased the value in 10.2.4 up to 1048576(1M).

Lock_wait_timeout controls how long a thread waits as it attempts to acquire a metadata lock. Several statements require a metadata lock, including DDL and DML operations, Lock Tables, Flush Tables with Read Lock and Handler statements. MySQL 5.7 defaults to the maximum possible value (one year), while MariaDB 10.2 has toned this down to one day.

Max_allowed_packet sets a limit to the maximum size of a packet, or a generated/intermediate string. This value is intentionally kept small (4MB) on MySQL 5.7 in order to detect the larger, intentionally incorrect packets. MariaDB has increased this value to 16MB. If using any large BLOB fields, you need to adjust this value to the size of the largest BLOB, in multiples of 1024, or you risk running into errors transferring the results.
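If you store large BLOBs, raising the limit is a one-line my.cnf change on either server. A sketch (the value must be a multiple of 1024; 64 MiB here is an arbitrary example):

```ini
[mysqld]
# Allow packets up to 64 MiB; size this to at least your largest BLOB
max_allowed_packet = 67108864
```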

Max_write_lock_count controls the number of write locks that can be granted before some read lock requests are processed. Under extremely heavy write load, reads can pile up while waiting for writes to complete; adjusting max_write_lock_count lets you tune how many writes can occur before some reads are allowed against the table. MySQL 5.7 keeps this value at the maximum (18446744073709551615), while MariaDB 10.2 lowered it to 4294967295. Note that 4294967295 is still the maximum value on MariaDB 10.2.

Old_passwords controls the hashing method used by the password function, create user and grant statements. This variable has undergone several changes in MySQL 5.7. As of 5.7.4 the valid options were MySQL 4.1 native hashing, Pre-4.1 (“old”) hashing, and SHA-256 hashing. Version 5.7.5 removed the “old” Pre-4.1 method, and in 5.7.6 the variable has been deprecated with the intent of removing it entirely. MariaDB 10.2 uses a simple boolean value for this variable instead of the enumerated one in MySQL 5.7, though the intent is the same. Both default the old_passwords to OFF, or 0, and allow you to enable the older method if necessary.

Open_files_limit restricts the number of file descriptors mysqld can reserve. If set to 0 (the default in MariaDB 10.2), mysqld reserves the larger of max_connections * 5 and max_connections + table_open_cache * 2. Note that mysqld cannot use more than the hard limit imposed by the operating system. MySQL 5.7 is also restricted by the operating system's hard limit, but its default is set at runtime to the real value permitted by the system (not a calculated value).

The pid_file variable allows you to control where the process ID file is stored. This isn't a file you typically need, but it is good to know where it is located in case unusual errors occur. On MariaDB you can find it inside /var/lib/mysql/, while on MySQL 5.7 it is inside /var/run/mysqld/. You will also notice a difference in the actual file name: MariaDB 10.2 uses the hostname as the name of the pid file, while MySQL 5.7 simply uses the process name.

Secure_file_priv is a security feature that allows you to restrict the location of files used in data import and export operations. When this variable is empty, which was the default in MySQL before 5.7.6, there is no restriction. If the value is set to NULL, import and export operations are not permitted. The only other valid value is the directory path where files can be imported from or exported to. MariaDB 10.2 defaults to empty. As of MySQL 5.7.6, the default will depend on the install_layout CMAKE option.

INSTALL_LAYOUT         DEFAULT VALUE
STANDALONE, WIN        NULL (>= MySQL 5.7.16), empty (< MySQL 5.7.16)
DEB, RPM, SLES, SVR4   /var/lib/mysql-files
Other                  mysql-files under the CMAKE_INSTALL_PREFIX value
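To constrain import/export explicitly rather than relying on the build default, set the directory yourself. A sketch with a hypothetical path:

```ini
[mysqld]
# Only allow LOAD DATA INFILE / SELECT ... INTO OUTFILE inside this directory.
# Set to NULL to forbid import/export entirely; leave empty for no restriction.
secure_file_priv = /var/lib/mysql-files
```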


Mysqld uses a sort buffer regardless of storage engine. Every session that must perform a sort allocates a buffer equal to the value of sort_buffer_size. This buffer should at minimum be large enough to contain 15 tuples. In MySQL 5.7, this defaults to 262144, while MariaDB 10.2 uses the larger value 2097152.

The table_definition_cache restricts the number of table definitions that can be cached. If you have a large number of tables, mysqld may have to read the .frm file to get this information. MySQL 5.7 auto detects the appropriate size to use, while MariaDB 10.2 defaults this value to 400. On my small test VM, MySQL 5.7 chose a value of 1400.

table_open_cache_instances varies in implementation between MySQL and MariaDB. MySQL 5.7 creates multiple instances of the table_open_cache, each holding a portion of the tables. This reduces contention, as a session needs to lock only one instance of the cache for DML statements. In MySQL 5.7.7 the default was a single instance, but MySQL 5.7.8 increased this to 16. MariaDB takes a more dynamic approach: initially there is only a single cache instance, and table_open_cache_instances is the maximum number of instances that can be created. If contention is detected on the single cache, another instance is created and an error is logged. MariaDB 10.2 expects that its default maximum of eight instances should support up to 100 CPU cores.

The thread_cache_size controls when a new thread is created. When a client disconnects, the thread is stored in the cache, as long as the maximum number of cached threads does not already exist. Although this is not typically noticeable, if your server sees hundreds of connections per second you should increase this value so that new connections can use the cache. Thread_cache_size is an automatically detected variable in both MySQL 5.7 and MariaDB 10.2, but their methods to calculate the default vary significantly. MySQL uses a formula, capped at 100: 8 + (max_connections / 100). MariaDB 10.2 uses the smaller of 256 and the max_connections value.

The thread_stack is the stack size for each thread. If the stack size is too small, it limits the complexity of SQL statements, the recursion depth of stored procedures and other memory-consuming actions. MySQL 5.7 defaults the stack size to 192KB on 32-bit platforms and 256KB on 64-bit systems. MariaDB 10.2 adjusted this value several times. MariaDB 10.2.0 used 290KB, 10.2.1 used 291KB and 10.2.5 used 292KB.


Hopefully this helps you sort through the configuration options between MySQL and MariaDB. Use the comments for any questions.

Fosdem 2018 – MySQL & Friends Devroom: CfP is open !

Good news! The MySQL & Friends Devroom has been accepted for Fosdem's 2018 edition!

This event is a real success story for the MySQL ecosystem: the content, the speakers and the attendees grow every year.

Fosdem 2018 will take place on 3rd & 4th February in Brussels, and our MySQL & Friends devroom will run on Sunday the 4th. Fosdem & MySQL is a love story that started 18 years ago!

The committee selecting the content for our devroom has not yet been formed; if you want to be part of this experience, just send me an email (candidate at mysqlandfriends dot eu) before Oct 20th.

If you want to join the committee, you must meet the following conditions:

– plan to be present at Fosdem
– have a link with the MySQL ecosystem
– have some time to review and rate talks
– be an ambassador for the event by promoting it

The Call for Papers is open; you can already submit your proposal using Fosdem's submission tool.

Marketing and sales talks are not welcome; focus on engineering, operations and, of course, developers.

Don’t forget to specify the track (MySQL and Friends devroom) and set the duration to 20 mins (short but intense ;-) )!

Thank you, and see you soon in Brussels !


Stepicon 2017 and thoughts on MySQL MOOCs

I periodically give MySQL Troubleshooting tutorials. Every time I run them I face a dilemma: how to make the tutorials more interactive. I have tried:
  1. Just talking. But troubleshooting is about action, not about sitting and listening.
  2. Putting tasks on my own website, which required a lot of coding unrelated to the talk and the tasks themselves.
  3. Putting tasks on a VirtualBox image. Many attendees do not want to use a flash drive or download a big image from the Internet.
  4. Putting tasks on an Amazon EC2 instance. This works well in practice, but is expensive.
  5. Using machines provided by the tutorial organizers (conference or training center). This is probably the best option, but few organizers offer it.
I did not like any of these options as a final solution. That's why for my last two tutorials I tried the Stepik platform. This is how the "Troubleshooting MySQL Performance" course was born. The course is in Russian.

This is not really a standard online course, which usually includes video lectures and tasks. It is just a set of tasks that follow my tutorial. I test for what I teach, so it is hard to use this course without attending the tutorial. However, it worked, and I was pretty satisfied watching attendees complete tasks online. It is a good way to measure the effectiveness of the tutorial and adjust it on the fly.

As usual, I found a few issues with the task modules and reported them. Stepik answered very quickly and offered a professional video studio for making a complete online course.

The offer implies a lot of work for me, but it is also very attractive, because:
  1. This is a chance to make a really great MySQL MOOC, open to everyone.
  2. Professional video would help to perfect the course content.
  3. I am free to choose any topic, so the course can be for beginners as well as for advanced users.
  4. Many MySQL users still live in the year 2003. I was considering quitting speaking when, in 2016, a friend of mine who works with data in a really big company told me he had heard about partitioning support in MySQL for the first time. He had attended many conferences with MySQL talks. This shows we are promoting nice new features to an audience that is already very interested in MySQL, while others quit using MySQL for competing technologies without knowing what MySQL can do.
  5. Stepik has a very nice learning concept: short videos, each less than 10 minutes, and many tests. I like it when I learn.
  6. The test modules, while not perfect for the MySQL Troubleshooting course, are very handy for learning new things. I am certainly interested in using them for tutorials.
  7. Stepik can help with reviews of students' homework.
However, there are things which prevent me from starting this project:
  1. Time to create the course. I have to work, and I would need really strong motivation to spend a lot of free time on this course.
  2. Quality. No matter how much experience I have, I need help from honest colleagues who will review the full course and tell me what I am doing wrong and what I am missing. Or maybe we could even work on the content together.
  3. Time to maintain the course. It is common with MOOCs that students leave comments and expect help with their homework. Making a course does not stop when you create the final video or homework; if the course becomes popular, it never ends.
So this is how I was invited to Stepicon 2017, an event for people who teach on this platform. It was interesting to find out who else uses Stepik and what their motivation is. Here is what I found interesting:
  1. Universities, teachers, and large and mid-size companies are the main content creators on Stepik.
  2. Companies use the platform to teach their own employees, popularize products and look for new hires.
  3. There are closed and open courses.
  4. Some create courses similarly to the way I did: offline content, with Stepik used only for tasks. But I was the only one who did it as a tutorial add-on. The most common case is tasks that accompany either a closed course, accessible inside a company or for a fee, or an offline university course.
  5. Stepik supports LMS integration and is used inside internal learning platforms, as well as together with other MOOC platforms such as Coursera.
  6. JetBrains demonstrated its own learning tool, PyCharm Edu, which is integrated with Stepik.
I am still in doubt whether it makes sense for me, as an individual, to do anything beyond the tutorial assistance, but I would be happy if you commented and let me know what you think.

This Week in Data with Colin Charles 9: Oracle OpenWorld and Percona Live Europe Post Mortem

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

This week: a quick roundup of releases, a summary of my thoughts about Percona Live Europe 2017 Dublin, links to look at and upcoming appearances. Oracle OpenWorld happened in San Francisco this past week, and there were lots of MySQL talks there as well (and a good community reception). I have a bit on that as well (from afar).

Look for these updates on Planet MySQL.

Releases

Percona Live Europe Dublin

I arrived on Sunday and chose to rest for my tutorial on Monday. Ronald Bradford and I delivered a tutorial on MySQL Security, and in the morning we chose to rehearse. Percona Live Europe had a full tutorial schedule this year, albeit with one cancellation: MySQL and Docker by Giuseppe Maxia, whom we missed this conference. Check out his blog for further posts about MySQL, Docker, and SQL Roles in MySQL 8!

We had the welcome reception at Sinott’s Bar. There was a large selection of food on each table, as well as two drinks for each of us. It was lively, and I think we overtook most of the basement. Later that evening, there were drinks around the hotel bar, as people started to stream in for Tuesday’s packed schedule!

Tuesday was the conference kickoff, with Peter Zaitsev doing the opening keynote on the state of the open source database ecosystem. The bonus of this keynote was the short 5-minute talks that helped you get a peek at the important topics and themes around the conference. I heard good things about this from attendees. While most people attended the talks, I spent most of my day in meetings! Then the Community Dinner (thank you Oracle for sponsoring), where we held this year’s Lightning Talks (and plenty more to drink). A summary of the social events is at Percona Live Europe Social.

Wednesday morning we definitely wanted to start a few minutes later, considering people were streaming in slower thanks to the poor weather (yes, it rained all day). The State of the Dolphin ensured we found out lots of new things coming to MySQL 8.0 (exciting!), then the sponsor keynote by Continuent given by MC Brown, followed by a database reliability engineering panel with the authors of Database Reliability Engineering, Charity Majors and Laine Campbell. Their book signing went quickly too – they have many fans. We also heard from Pepper Media on their happy journey with Percona. Another great day of talks before the evening reception (which had fewer people, since many were flying off that evening). Feel free to also read Matthias Crauwels, Percona Live Europe 2017 Review.

Percona Live Europe 2017 Dublin had over 350 attendees and over 140 speakers – all in a new location! If you have any comments, please feel free to shoot me an email.

Oracle OpenWorld from Afar

At this year’s Oracle OpenWorld there was talk about Oracle’s new self-driving, machine-learning based autonomous database. There was a focus on Amazon SLAs.

It’s unclear if this is also what MySQL will eventually get, but in the MySQL world we do have lossless semi-sync replication. Amazon RDS for MySQL is still DRBD based, and Google Cloud SQL does use semisync – but we need to check further if this is lossless semisync or not.

Some folks claim they can do autonomous self-driving databases, and have many platform integrations to boot. Anyone using this?

Nice to see a Percona contribution to remove InnoDB buffer pool mutex get accepted, and apparently it was done the right way. This is sustainable engineering: fix and contribute back upstream!

I was particularly interested in StorageTapper, released by Uber to do real-time MySQL change data streaming and transformation. The slide deck is worth a read as well. My real takeaway from this was about why MySQL is strong: “thousands of instances, a handful of DBAs.” Doug Henschen also talks about a lot of custom automation capabilities, the bonus being that many are probably already open source. There are some good talks and slide decks to review.

It wouldn’t be complete without Dimitri Kravtchuk doing some performance smackdowns, and I highly recommend you read MySQL Performance: 2.1M QPS on 8.0-rc.

And for a little bit of fun: there was also an award given to Alexander Rubin for fixing MySQL#2: does not make toast. It’s quite common for open source projects to have such bugs, like the famous Ubuntu bug #1. I’ve seen Alexander demo this before, and if you want to read more check out his blog post from over a year ago: Fixing MySQL Bug#2: now MySQL makes toast! (Yes, it says April 1! but really, it was done!) Most recently it was done at Percona Live Santa Clara 2017.

Link List

Upcoming appearances

Percona’s website keeps track of community events, to see where to listen to a Perconian speak. My upcoming appearances are:


I look forward to feedback/tips via e-mail at or on Twitter @bytebot.

Explode Charts To Drill In With VividCortex

VividCortex is designed with two key scalability requirements in mind: it must perform well in large environments with many hosts, and the UI must help users understand and inspect those environments easily. The universal time selector, the host filter, and other features let you start with a top-level summary, grasp what’s happening in your entire environment no matter how many hosts, and then drill into hosts, time ranges, queries, and metrics of interest. We’ve enhanced our charting and graphing capabilities to make this same zoom-in-drill-down inspectability easier, too.

When you load a charts dashboard, you’ll see charts that contain one line per metric. For example, if I view my MongoDB dashboard and use the top-navigation filter to quickly limit the view to “checkpoints,” I’ll see the following single chart.

Notice how the subtitle says “Average of 5 hosts.” Each line on this chart is an average of 5 hosts’ metrics. VividCortex makes it simple to explode this chart into individual ones so you can see what’s happening with each host separately:

When you do this, VividCortex goes from one chart with averages of each metric, to one chart per host with metrics broken out separately. It opens up a new browser tab for this so you don’t lose your place. We designed this so you can explode a chart, inspect, and continue working without needing to navigate back and forth.

This workflow enables scalable charting and graphing, using either custom or prebuilt dashboards for MySQL, MongoDB, PostgreSQL and Redis, with simple drill-down. We hope you like it! Submit any questions or ideas for enhancements with the in-app chat, and we’ll keep making things better for you.

MySQL Enterprise Monitor 3.4.4 has been released

We are pleased to announce that MySQL Enterprise Monitor 3.4.4 is now available for download on the My Oracle Support (MOS) web site. This is a maintenance release that includes a few new features and fixes a number of bugs. You can find more information on the contents of this release in the change log.

You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet.

Important: MySQL Enterprise Monitor (MEM) 4.0 offers many significant improvements over MEM 3.4 and we highly recommend that you consider upgrading. More information on MEM 4.0 is available here:

Please open a bug or a ticket on My Oracle Support to report problems, request features, or give us general feedback about how this release meets your needs.

If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to, or contact Sales at

Thanks and Happy Monitoring!

- The MySQL Enterprise Tools Development Team

Useful URLs

Graph Descriptions for Metrics Monitor in Percona Monitoring and Management 1.3.0

The Metrics Monitor of Percona Monitoring and Management 1.3.0 (PMM) provides graph descriptions to display more information about the monitored data without cluttering the interface.

Percona Monitoring and Management 1.3.0 is a free and open-source platform for managing and monitoring MySQL®, MariaDB® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL, MariaDB® and MongoDB servers to ensure that your data works as efficiently as possible.

Each dashboard graph in PMM contains a lot of information. Sometimes, it is not easy to understand what the plotted line represents. The metric labels and the plotted data are limited and have to account for the space they can use in dashboards. It is simply not possible to provide additional information which might be helpful when interpreting the monitored metrics.

The new version of the PMM dashboards introduces on-demand descriptions with more details about the metrics in the given graph and about the data.

These on-demand descriptions are available when you hover the mouse pointer over the icon at the top left corner of each graph. The descriptions do not use the valuable space of your dashboard. The graph descriptions appear in small boxes. If more information exists about the monitored metrics, the description contains a link to the associated documentation.

In release 1.3.0 of PMM, Metrics Monitor is only beginning to use this convenient tool. In subsequent releases, graph descriptions will become a standard attribute of each Metrics Monitor dashboard.

Getting Started with MariaDB MaxScale Database Firewall Filter

anderskarlsson4, Thu, 10/05/2017 - 09:20

MariaDB Server and MariaDB MaxScale provide a secure, high-performance database platform. Some aspects of security go into MariaDB Server and some into MariaDB MaxScale. This blog post describes one of the security features of MariaDB MaxScale: the Database Firewall filter.

MariaDB MaxScale is a powerful tool most often used for database load balancing, and as such has many benefits, which we'll cover in our upcoming webinar about advanced database proxies. Another aspect of MariaDB MaxScale, though, is the many additional modules that can be used, in particular a range of filters that can be applied; in this blog we are looking at the Database Firewall filter.

The Database Firewall filter allows you to specify which SQL statements are allowed to run and which are not, using what are called whitelists and blacklists respectively. You can also combine whitelists and blacklists. In addition, the Database Firewall filter allows a number of other rules to be applied.

Configuring MariaDB MaxScale and MariaDB Server

Before we start working with configuring the specifics of the Database Firewall filter in MariaDB MaxScale, let us have a look at how to set up a basic configuration of MaxScale and how to configure MariaDB Server to work with MariaDB MaxScale when the latter is used to implement security features.

For MariaDB MaxScale to add a level of security, we have to make sure that we don't have traffic bypassing MariaDB MaxScale and accessing MariaDB Server directly. There are several means of doing this, but in this case I'm going to choose the easy way out: we will run MariaDB MaxScale and MariaDB Server on the same server. Another assumption is that we want to maximize application transparency as much as possible, so applications really should not need any special settings to run with MariaDB MaxScale compared to connecting directly to MariaDB Server.

For this to work, obviously, we are going to work with an environment where MariaDB Server and MariaDB MaxScale are installed on the same server.

Configuring MariaDB Server

What we are going to do here is make sure that only MaxScale, or any other service with an appropriate username and password running on the same host as MariaDB Server, can connect to MariaDB Server. We do this by changing the bind-address of MariaDB Server. So what is that, you ask? The bind address is the network interface on which a listening program accepts connections. Usually you don't care much about this; there is just one interface in most cases anyway, right?

Well, no: by far the most common number of interfaces is two! So where is that second RJ-45 connector on your box, then? The answer is that there is none; the second is a virtual interface called the loopback. A bind address is always associated with a network address, which in the case of your normal network interface is the node address of the server on the network: you connect to something using that address and the traffic is directed there through some kind of magic.

The loopback interface is only available on the box itself, and it is always associated with the address 127.0.0.1. Traffic to this address never goes through any kind of network hardware; it is all done in software! I told you this was magic, right?
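To illustrate how loopback traffic stays entirely on the host, here is a small self-contained Python sketch (using plain sockets, nothing MariaDB-specific, and a hypothetical helper name) that binds a listener to 127.0.0.1 and talks to it from the same machine:

```python
import socket

def loopback_roundtrip(payload):
    """Bind a listener to the loopback interface, connect to it from
    the same host, and echo the payload back. Returns the echoed bytes
    and the peer address the server saw."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # loopback only; port 0 = any free port
    server.listen(1)
    port = server.getsockname()[1]

    client = socket.create_connection(("127.0.0.1", port))
    conn, addr = server.accept()
    client.sendall(payload)
    echoed = conn.recv(len(payload))

    for s in (client, conn, server):
        s.close()
    return echoed, addr[0]

print(loopback_roundtrip(b"ping"))  # (b'ping', '127.0.0.1'): traffic never left the host
```

A process on another machine has no route to 127.0.0.1 of this host, which is exactly why binding MariaDB Server to the loopback address keeps remote clients out.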

For a service running on a server, such as MariaDB Server, MaxScale or Apache, the default is to listen or bind on INADDR_ANY, which means listening on every interface on a given port, including the loopback interface. Note that two services can listen on the same port on the same box, as long as they bind to different interfaces. What we are going to do first is have MariaDB Server accept connections only on the loopback interface. Head off to edit your MariaDB Server configuration file, like /etc/my.cnf.d/server.cnf if you are running CentOS / Red Hat, and set up bind-address in the mysqld section like this:

[mysqld]
bind-address=127.0.0.1

Note that we do not have to make the server listen on a different port; the default 3306 is just fine because, even though we are about to set up MariaDB MaxScale to listen on the same port, MaxScale will bind to another interface, i.e. the normal ethernet interface.

Configuring MariaDB MaxScale to work with MariaDB Server

We now have to force MariaDB MaxScale to listen on the ethernet interface only, so it doesn't collide with MariaDB Server, and also to listen on port 3306. This is for the listener, and we give it an appropriate name since in this case we are using MaxScale as a firewall only. So head off to that old-time favorite text editor of yours, edit the MariaDB MaxScale configuration file, which is probably in /etc/maxscale.cnf, and add a section like this:

[Firewall Listener]
type=listener
service=Firewall Service
protocol=MySQLClient
address=
port=3306

Replacing the address with the address of the server you are testing this on, obviously.

Starting up MariaDB MaxScale and MariaDB Server

We are soon ready to start up, but MariaDB MaxScale needs some more work on the configuration. The default configuration that comes with MariaDB MaxScale has several different services in it and a lot of comments; here I will provide a configuration that is the bare minimum for MariaDB MaxScale to work. We need the listener specified above, of course, but also a few other things in our MariaDB MaxScale configuration file /etc/maxscale.cnf.

Server configuration

This section in MariaDB MaxScale defines how MaxScale talks to the different servers. In this case, we will connect to just one server, so that is easy:

[server1]
type=server
address=127.0.0.1
port=3306
protocol=MySQLBackend

The thing to note here is that, as we have MariaDB MaxScale talking to MariaDB Server on the loopback interface, we set address to 127.0.0.1, not the address of our host on the network.

Monitor configuration

Then we configure a monitor that checks the status of our server; again, this is a bare minimum configuration:

[MySQL Monitor]
type=monitor
module=mysqlmon
servers=server1
user=myuser
passwd=mypwd

The servers parameter points to the server defined in the section above. The user and password arguments are used by MaxScale to connect to MariaDB Server to check its status. This user is created like this (but you can, and should, use a different username and password than the ones used here), from the MariaDB command line on the server we are working with:

$ mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 10.2.7-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE USER 'myuser'@'' IDENTIFIED BY 'mypwd';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT SELECT ON mysql.user TO 'myuser'@'';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT SELECT ON mysql.db TO 'myuser'@'';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT SELECT ON mysql.tables_priv TO 'myuser'@'';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT SHOW DATABASES ON *.* TO 'myuser'@'';
Query OK, 0 rows affected (0.00 sec)

OK, that's it for the monitor and the listener. What remains before we get to the focal point of this blog post, the filter, is the router or service.

Service configuration

As we are focusing on the Database Firewall filter here, we are using just a single basic router, mainly to have something to attach our filter to. We will be using the readconnroute service router. The configuration looks like this:

[Firewall Service]
type=service
router=readconnroute
servers=server1
user=myuser
passwd=mypwd
router_options=running
filters=Firewall

Most of these parameters are rather obvious, I guess. The username and password are used to extract usernames and passwords from the server for authentication. In the Monitor configuration above we showed how to create a database user with the appropriate privileges, and this covers what is needed both for the Monitor and for the Service. Again, you should use a different username and password from my example here. Setting router_options to running means that we can connect to any server, as long as it is running. Finally, the filters setting points to the filter we will be using, so let's move on to that.

Firewall filter configuration

The firewall filter is configured in two places: first in the MaxScale configuration file as usual, and then in a separate file containing the firewall rules. Let's start with the maxscale.cnf settings:

[Firewall]
type=filter
module=dbfwfilter
action=allow
rules=/etc/maxscale.modules.d/rules.txt

There are only two settings of interest here. One is action=allow, which means that the rules we are about to set up define the SQL statements that are allowed, and everything else is disallowed; this is called whitelisting. You can instead define a set of rules for statements that you want to prohibit, in which case you set action=block; this is then called blacklisting. In some cases you might want to do both blacklisting and whitelisting, and to achieve this you create two filters and pipe one into the other in the filters setting of the service.
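To make the whitelist/blacklist distinction concrete, here is a minimal Python sketch of the decision logic. This is a hypothetical illustration, not MaxScale's actual dbfwfilter code, and since Python's re module wants inline flags like PCRE's (?i) at the very start of a pattern, re.IGNORECASE is used instead:

```python
import re

# The two example rules from this post, as case-insensitive patterns.
# This mirrors the rules file; it is not MaxScale's real implementation.
RULES = [
    re.compile(r"^select .*$", re.IGNORECASE),
    re.compile(r"^show .*$", re.IGNORECASE),
]

def query_permitted(sql, action="allow"):
    """action='allow' -> whitelist: only matching statements may run.
    action='block' -> blacklist: matching statements are rejected."""
    matched = any(rule.match(sql) for rule in RULES)
    return matched if action == "allow" else not matched

print(query_permitted("SELECT user()"))             # True: whitelisted
print(query_permitted("insert into t1 values(1)"))  # False: not on the whitelist
print(query_permitted("select user()", "block"))    # False: blacklisted
```

The same rule set thus behaves very differently depending on the action setting, which is why combining the two modes requires two separate filter instances.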

The other interesting parameter for this filter is the rules setting, which points to the file where the firewall rules are defined, in this case /etc/maxscale.modules.d/rules.txt, a file that we will create now.

Firewall rules

There are several ways to define firewall rules; for more information, see the MariaDB MaxScale documentation. In this example, I will set up a few very basic firewall rules and put them in the file /etc/maxscale.modules.d/rules.txt. Let's look at the file first, and then I'll explain it:

rule allow_select deny regex '^(?i)select .*$'
rule allow_show deny regex '^(?i)show .*$'
users %@% match any rules allow_select allow_show

The first two lines define SQL statements that we are allowed to run. The fact that they say "deny" is not relevant here; that is just part of the syntax, and it is the action=allow setting for the filter instance that is in effect. As you can imagine, the SQL statement is matched using a regular expression, in this case PCRE (Perl Compatible Regular Expressions); see the PCRE documentation for more details. Let's look closer at the rules we have defined here. The first one says this:

  • ^ - The pattern matching starts at the beginning of the string.
  • (?i) - Perform case-insensitive matching.
  • select - The exact word "select", followed by a space.
  • .* - Followed by 0 or more occurrences of any character.
  • $ - Followed by the end of the line.

Given this, the second pattern should be obvious. In short, we allow any command that starts with the keyword SELECT or SHOW.

The "users" line is used to map rules to users; in this case we are matching all users, and you can use any kind of wildcard here. Then we say that a statement is allowed if it matches any of the given patterns.
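As a sketch of how such user patterns behave, the snippet below translates a %-wildcard account pattern into a regular expression. This is a hypothetical illustration mirroring MariaDB's SQL-style account wildcards, not MaxScale's actual matcher, and user_matches is a made-up helper name:

```python
import re

def user_matches(pattern, account):
    """Match a user@host account against a rules-file pattern,
    treating '%' as the SQL-style 'match anything' wildcard."""
    # Escape regex metacharacters (like '.') first, then expand '%'.
    regex = "^" + re.escape(pattern).replace("%", ".*") + "$"
    return re.match(regex, account) is not None

print(user_matches("%@%", "anders@localhost"))        # True: matches any account
print(user_matches("app@10.0.0.%", "app@10.0.0.17"))  # True: host wildcard
print(user_matches("%@localhost", "anders@10.0.0.1")) # False: host does not match
```

With %@%, as in the rules file above, the allow_select and allow_show rules apply to every account that connects through the filter.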


Having set up the /etc/maxscale.cnf and the rules in /etc/maxscale.modules.d/rules.txt, we are ready to test it. First we restart MaxScale (again this is for CentOS 7):

$ sudo systemctl restart maxscale

Following that, let's connect to MariaDB through MaxScale and see what happens. Note that you have to connect as a non-root user, as root access is blocked by MariaDB MaxScale by default. Also, remember not to connect through the MySQL socket. So:

$ mysql -h test
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 8266
Server version: 10.0.0 2.1.6-maxscale MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [test]> select user();
+------------------+
| user()           |
+------------------+
| anders@localhost |
+------------------+
1 row in set (0.00 sec)

MySQL [test]> select c1 from t1;
Empty set (0.00 sec)

MySQL [test]> insert into t1 values(1);
ERROR 1141 (HY000): Access denied for user 'anders'@'': Permission denied, query matched regular expression.

Conclusion

In this blog, I have shown how to set up a very basic configuration, just to get started. I will follow up later with a blog that includes some details and more advanced configurations. As we have seen, this isn't really complicated per se, what takes some time is to develop a set of rules that match the SQL that is executed by your application.

If you want to dive deeper into advanced database proxies and MariaDB MaxScale, please join our upcoming webinar on Why Innovative Enterprises Deploy Advanced Database Queries on October 18.

Happy SQL'ing

