Planet MySQL

Percona Server for MySQL 8.0.15-6 Is Now Available

Percona announces the release of Percona Server for MySQL 8.0.15-6 on May 7, 2019 (downloads are available here and from the Percona Software Repositories).

This release is based on MySQL 8.0.14 and 8.0.15. It includes all bug fixes in these releases. Percona Server for MySQL 8.0.15-6 is now the current GA release in the 8.0 series. All of Percona’s software is open-source and free.

Percona Server for MySQL 8.0 includes all the features available in MySQL 8.0 Community Edition in addition to enterprise-grade features developed by Percona. For a list of highlighted features from both MySQL 8.0 and Percona Server for MySQL 8.0, please see the GA release announcement.

New Features:

  • The server part of MyRocks cross-engine consistent physical backups has been implemented by introducing the rocksdb_disable_file_deletions and rocksdb_create_temporary_checkpoint session variables. These variables are intended to be used by backup tools. Prolonged use or other misuse can have serious side effects on the server instance.
  • RocksDB WAL file information can now be seen in the performance_schema.log_status table.

Bugs Fixed:


If you are upgrading from 5.7 to 8.0, please ensure that you read the upgrade guide and the document Changed in Percona Server for MySQL 8.0.

Find the release notes for Percona Server for MySQL 8.0.15-6 in our online documentation. Report bugs in the Jira bug tracker.

Shinguz: FromDual Ops Center for MariaDB and MySQL 0.9.1 has been released

FromDual has the pleasure to announce the release of the new version 0.9.1 of its popular FromDual Ops Center for MariaDB and MySQL focmm.

The FromDual Ops Center for MariaDB and MySQL (focmm) helps DBAs and system administrators to manage MariaDB and MySQL database farms. Ops Center makes DBAs' and admins' lives easier!

The main task of Ops Center is to support you in your daily MySQL and MariaDB operation tasks. You can find more information about FromDual Ops Center here.


The new FromDual Ops Center for MariaDB and MySQL (focmm) can be downloaded from here. How to install and use focmm is documented in the Ops Center User Guide.

In the inconceivable case that you find a bug in the FromDual Ops Center for MariaDB and MySQL please report it to the FromDual bug tracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to

Installation of Ops Center 0.9.1

A complete guide on how to install FromDual Ops Center can be found in the Ops Center User Guide.

Upgrade from 0.3 or 0.9.0 to 0.9.1

The upgrade from 0.3 or 0.9.0 to 0.9.1 should happen automatically. Please back up your Ops Center instance before you upgrade! Please also check Upgrading.

Changes in Ops Center 0.9.1 Upgrade
  • Severe upgrade bug fixed which prevented the installation of v0.9.0.
Build and Packaging
  • RPM package for RHEL/CentOS 7 is available now.
  • DEB package for Ubuntu 18.04 LTS is available now.
  • SElinux Policy Package file added.
  • COMMIT tag was not replaced correctly during build. This is fixed now.

MySQL InnoDB Cluster : Recovery Process Monitoring with the MySQL Shell Reporting Framework

As explained in this previous post, it’s now (since 8.0.16) possible to use the MySQL Shell Reporting Framework to monitor MySQL InnoDB Cluster.

Additionally, when a member of the MySQL InnoDB Cluster’s Group leaves the group for any reason, or when a new node is added from a backup, this member needs to sync up with the other nodes of the cluster. This process is called the Distributed Recovery.

During the Distributed Recovery, the joiner receives from a donor all the missing transactions using asynchronous replication on a dedicated channel.

It’s of course also possible to monitor the progress of this recovery process by calculating how many transactions still have to be applied locally.

You can download the report file and uncompress it in ~/.mysqlsh/init.d/:


This report must be run only when connected to the joiner:
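Once the report file is registered, invoking it from MySQL Shell would look something like the following sketch. Note that `recovery` is a placeholder for whatever name the downloaded report file actually registers; `\show` and `\watch` are the Shell's built-in reporting commands (available since 8.0.16):

```
JS> \show recovery
JS> \watch recovery --interval=1
```

`\watch` re-runs the report at the given interval, which is convenient for following the transaction backlog as it shrinks.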

We can see that as soon as we reach 0, the node finishes the Recovery Process and joins the Group.

Exposing MyRocks Internals Via System Variables: Part 3, Compaction

In this blog post, we continue our series of exploring MyRocks mechanics by looking at the configurable server variables and column family options. In our last post, I explained at a high level how data moves from immutable memtables to disk. In this post, we’re going to talk about what happens to that data as it moves through the compaction process.

What is Compaction?

One of the philosophies of MyRocks is “write the data quickly and sort out data organization later”, which is pretty far removed from engines like InnoDB that take the approach of “continuously organize data on disk so it’s optimal as soon as possible”. MyRocks implements its philosophy in a way that is heavily reliant on a process called ‘compaction’.

Let’s pick up where we left off with that first persistent flush from immutable memtable to data file and see how compaction comes into play. You may have noticed in a few of the variable topics we mentioned how data initially flushed from memory ends up in compaction layer 0. This is the topmost layer in a multi-layered approach to storing your data.

MyRocks’ aim is to have data files that are sorted where you don’t have record overlap across data files. The first file will have a range of records (example: 1 – 100), then the next file would have the next range (example: 101 – 250), and so on. This really isn’t possible when data gets flushed into L0 because in order for that to occur, the flushing threads would have to be aware of what’s going on in all the other immutable memtables that are in scope, which could be in the process of being flushed by other concurrent flushing threads. So in order to get around this, we don’t bother with checking for data overlap in L0, but that’s not going to work for us for very long.

Once level 0 hits a certain size threshold, it will start merging data down into the next compaction level. Level 0 (L0) feeds into Level 1 (L1), and this is where we start removing overlapping data and making sure that sorting is maintained across the compaction layer. The compaction process from L0 to L1 works a little like this:

  1. MyRocks recognizes that the threshold for the amount of data allowed in L0 has been reached and calls for a thread to start compaction from L0 to L1.
  2. The compaction thread wakes up and checks the data files in L0 and picks one to merge into L1. Let’s say, for example, that the file that it picks has keys ranging from 0 – 150.
  3. The thread then checks files that are in L1 that contain records 0 – 150. Let’s say in this example that there are two files that contain these records.
  4. The thread reads the L0 file to be compacted and the two L1 files that it needs to merge the data with and performs a merge sort, making sure that duplicate records are removed.
  5. The thread writes new file(s) in L1 comprising the newly sorted and deduplicated data.
  6. Old files that were read in from L0 and L1 are removed.
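The merge-sort-and-deduplicate idea behind steps 4 and 5 can be sketched in a few lines of Python. This is only an illustration of the concept, not the actual RocksDB implementation; the key/sequence-number/value tuples are hypothetical:

```python
import heapq

def compact(l0_file, l1_files):
    """Merge one L0 file with its overlapping L1 files.

    Each input is a list of (key, seq, value) tuples sorted by key;
    seq is a monotonically increasing sequence number, so a higher
    seq means a newer version of the same key.
    """
    # Sort by key, newest version first, across all input files.
    merged = heapq.merge(l0_file, *l1_files, key=lambda r: (r[0], -r[1]))
    out, last_key = [], object()
    for key, seq, value in merged:
        if key != last_key:  # keep only the newest version of each key
            out.append((key, seq, value))
            last_key = key
    return out

# L0 holds a newer version of key 10; the L1 files hold older data.
l0 = [(10, 7, "new"), (150, 8, "x")]
l1a = [(0, 1, "a"), (10, 2, "old")]
l1b = [(120, 3, "b")]
print(compact(l0, [l1a, l1b]))
# -> [(0, 1, 'a'), (10, 7, 'new'), (120, 3, 'b'), (150, 8, 'x')]
```

The output is a fully sorted, duplicate-free run that can be written back to L1 as new files.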

As this process continues, more and more files will be created in L1, until eventually L1 hits its size threshold and a new thread is called to compact from L1 to L2. The process of compacting from L1 to L2 is the same as L0 to L1, but with two key differences.

Firstly, when data is compacted from L0 to L1, as mentioned before, there can be overlapping data. When you compact from L1 to L2, this isn’t a problem as overlapping and duplicate data will not exist by the time it has reached L1.

Secondly, when data is compacted from L0 to L1, due to the nature of the overlapping data, compaction has to be single threaded. As I mentioned before, threads that flush and compact have to be aware of the contents of other threads, or else they cannot ensure a lack of data overlap or duplication. Once data resides in L1, there are no longer any concerns about data overlap, so when you compact from L1 to L2 you can distribute the load over multiple threads.

Eventually, as you compact files from L1 to L2, you’ll hit the threshold for the data size of L2 and it will start to compact to L3 and so on and so on. The process of compacting from L2 to L3 and all subsequent compactions work exactly like the compaction from L1 to L2. This will go all the way down to the bottom most compaction layer, which by default is L6. Given that each compaction layer is larger than the preceding layer (ten times by default), your data set will eventually look like this.
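With the default 1:10 growth between layers, the maximum size of each layer is easy to compute. A quick sketch, assuming a 256Mb L1 and the default multiplier and level count discussed later in this post:

```python
def layer_thresholds(l1_bytes=256 * 2**20, multiplier=10, num_levels=7):
    """Maximum size threshold for layers L1 through L(num_levels - 1)."""
    return {level: l1_bytes * multiplier ** (level - 1)
            for level in range(1, num_levels)}

sizes = layer_thresholds()
for level in sorted(sizes):
    print(f"L{level}: {sizes[level] / 2**30:.2f} GiB")
# L1 starts at 0.25 GiB; each layer below it is ten times larger,
# so L6, the bottommost layer by default, tops out around 24.4 TiB.
```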

I should note that one thing that is misleading about the diagram above is that it doesn’t reflect the default 1:10 scale between compaction layers, though it does show the default nature of L1 being the same size as L0.

If you’re interested in learning about how MyRocks’ leveled compaction implementation selects data files for compaction you can read more about it here on the RocksDB Wiki.

When you compare this to other storage engines that update data files as data is updated, this all seems quite strange, right? So why compact?

The whole point here is to get the data written as quickly as possible, and in this case that’s achieved by doing direct writes to L0 and then not having to update data files directly, but instead recording data changes in a log-structured format and merging them together.

Not having to update files in place is also a huge advantage when it comes to using bloom filters. We’ll cover bloom filters in a future post in this series and I don’t want to get too far ahead of myself here, just keep that in your pocket for now and we’ll come back to it.

So the next question you might ask yourself is, “Why is each compaction layer larger than the one that preceded it?”. This really comes down to file access. If each compaction layer were the same size and not large enough, you would have a lot of contention for files being read by compaction processes. Instead, the compaction layers get larger, thus reducing the odds of file contention during compaction.

Finally, you might ask, “Well… what if my data size is larger than the threshold of my bottom most compaction layer? Do I stop writing data?”. The answer is no. The thresholds in this case are simply triggers that start the compaction. MyRocks will put a lot of emphasis on keeping compaction layers under their thresholds, and you can run into problems if you can’t flush from L0 to L1 fast enough. But when it comes to the bottom-most compaction layer, it can grow as large as it needs to in order to support your full data set.

So now that you know more about the whats and whys of compaction, let’s dig a little deeper and look at the variables that are associated with its mechanics.

Variables and CF_OPTIONS

Rocksdb_max_background_jobs

The aim of MyRocks is to allow for concurrent compaction jobs so you can write quickly initially and then organize the data later. The variable rocksdb_max_background_jobs was mentioned in my previous blog post about persisting data from memory to disk. It is mentioned in this blog post as well because compaction jobs are also considered to be background operations. As such, this variable will limit the maximum number of compaction jobs that can be run at one time.

Background jobs created for compaction and flushing are global in the sense that you cannot reserve jobs for specific column families. With MyRocks, you don’t have an option to spread your data files over multiple directories or storage platforms, but you can in RocksDB. I’m pointing this out because if you were to put different column families on different storage platforms, you may be bound to the slowest storage platform during compaction as documented in this bug report created for the RocksDB project. I don’t know if we’ll get an option to distribute our column families over different directories in MyRocks, but I feel this serves as a good example of considerations you need to make when dealing with global variables that have an impact across all column families.

Default: 2

Using the default value of two ensures that there will be enough available background processes for at least one flush and one compaction.
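If your workload needs more parallel background work, the limit can be raised in my.cnf. An illustrative fragment, where the value 4 is a placeholder rather than a recommendation; the right number depends on your core count and I/O capacity:

```ini
[mysqld]
# Allow up to four concurrent background jobs (flushes plus compactions).
rocksdb_max_background_jobs = 4
```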

Rocksdb_max_subcompactions

You can use multiple threads to help support a single compaction process. These threads are called subcompactions. You can even use subcompactions to help with compaction from L0 to L1, which, as I noted above, is a single job process. The rocksdb_max_subcompactions variable allows you to designate the maximum number of subcompaction threads that are allowed for each compaction job.

Default: 1

Rocksdb_compaction_readahead_size

Compaction has the ability to prefetch data from files in order to ensure greater amounts of sequential reads. This variable allows you to specify the amount of data that should be pre-fetched from the data files that are participating in the compaction input process.

Default: 0

If you are using an SSD, you may not see a performance increase by attempting to establish more sequential reads as part of the compaction process. However, if you are using spinning media it’s recommended to set this value to 2Mb.

Setting this variable to a non-zero value will automatically set variable rocksdb_new_table_reader_for_compaction_inputs to ON if it’s not already in that state. Be sure to check the information about rocksdb_new_table_reader_for_compaction_inputs below so you know the impact of enabling that variable before setting rocksdb_compaction_readahead_size to a non-zero value.

Rocksdb_new_table_reader_for_compaction_inputs

Compaction jobs are going to need to read from multiple files during the input process. For example, if you are doing a compaction from L2 to L3 you first need to access the L2 file that’s going to be compacted, and then all the L3 files that overlap with the key range of the L2 file.

The variable rocksdb_new_table_reader_for_compaction_inputs allows you to work with one file descriptor per file that needs to be accessed. For those of you who are not familiar, a file descriptor is a computational resource used to access / read from a file. Using one file descriptor for multiple files means that you need to have that descriptor rotate through the files to get the info that it needs. Using multiple descriptors means you can access multiple files simultaneously, thus parallelizing the process.

Default: OFF

I would be inclined to test enabling this variable, but you have to be mindful of the fact that multiple file descriptors means more memory usage. I would test this in a non-production environment in order to determine the estimated memory footprint increase prior to implementing in production.

CF_OPTIONS target_file_size_base & target_file_size_multiplier

Once the compaction input process is complete, the compaction output process will create a series of new files in the N+1 compaction layer where N is the layer where compaction was initiated. The size of those files is determined by several factors.

The size of files in the L0 compaction layer is going to be the same size as the write_buffer_size variable (the size of the memtable) multiplied by the column family option min_write_buffer_number_to_merge (this is the number of immutable memtables that must be created prior to a flush).

The size of files in the L1 compaction layer is defined by the column family option target_file_size_base.

The size of the files in the L2 compaction layer or greater is defined by the column family option target_file_size_multiplier. Each layer’s file size will be X times greater than the one that came before it, where X is defined by the value of column family option target_file_size_multiplier. For example, if the L1 compaction layer files are set to be 64Mb and the CF option target_file_size_multiplier is set to 2, files in the L2 compaction layer would be 128Mb, files in the L3 compaction layer would be 256Mb, and so on.
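The example above works out as follows. A sketch using the illustrative values from the text (a 64Mb base and a multiplier of 2, which are not the defaults):

```python
def target_file_size(level, base=64 * 2**20, multiplier=2):
    """Target data file size for compaction layer `level` (level >= 1)."""
    return base * multiplier ** (level - 1)

for level in (1, 2, 3):
    print(f"L{level}: {target_file_size(level) // 2**20} Mb")
# -> L1: 64 Mb, L2: 128 Mb, L3: 256 Mb
```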

Defaults:

  • CF option target_file_size_base: 67108864 (64Mb)
  • CF option target_file_size_multiplier: 1

Assuming you are using the default value for write_buffer_size (64Mb) as well, this means that all data files across all compaction layers will be 64Mb.

I think that 64Mb is a good place to start for L0 and L1 files considering that smaller files should result in faster compaction jobs, which could be important considering the single job nature of compaction from L0 to L1.

As you move to greater compaction levels, you may want to consider the tradeoffs of increasing file sizes. In the case that you are doing a read that is working with a specific key value, you’ll have a lot fewer files to inspect if the key is present (based on min and max values), but in the case that you need to scan a file, you may be checking a lot more data than needed. Also, if you’re using larger files, it’s possible that compaction could take longer as it has a lot more data to merge and sort.

Rocksdb_rate_limiter_bytes_per_sec & Rocksdb_use_direct_io_for_flush_and_compaction

Rocksdb_rate_limiter_bytes_per_sec and rocksdb_use_direct_io_for_flush_and_compaction were both mentioned in my post about persisting data in memory to disk, and I would encourage you to go back and read about them in that post if you haven’t done so already. I figured they deserved a mention in this post as well considering that they also apply to the compaction output process.

Rocksdb_delete_obsolete_files_period_micros

Once compaction has finished and it has created the new data files on disk, it can delete the files that it used for compaction input, as they hold duplicate copies of the data contained within the files created by the compaction output. Remember, the whole point of compaction is to keep sequential data with as few copies of the data as possible, so as to reduce the number of files that need to be accessed during read operations.

The problem is that a file pending deletion may still be in use by another thread at the time compaction is done with it. The file cannot be deleted until it is no longer being used by other processes, but it must eventually be deleted.

The variable rocksdb_delete_obsolete_files_period_micros specifies the time period to delete obsolete files assuming they weren’t already deleted as part of compaction process.

Default: 21600000000 (6 hours)

I would be inclined to leave this variable in its default state unless I had an environment with heavy writes, long running range reads, and a large data set. With this being the case you may have a number of files that can’t be deleted during compaction and I don’t know if I would want to wait 6 hours to be rid of them.

In contrast, if this was a system with known critical long running queries or transactions, I might be inclined to increase this value.

CF_OPTION level0_file_num_compaction_trigger

So now that we know how files are read and written by compaction, it’s good to understand when compaction occurs. This will occur at different times for each compaction layer and is dependent upon the amount of data stored in each layer.

Compaction starts at L0 and works its way down to the bottommost compaction layer. The compaction trigger threshold for L0 to L1 is defined by the column family option level0_file_num_compaction_trigger. When the number of files in L0 reaches this number, MyRocks will call a job to start compaction from L0 to L1. You can determine how large the L0 compaction layer will be, in bytes, by multiplying the values of the variable write_buffer_size, the column family option min_write_buffer_number_to_merge, and the column family option level0_file_num_compaction_trigger.
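Using the defaults discussed in this series, that multiplication works out as follows:

```python
# Defaults discussed in this post:
write_buffer_size = 64 * 2**20          # 64Mb memtable
min_write_buffer_number_to_merge = 1    # immutable memtables merged per flush
level0_file_num_compaction_trigger = 4  # L0 files that trigger compaction

l0_size = (write_buffer_size
           * min_write_buffer_number_to_merge
           * level0_file_num_compaction_trigger)
print(f"L0 compaction trigger threshold: {l0_size // 2**20} Mb")  # 256 Mb
```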

Default: 4

I would recommend leaving this variable at its default value. As noted above, compaction from L0 to L1 is a single job, and compaction from L1 to L2 cannot occur while this is ongoing. The larger your L0 compaction layer is, the longer compaction from L0 to L1 will take and the more likely you’ll see interruptions for jobs trying to run compaction from L1 to L2.

CF_OPTION max_bytes_for_level_base

The compaction trigger threshold for L1 to L2 is defined by the column family option max_bytes_for_level_base. When L1 reaches the number of bytes defined by this option, it will trigger a compaction job for L1 to L2, assuming that there is not an ongoing compaction job from L0 to L1.

Default: 268435456 (256Mb)

The recommendation is to keep L1 the same size as L0. I noted how to calculate the size of L0 in the CF_OPTION level0_file_num_compaction_trigger section. If you change the size of L0, be sure to change this column family option to match.

CF_OPTION max_bytes_for_level_multiplier

The compaction trigger threshold for L2 and greater is defined by the column family option max_bytes_for_level_multiplier. L2 will be X times greater than L1 where X is the value of this variable, L3 will be X times greater than L2, and so on.

Default: 10

CF OPTION num_levels

MyRocks will continue to write files lower and lower into compaction layers until it hits the maximum compaction layer, which is defined by this column family option. Keep in mind that compaction layers start at L0, so if this value is 7 then the maximum compaction layer is L6.

Default: 7

There is some fairly complex math that you can use to determine how many compaction levels you need to minimize write amplification, but I think it goes outside of the scope of this blog series. If you want to know more about this I would recommend this blog post by Mark Callaghan which provided the details and was then followed up by this blog post which offered a few corrections.

CF OPTION level_compaction_dynamic_level_bytes

In the sections above I’ve described how MyRocks determines the size of each compaction layer. By default the rules will be to use the max_bytes_for_level_base to establish the size of the L1 compaction layer and then make each subsequent compaction layer X times larger than the previous where X is the value of the variable max_bytes_for_level_multiplier. However, there is another way that the size of each compaction layer can be determined and that’s when we shift from the standard definition to the dynamic definition.

In the dynamic definition of compaction level sizing, the size of each compaction level is determined by the amount of data in the bottommost compaction layer, as opposed to using the topmost compaction layer as we would in the standard definition.

The dynamic framework is first defined by using the max_bytes_for_level_base and max_bytes_for_level_multiplier variables, just like standard mode. For example, if we have max_bytes_for_level_base set to 268435456 (256Mb), max_bytes_for_level_multiplier set to 10, and num_levels set to 7, the maximum target compaction thresholds would be defined as follows.

L0: 256Mb (assuming you use default values noted in the level0_file_num_compaction_trigger section above)

L1: 256Mb

L2: 2.5 Gig

L3: 25 Gig

L4: 250 Gig

L5: 2.44 Tb

L6: 24.41 Tb

This defines the maximum compaction thresholds. However, the actual compaction thresholds change dynamically based on the total size of the data in the bottommost compaction layer. Each compaction layer above it would be 1/X the size of the compaction layer beneath it, where X is the value of max_bytes_for_level_multiplier. To illustrate this, let’s use another example. Let’s use the exact same variables, but assume there is 256 Gb of data in the L6 compaction layer. Each compaction layer threshold would be as follows.

L0: 256Mb (assuming you use default values noted in the level0_file_num_compaction_trigger section above)

L1: 2 Mb

L2: 26 Mb

L3: 262 Mb

L4: 2.56Gb

L5: 25.6 Gb

L6: 256 Gb
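The listing above can be derived by walking up from the bottommost layer and dividing by the multiplier at each step. A sketch, with sizes approximate in the same way as in the listing:

```python
def dynamic_thresholds(bottom_bytes, multiplier=10, num_levels=7):
    """Derive each layer's threshold from the bottommost layer's actual size."""
    thresholds = {}
    size = bottom_bytes
    for level in range(num_levels - 1, 0, -1):  # walk up from L6 to L1
        thresholds[level] = size
        size //= multiplier
    return thresholds

for level, size in sorted(dynamic_thresholds(256 * 2**30).items()):
    print(f"L{level}: {size / 2**20:,.0f} Mb")
```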

Assuming that the variables for data file sizes are default, the data file size would be 64Mb when it starts at L0. Given that L1 and L2 do not have a threshold size large enough to accommodate 64Mb, compaction will skip L1 and L2 and compact to L3 directly.

The major difference here is that under standard compaction, the data starts at L0, then fills L1, then L2, then L3, and so on until compaction starts to fill L6. With the dynamic method, the opposite is true. Data starts in L0 (which it always does) and then will start to fill L6, then L5 once its threshold exceeds the data file size, then L4 once its threshold exceeds the data file size, and so on.

You can think of the standard method as a top-down data fill, where the dynamic method fills the data a lot like the bottom half of an hourglass – the bottom has to fill to some extent before sand can rest on top of it. Somewhat like the diagram below:

Default: Off

I’m going to be covering compression and bloom filters in a later post and I think it would be best to review this section again after reading that post. The reason is that you can configure your column family to take on different characteristics for bloom filters and compression at different levels of compaction.

The main point here is that if you are customizing the characteristics of each compaction layer, then you’re going to need to take that into account if you enable this variable. However, if you choose not to implement compression or bloom filtering (not recommended), you may want to consider enabling this variable as it offers greater assurance of a stable LSM-tree structure.

Rocksdb_compaction_sequential_deletes_file_size

Compaction layer size thresholds aren’t the only thing that can trigger compaction. There are two other triggers and the first of the two is an examination of how much sequential data in a data file is marked for deletion. If you have deleted a large batch of rows, you can save yourself a bit of overhead in the read process by removing data files that contain the records to be deleted as well as the data file that contains the delete markers.

This process occurs in multiple steps, the first of which is designated by the variable Rocksdb_compaction_sequential_deletes_file_size. Any file larger than the value of this variable will be checked for delete markers.

Default: 0 (bytes)

The default value disables this feature. Also, per the documentation, you can set this variable to -1 which, per the source code, also disables the feature.

With all the rest of the file size variables and column family options set to default, every data file in the data set should be approximately 64Mb. If you change the nature of file sizing, you may want to adjust this accordingly so only some files get checked, depending on your use case.

The only reason why I would consider disabling this feature is to reduce write amplification. The more compaction jobs that are triggered, the more write I/O will be experienced.

Rocksdb_compaction_sequential_deletes_window

The next part of the process is to read the data file in search of sequential deletes. There has to be a certain number of sequential deletes and those sequential deletes need to be within a specific range (which is called a window in this case). That range of records is defined by the variable rocksdb_compaction_sequential_deletes_window.

Default: 0 (disabled)

Rocksdb_compaction_sequential_deletes

Within the window, the number of sequential delete markers that need to be present in order to trigger a compaction job is defined by the variable Rocksdb_compaction_sequential_deletes.

Default: 0 (disabled)
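Taken together, the three sequential-delete settings might look like this in my.cnf. The values shown are illustrative placeholders only; all three default to 0, i.e. disabled:

```ini
[mysqld]
# Only scan data files larger than ~32Mb for delete markers.
rocksdb_compaction_sequential_deletes_file_size = 33554432
# Examine delete markers within a sliding window of 128K records...
rocksdb_compaction_sequential_deletes_window = 131072
# ...and trigger compaction once 50K delete markers fall inside the window.
rocksdb_compaction_sequential_deletes = 50000
```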

Rocksdb_compaction_sequential_deletes_count_sd

There are two kinds of delete markers that can exist in a MyRocks data file: a Delete(), which is the standard marker to delete the record, and a SingleDelete(), which deletes the most recent entry for a record so long as the single delete marker and the record itself line up during compaction. This means that old versions of the record may still be available during read processes after the single delete operation is performed.

The variable rocksdb_compaction_sequential_deletes_count_sd determines if single delete markers should be counted when scanning for sequential delete markers in data files.

Default: OFF

Rocksdb_compact_cf

The last mechanism that can be used to trigger compaction is manual compaction. Using variables in MyRocks, you can manually initiate compaction processes. The first variable of note is rocksdb_compact_cf. Using this variable, you can pass in the name of a column family and MyRocks will do a full compaction of the dataset from L0 all the way to the bottommost compaction layer. During my tests, I have found that this does not block incoming reads or writes, much in the same way that automated compaction processes are non-blocking.

Default: Blank

When you pass in the name of a column family, the variable will automatically clear again. Considering the nature of how you interact with this variable, it’s not really a variable at all, but just a way to manually trigger the compaction process.
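Triggering a manual compaction of a single column family looks like this (illustrative; 'default' stands in for the name of your column family):

```sql
SET GLOBAL rocksdb_compact_cf = 'default';
-- Once the compaction has been triggered, the variable reads back as empty:
SHOW GLOBAL VARIABLES LIKE 'rocksdb_compact_cf';
```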

Rocksdb_force_flush_memtable_and_lzero_now

The other option for performing manual compaction is to use the variable rocksdb_force_flush_memtable_and_lzero_now. A very similar variable, rocksdb_force_flush_memtable_now, was mentioned in my earlier post about persisting in-memory data to disk as a way to flush memtables to L0. This variable works similarly, but adds a step of compacting L0 files to L1 for all column families.

Default: OFF

Much like rocksdb_compact_cf, the value of this variable will return to OFF once it has completed the requested flush and compaction, so it doesn’t really work like a variable either, but rather as a way to manually trigger a flush and L0 compaction.

Compaction Stats

There is a group of metrics available specific to compaction that’s available in the information_schema table ROCKSDB_COMPACTION_STATS. This will provide a set of compaction stats for each column family that you have in your data set. Instead of listing them all here, as there are many, I would recommend checking this page in the RocksDB wiki in order to get a better understanding of what is available to you.

Associated Metrics

Here are some of the metrics you should be paying attention to when it comes to compaction:

You can find the following information using system status variables.

  • Rocksdb_compact_read_bytes: The number of bytes that have been read by a compaction input process since the last MySQL restart.
  • Rocksdb_compact_write_bytes: The number of bytes that have been written by a compaction process since the last MySQL restart.
  • Rocksdb_compaction_key_drop_new: The number of times a record was removed by compaction due to a newer version of the record being available since the last MySQL restart.
  • Rocksdb_compaction_key_drop_obsolete: The number of times a record was removed by compaction due to the record no longer existing since the last MySQL restart.
  • Rocksdb_stall_l0_file_count_limit_slowdowns: The number of times that write slowdowns occurred due to compaction layer L0 being close to full since the last MySQL restart.
  • Rocksdb_stall_l0_file_count_limit_stops: The number of times that write stalls occurred due to compaction layer L0 being full since the last MySQL restart.
  • Rocksdb_stall_locked_l0_file_count_limit_slowdowns: The number of times that write slowdowns occurred due to compaction layer L0 being close to full at a time when compaction from L0 was already in progress since the last MySQL restart.
  • Rocksdb_stall_locked_l0_file_count_limit_stops: The number of times that write stalls occurred due to compaction layer L0 being full at a time when compaction from L0 was already in progress since the last MySQL restart.

In the information_schema.ROCKSDB_CFSTATS table you can find the following information about each column family.

  • COMPACTION_PENDING: Shows the current number of pending compaction requests

In the performance_schema, you may find the following setup instrument to be helpful.

  • wait/synch/mutex/rocksdb/signal manual compaction: Shows the amount of mutex wait time during the manual initiation of compaction.

In addition to the metrics noted above, I would recommend you take time to review the following tables as they all contain a wealth of compaction related metrics as well as how to link your tables and column families to data files on disk.

  • information_schema.ROCKSDB_COMPACTION_STATS
  • information_schema.ROCKSDB_DDL
  • information_schema.ROCKSDB_INDEX_FILE_MAP

In this post we covered the details regarding compaction, what it is, and why it’s needed. Compaction is a critical part of the MyRocks solution so I would recommend familiarizing yourself with this system as best you can before considering the implementation of MyRocks as part of your data platform.

Stay tuned for my next post where I’m going to cover compression and bloom filters!

MariaDB Track at Percona Live

Less than one month left until Percona Live. This time the Committee work was a bit unusual. Instead of having one big committee for the whole conference we had a few mini-committees, each responsible for a track. Each independent mini-committee, in turn, had a leader who was responsible for the whole process. I led the MariaDB track. In this post, I want to explain how we worked, which topics we have chosen, and why.

For MariaDB, we had seven slots: five for 50-minute talks and two for 25-minute talks, and we received 19 submissions. We had to reject two out of three proposals. We also had to decide how many topics the program should cover. My aim here was to use the MariaDB track to demonstrate as many unique MariaDB features as possible. I also wanted to have as many speakers as possible, considering the number of slots we had available.

The committee agreed, and we tried our best for the program to cover the various topics. If someone sent us two or more proposals, we chose only one to allow more speakers to attend.

We also looked to identify gaps in the submitted sessions. For example, if we wanted a topic to be covered and no one sent a proposal on that subject, we invited potential speakers and asked them to submit with that topic in mind. Or we asked those who had already submitted similar talks to improve them.

In the end, we have five 50-minute sessions, one MariaDB session in the MySQL track, two 25-minute sessions, one tutorial, and one keynote. All of them are by different speakers.

The Program

The first MariaDB event will be a tutorial: “Expert MariaDB: Harness the Strengths of MariaDB Server” by Colin Charles on Tuesday, May 28.

Colin started his MySQL career as a Community Engineer back in the MySQL AB times. He worked on numerous MySQL events, both big and small, including Percona Live’s predecessor, O’Reilly’s MySQL Conference and Expo. Colin joined Monty Program Ab and MariaDB Corporation as Chief Evangelist, then spent two years as Chief Evangelist at Percona. Now he is an independent consultant at his own company, GrokOpen.

Colin will not only talk about unique MariaDB features up to version 10.4, but will also help you try all of them out. This tutorial is a must-attend for everyone interested in MariaDB.

Next day: Wednesday, May 29 – the first conference day – will be the MariaDB Track day.

MariaDB talks will start with the keynote by Vicentiu Ciorbaru about new MariaDB features in version 10.4. He will highlight all the significant additions in this version.

Vicentiu started his career at MariaDB Foundation as a very talented Google Summer of Code student. His first project was Roles. Then he worked a lot on MariaDB Optimizer, bug fixes, and code maintenance. At the same time, he discovered a talent for public speaking, and now he is the face of MariaDB Foundation.

We at the committee had a hard choice: either to accept his 50-minute session proposal or to ask him to give a keynote. This decision was not easy, because a keynote is shorter than 50 minutes. At the same time, though, everyone at the conference will be able to see it. Brand new features of version 10.4 are a very important topic. Therefore, we decided that it would be best to have Vicentiu as a keynote speaker.

Morning sessions

Sessions will start with a talk by Alexander Rubin, “Opensource Column Store Databases: MariaDB ColumnStore vs. ClickHouse”. Alex began his MySQL career as a web developer, then joined MySQL AB as a consultant. He then moved to Percona as Principal Architect. It was our loss when he left Percona to start applying his recommendations himself on behalf of the medical startup VirtualHealth! During his career as a MySQL consultant, he tried all the sexiest database products, loaded terabytes of data into them, and ran deadly intensive loads. He is the one who knows best about database strengths and weaknesses. I would recommend his session to everyone who is considering a column store solution.

The next talk is “Galera Cluster New Features” by Seppo Jaakola. This session is about the long-awaited Galera 4 library. Seppo is one of the three founders of Codership Oy: the company which brought us the Galera library. Before the year 2007, when the Galera library was first released, MySQL users had to choose between asynchronous replication and asynchronous replication (that’s not a typo). Seppo brought us a solution which allowed us to continue using InnoDB in the style we were used to while writing to all nodes. The Galera library looks after the data consistency. After more than ten years the product is mature and leaves its competitors far behind. The new version brings us streaming replication technology and other improvements which relax usage limitations and make Galera Cluster more stable. I recommend this session for everyone who looks forward to a synchronous replication future.

Afternoon sessions

After the lunch break, we will meet MariaDB users Sandeep Jangra and Andre Van Looveren, who will show how they use MariaDB at Walmart in their talk “Lessons Learned Building a Fully Automated Database Platform as a Service Using Open Source Technologies in the Cloud”. Sandeep and Andre manage more than 6000 MariaDB installations. In addition to setting up automation, they have experience with migrations and upgrades. This talk will be an excellent case study, which I recommend to everyone who is considering implementing automation for a farm of MariaDB or MySQL servers.

The next topic is “MariaDB Security Features and Best Practices” by Robert Bindar. Robert is a Server Developer at MariaDB Foundation. He will cover best security practices for MariaDB deployments, including the latest security features added to version 10.4.

At 4:15 pm we will have two MariaDB topics in parallel

“MariaDB and MySQL – What Statistics Optimizer Needs Or When and How Not to Use Indexes” by Sergei Golubchik – a member of the MariaDB Foundation Board – covers optimization techniques which are often ignored in favor of indexes. Sergei worked on MySQL, and then on MariaDB, from their very first days. I’ve known him since 2006 when I joined the MySQL team. Whenever I am struggling to find out how a particular piece of code works, just a couple of words from Sergei help to solve the issue! He has encyclopedic knowledge of both MariaDB and MySQL. In this session, Sergei will explain which statistics the optimizer can use in addition to indexes. While he will focus on specific MariaDB features, he will cover MySQL too. Spoiler: these are not only histograms!

Backups in the MySQL track…

In the parallel MySQL track, Iwo Panowicz and Juan Pablo Arruti will speak about backups in their “Percona XtraBackup vs. Mariabackup vs. MySQL Enterprise Backup”. Iwo and Juan Pablo are Support Engineers at Percona. Iwo joined Percona two years ago, and now he is one of the most senior engineers in the EMEA team. Linux, PMM, analyzing core files, engineering best practices: Iwo is well equipped to answer all these and many more questions. Juan Pablo works in the American Support team on everything around MariaDB and MySQL: replication, backup, performance issues, data corruption… Through their support work, Iwo and Juan Pablo have had plenty of chances to find out the strengths and weaknesses of different backup solutions.

The three tools they will cover in the talk can be used to make a physical backup of MySQL and MariaDB databases, which is the fastest and most recommended way to back up an actively used server. But what is the difference? When and why should you prefer one instrument over another? Iwo and Juan Pablo will answer these questions.

At the end of the day we will have two 25-minute sessions

Jim Tommaney will present “Tips and Tricks with MariaDB ColumnStore”. Unlike Alex Rubin, who is an end user of ColumnStore databases, Jim comes from the other side: development. Thus his insights into MariaDB ColumnStore should be fascinating. If you are considering ColumnStore, this topic is a must-attend!

Daniel Black will close the day with his talk “Squash That Old Bug”. This is the topic I personally am looking forward to the most! Not only because I stick with bugs. But, well… the lists of accepted patches which Daniel posts to the MariaDB and MySQL servers are impressive. Especially when you know how strict the quality control for external patches is in MariaDB and MySQL! In his talk, Daniel is going to help you start contributing yourself, and to do it successfully, so that your patches are accepted. This session is very important for anyone who has asked themselves why one or another MariaDB or MySQL bug has not been fixed for a long time. I do not know a single user who has not asked that question!


This blog about the MariaDB track at Percona Live covers eight sessions, one keynote, one tutorial, 12 speakers, and seven mini-committee members – two of whom are also speakers. We worked hard, and continue to work hard, to bring you a great MariaDB program.

I cannot wait for the show to begin!

Photo by shannon VanDenHeuvel on Unsplash

Connector/Python Connection Attributes

MySQL Server has since version 5.6 supported connection attributes for the clients. This has allowed a client to provide information such as which program or connector the client is, the client version, the license, etc. The database administrator can use this information for example to verify whether all clients have been upgraded, which client is executing a particular query, and so forth.

In MySQL 8.0.16 this feature has been included for the X DevAPI in the MySQL connectors as well, including MySQL Connector/Python which I will cover in this blog. First though, let’s take a look at how the attributes are exposed in MySQL Server.

The built-in MySQL Connector/Python connection attributes

Connection Attributes in MySQL Server

The connection attributes are made available in MySQL Server through two tables within the Performance Schema: session_account_connect_attrs and session_connect_attrs. The two tables have the same definition – the only difference is for which connections they show the connection attributes.

The session_account_connect_attrs table includes the attributes for connections using the same user account as for the one querying the table. This is useful if you want to grant permission for a user to check the attributes for their own connections but not for other connections.

On the other hand, session_connect_attrs shows the attributes for all connections. This is useful for the administrator to check the attributes for all users.


It is up to the client what attributes they want to expose and the values they provide. In that sense, you can only trust the attributes to the same extent that you trust the clients to submit correct values.

The tables have four columns:

  • PROCESSLIST_ID: This is the same ID as in SHOW PROCESSLIST or the PROCESSLIST_ID column in performance_schema.threads.
  • ATTR_NAME: This is the name of the attribute, for example _client_name.
  • ATTR_VALUE: This is the value for the attribute, for example mysql-connector-python.
  • ORDINAL_POSITION: The attributes have an order. The ordinal position specifies the position of the attribute. The first attribute for the connection has ordinal position 0, the next 1, and so forth.

The PROCESSLIST_ID and ATTR_NAME together form the primary key of the tables.

Now that you know how to query the table, let’s take a look at how it works with MySQL Connector/Python.

Connector/Python Attributes

There are essentially three different ways to use the connection attributes from MySQL Connector/Python. You can have them disabled. This is the default and means that no attributes will be provided. You can enable them and use the default attributes. And finally, there is support for providing custom attributes. Let’s look at each of the two cases where attributes are enabled.


If you want to learn more about MySQL Connector/Python, then I have written MySQL Connector/Python Revealed published by Apress. The book both covers the traditional Python Database API (PEP 249) and the X DevAPI which is new as of MySQL 8.

The book is available from Apress (print and DRM free ePub+PDF), Amazon (print and Kindle), Barnes & Noble (print), and others.

Enabled with Default Attributes

If you want your Python program to connect to MySQL using the default attributes, all you need to do is to set the connection-attributes option to True. You can do that in one of two ways, depending on how you set the connection arguments.

If you use a dictionary of arguments, you add connection-attributes as a key with the value set to True:

import mysqlx

connect_args = {
    "host": "",
    "port": 33060,
    "user": "pyuser",
    "password": "Py@pp4Demo",
    "connection-attributes": True,
}

db = mysqlx.get_session(**connect_args)
p_s = db.get_schema("performance_schema")
attr = p_s.get_table("session_account_connect_attrs")

stmt = attr.select()
stmt = stmt.where("PROCESSLIST_ID = CONNECTION_ID()")
stmt.order_by("ORDINAL_POSITION")
result = stmt.execute()

print("+------+-----------------+------------------------+---------+")
print("| P_ID | ATTR_NAME       | ATTR_VALUE             | ORDINAL |")
print("+------+-----------------+------------------------+---------+")
fmt = "| {0:4d} | {1:<15s} | {2:<22s} | {3:7d} |"
for row in result.fetch_all():
    print(fmt.format(row[0], row[1], row[2], row[3]))
print("+------+-----------------+------------------------+---------+")
db.close()

The program creates the connection, then queries the performance_schema.session_account_connect_attrs table using the CRUD methods. Finally, the result is printed (note that the PROCESSLIST_ID and ORDINAL_POSITION columns have had their names shortened in the output to make it less wide – the process list ID will differ in your output):

+------+-----------------+------------------------+---------+
| P_ID | ATTR_NAME       | ATTR_VALUE             | ORDINAL |
+------+-----------------+------------------------+---------+
|   45 | _pid            | 19944                  |       0 |
|   45 | _platform       | x86_64                 |       1 |
|   45 | _os             | Windows-10.0.17134     |       2 |
|   45 | _source_host    | MY-LAPTOP              |       3 |
|   45 | _client_name    | mysql-connector-python |       4 |
|   45 | _client_version | 8.0.16                 |       5 |
|   45 | _client_license | GPL-2.0                |       6 |
+------+-----------------+------------------------+---------+

Notice that all of the attribute names start with an underscore. That means it is a built-in attribute. Attribute names starting with an underscore are reserved and can only be set by MySQL itself.

You can also connect using a URI, in that case the connection is made like the following example:

import mysqlx
import urllib.parse

uri = "mysqlx://{0}:{1}@{2}:{3}/?connection-attributes=True".format(
    "pyuser",
    urllib.parse.quote('Py@pp4Demo', safe=''),
    "",
    "33060"
)

db = mysqlx.get_session(uri)

The username, password, host, and port are added through the format() function to make the code less wide. The important thing here is the connection-attributes=True. You can also leave out =True as mentioning the connection-attributes option without any value is the same as enabling it.

What do you do if you want to add your own custom attributes? That is supported as well.

Custom Attributes

So far the connection-attributes option has just been set to True. However, it also supports taking a list or a dictionary as the argument. That can be used to set your own custom attributes.

The easiest way to understand this is to see an example:

import mysqlx

attributes = {
    "application": "my_test_app",
    "foo": "bar",
    "foobar": "",
}

connect_args = {
    "host": "",
    "port": 33060,
    "user": "pyuser",
    "password": "Py@pp4Demo",
    "connection-attributes": attributes,
}

db = mysqlx.get_session(**connect_args)
p_s = db.get_schema("performance_schema")
attr = p_s.get_table("session_account_connect_attrs")

stmt = attr.select()
stmt = stmt.where("PROCESSLIST_ID = CONNECTION_ID()")
stmt.order_by("ORDINAL_POSITION")
result = stmt.execute()

print("+------+-----------------+------------------------+---------+")
print("| P_ID | ATTR_NAME       | ATTR_VALUE             | ORDINAL |")
print("+------+-----------------+------------------------+---------+")
fmt = "| {0:4d} | {1:<15s} | {2:<22s} | {3:7d} |"
for row in result.fetch_all():
    value = row[2] if row[2] is not None else ""
    print(fmt.format(row[0], row[1], value, row[3]))
print("+------+-----------------+------------------------+---------+")
db.close()

Notice that inside the loop it is checked whether the attribute value is None (NULL in SQL). When the attribute has an empty string or no value, like the foobar attribute, it is returned as None in the result set.

Alternatively, you can specify the same three connection attributes using a list:

attributes = [
    "application=my_test_app",
    "foo=bar",
    "foobar",
]

connect_args = {
    "host": "",
    "port": 33060,
    "user": "pyuser",
    "password": "Py@pp4Demo",
    "connection-attributes": attributes,
}

You can also use a URI, of course. You need to use the list syntax for that:

uri = ("mysqlx://{0}:{1}@{2}:{3}/"
       + "?connection-attributes=[application=my_test_app,foo=bar,foobar]").format(
    "pyuser",
    urllib.parse.quote('Py@pp4Demo', safe=''),
    "",
    "33060",
)

These all give the same result (the process list ID will differ):

+------+-----------------+------------------------+---------+
| P_ID | ATTR_NAME       | ATTR_VALUE             | ORDINAL |
+------+-----------------+------------------------+---------+
|   74 | _pid            | 20704                  |       0 |
|   74 | _platform       | x86_64                 |       1 |
|   74 | _os             | Windows-10.0.17134     |       2 |
|   74 | _source_host    | MY-LAPTOP              |       3 |
|   74 | _client_name    | mysql-connector-python |       4 |
|   74 | _client_version | 8.0.16                 |       5 |
|   74 | _client_license | GPL-2.0                |       6 |
|   74 | application     | my_test_app            |       7 |
|   74 | foo             | bar                    |       8 |
|   74 | foobar          |                        |       9 |
+------+-----------------+------------------------+---------+

Notice that the built-in attributes are still included and the custom attributes have been added at the end.
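The URI and list forms shown above can be generated with a small helper. The following is purely an illustrative sketch: build_uri and its handling of empty values are my own, not part of the connector, and "localhost" merely stands in for your server.

import urllib.parse

def build_uri(user, password, host, port, attributes=None):
    # attributes may be None (feature disabled), True (default
    # attributes only), or a dict of custom attributes where an empty
    # string means "attribute without a value".
    uri = "mysqlx://{0}:{1}@{2}:{3}/".format(
        user, urllib.parse.quote(password, safe=""), host, port)
    if attributes is True:
        uri += "?connection-attributes=True"
    elif isinstance(attributes, dict):
        pairs = ",".join(
            name if value == "" else "{0}={1}".format(name, value)
            for name, value in attributes.items())
        uri += "?connection-attributes=[{0}]".format(pairs)
    return uri

uri = build_uri("pyuser", "Py@pp4Demo", "localhost", 33060,
                {"application": "my_test_app", "foo": "bar", "foobar": ""})

The resulting string can then be passed straight to mysqlx.get_session().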

That concludes this introduction to connection attributes with the MySQL Connector/Python X DevAPI. I recommend enabling them by default, even if just for the built-in attributes. The attributes can be very handy when debugging issues on the server.

Constant-Folding Optimization in MySQL 8.0

In MySQL 8.0.16 the optimizer has improved again! Comparisons of columns of numeric types with constant values are checked and folded or removed for invalid or out-of-range values. The goal is to speed up query execution.
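To illustrate the idea (this is a toy model, not MySQL's implementation), consider a NOT NULL TINYINT column: a comparison against a constant outside the type's range can be folded to a constant predicate without reading any rows. NULL-able columns need extra care, which the sketch deliberately ignores.

TINYINT_MIN, TINYINT_MAX = -128, 127

def fold_greater_than(constant, lo=TINYINT_MIN, hi=TINYINT_MAX):
    # Fold "col > constant" for a NOT NULL column of the given range.
    if constant >= hi:
        return "FALSE"   # no stored value can be greater
    if constant < lo:
        return "TRUE"    # every stored value is greater
    return None          # in range: the comparison must stay

folded = fold_greater_than(300)   # e.g. WHERE tiny_col > 300 folds away

When the predicate folds to FALSE, the optimizer can skip the table entirely; when it folds to TRUE, the predicate can simply be removed from the WHERE clause.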

Percona Live Presents: The First Ever TiDB Track

The PingCAP team has always been a strong supporter of Percona and the wider open source database community. As the people who work day in and day out on TiDB, an open source NewSQL database with MySQL compatibility, open source databases are what get us up in the morning, and there’s no better place to share that passion than Percona Live.

At this year’s Percona Live in Austin, Texas, we are particularly excited to bring you a full track of talks and demos on the latest developments in TiDB during Day 1 of the conference.

Who would benefit from the TiDB track

The TiDB track is designed to share technical know-how, reproducible benchmarks (no benchmark-eting), and best practices with developers, DBAs, and practitioners in general on how TiDB can solve their problems. There are seven talks in total by folks from PingCAP and Intel that cover the full gamut of how you can test, migrate, and use TiDB in the cloud to solve technical problems and deliver business value. Here’s a rundown of the talk topics:

Phew! That’s a lot. I hope you are excited to join us for this track. As Peter Zaitsev and Morgan Tocker (one of the TiDB track speakers) noted in a recent Percona webinar, there’s a lot TiDB can do to help scale MySQL while avoiding common manual sharding issues. This track will peel the onion to show you all the fun stuff under the hood.

Whose presentations do you look forward to?

Besides the TiDB track, there are many other presentations we are excited about. In particular, I look forward to attending Stacy Yuan and Yashada Jadhav of PayPal’s talk on MySQL Security and Standardization, and Vinicius Grippa of Percona’s presentation on enhancing MySQL Security.

See you soon in Austin!

The post Percona Live Presents: The First Ever TiDB Track appeared first on Percona Community Blog.

MySQL Connectors 8.0.16 – Time to Catch up!

Hi again! It’s been some time since I wrote and shared some of our updates with you. My intention back then was to give you an update with every release (if only all my plans worked out!). However, we’ve now had three releases since I last shared an update with you, so it’s time to catch everyone up on what we’ve been working on.

All of our connectors (well, except for ODBC) implement our new X DevAPI, so I’ll briefly go over those features and then break out some of the major connector-specific features included.


Handling of default schema

We had some inconsistencies regarding how a default schema given on a connection string was handled by some of our connectors. If a user gives a schema on the connection string, such as in the example below, it is expected that myschema will be the default schema set on the session. This means that any SQL issued against that session works against that schema. Not all connectors were handling this correctly.

var session = mysqlx.getSession("mysqlx://localhost/myschema")

Various API standardization efforts

With many of our releases we’ve included various API standardization efforts. We are always comparing our connectors and trying to make sure that the “core” API is essentially the same among all of them. When we see differences, we schedule updates to correct them. Our goal is that if you understand the X DevAPI in one language, then you can understand it in all the languages.

Prepared Statement support

With 8.0.16 we released prepared statement support when working with our X DevAPI/Document Store connectors. The first thing you may notice is that there isn’t any specific API for accessing prepared statements. This fits in with our general philosophy of letting the connectors “do the right thing” for the user rather than force the user to always write code. In this case the connector will notice that you are executing a “preparable” query more than once and will automatically make use of prepared statements for you. As of right now there is no way to configure or adjust this behavior. You should see a significant performance improvement from 8.0.15 to 8.0.16 when doing something like a loop of inserts or updates.
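The "do the right thing" behavior can be pictured with a small sketch. This is purely illustrative of the idea described above, not the connector's actual internals: execution counts per statement shape are tracked, and once a preparable statement repeats, the connector switches it to a server-side prepared statement.

class AutoPrepare:
    # Illustrative sketch only: count how often each statement shape is
    # executed and switch to a server-side prepared statement once it
    # repeats.
    def __init__(self, threshold=2):
        self.threshold = threshold
        self.counts = {}
        self.prepared = set()

    def execute(self, stmt):
        self.counts[stmt] = self.counts.get(stmt, 0) + 1
        if stmt in self.prepared:
            return "execute-prepared"
        if self.counts[stmt] >= self.threshold:
            self.prepared.add(stmt)
            return "prepare-then-execute"
        return "plain-execute"

ap = AutoPrepare()
results = [ap.execute("UPDATE t SET v = ? WHERE id = ?") for _ in range(3)]
# results == ["plain-execute", "prepare-then-execute", "execute-prepared"]

This is why a loop of inserts or updates benefits automatically: from the second iteration on, only the parameters travel to the server, not the full statement text.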

Connection Pooling Enhancements

An enhancement was included in 8.0.16 to improve our connection pooling by speeding how our connectors could “reset” the internal connection to the server. This is entirely an internal improvement. Developers will see this as a faster connection pool.

Connection Attributes

Connection attributes are a feature where you can pass in key value pairs when a connection is made. These values will be stored in the server performance schema and can be queried later as a way of “profiling” your application. Think of it simply as “tagging” your connections. These values can be given when the pool is created or when old style sessions are opened. Here is an example code snippet giving some connection attributes when opening an old-style session:

var session = mysqlx.getSession("mysqlx://localhost/schema?connection-attributes=[key1=value1,key2=value2]")

Connector-specific Features

Now I’d like to make a quick list of some of the larger changes we made to our connectors that are *not* connected to the X DevAPI. These are not all of the changes in our connectors. Each connector puts out a release announcement giving the entire change list with each release.

  • Connector/CPP
    • Support for MSVC 2017
    • Improved internal string handling
    • Improved our CMake-based build system
  • Connector/Python
    • Removed support for Django versions older than 1.11
  • Connector/Net
    • Can now use SSL certificates in the PEM file format

As always, we really appreciate your willingness to work with our products and look forward to your feedback.

What’s New in ProxySQL 2.0

ProxySQL is one of the best proxies out there for MySQL. It introduced a great number of options for database administrators. It made it possible to shape the database traffic by delaying, caching or rewriting queries on the fly. It can also be used to create an environment in which failovers will not affect applications and will be transparent to them. We already covered the most important ProxySQL features in previous blog posts:

We even have a tutorial covering ProxySQL showing how it can be used in MySQL and MariaDB setups.

Quite recently ProxySQL 2.0.3 was released, a patch release for the 2.0 series. Bugs are being fixed, and the 2.0 line seems to be getting the traction it deserves. In this blog post we would like to discuss the major changes introduced in ProxySQL 2.0.

Causal Reads Using GTID

Everyone who has had to deal with replication lag and struggled with read-after-write scenarios affected by it will definitely be very happy with this feature. So far, in MySQL replication environments, the only way to ensure causal reads was to read from the master (and it doesn’t matter if you use asynchronous or semisynchronous replication). Another option was to go for Galera, which has had an option for enforcing causal reads since, like, always (first it used to be wsrep-causal-reads and now it is wsrep-sync-wait). Quite recently (in 8.0.14) MySQL Group Replication got a similar feature. Regular replication, though, on its own, cannot deal with this issue. Luckily, ProxySQL is here, and it brings us an option to define, on a per-query-rule basis, with which hostgroup reads matching that query rule should be consistent. The implementation comes with the ProxySQL binlog reader, and it can work with the ROW binlog format for MySQL 5.7 and newer. Only Oracle MySQL is supported due to the lack of required functionality in MariaDB. This feature and its technical details have been explained on the official ProxySQL blog.
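The core routing idea can be modeled in a few lines. This is a deliberately simplified sketch of GTID-based causal-read routing (real GTID sets are interval-based, and the replica names and transaction ids here are invented), not ProxySQL's implementation:

def route_read(required_txn, replicas, master="master"):
    # replicas maps replica name -> set of transaction ids it has
    # already applied. A replica may serve the read only if it has
    # caught up to the client's last write; otherwise the read has to
    # fall back to the master.
    for name, applied in sorted(replicas.items()):
        if required_txn in applied:
            return name
    return master

replicas = {"replica1": {1, 2, 3}, "replica2": {1, 2, 3, 4}}
target = route_read(4, replicas)   # only replica2 has applied txn 4
stale = route_read(5, replicas)    # no replica has txn 5: read from master

The practical effect is that reads stay on replicas whenever possible and only the genuinely stale cases pay the cost of hitting the master.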

SSL for Frontend Connections

ProxySQL has always had support for backend SSL connections, but it lacked SSL encryption for the connections coming from clients. This was not that big of a deal given that the recommended deployment pattern was to collocate ProxySQL on application nodes and use a secure Unix socket to connect from the app to the proxy. This is still a recommendation, especially if you use ProxySQL for caching queries (Unix sockets are faster than TCP connections, even local ones, and with cache it’s good to avoid introducing unnecessary latency). What’s good is that with ProxySQL 2.0 there’s a choice now, as it introduced SSL support for incoming connections. You can easily enable it by setting mysql-have_ssl to ‘true’. Enabling SSL does not come with an unacceptable performance impact. On the contrary, as per the results from the official ProxySQL blog, the performance drop is very low.

Native Support for Galera Cluster

Galera Cluster has been supported by ProxySQL almost since the beginning, but so far it was done through an external script that (typically) was called from ProxySQL’s internal scheduler. It was up to the script to ensure that the ProxySQL configuration was correct and that the writer (or writers) was properly detected and configured in the writers hostgroup. The script was able to detect the different states a Galera node may have (Primary, non-Primary, Synced, Donor/Desync, Joining, Joined) and mark the node accordingly as either available or not. The main issue is that the original script was never intended as anything other than a proof of concept written in Bash. Yet as it was distributed along with ProxySQL, it started to be improved and modified by external contributors. Others (like Percona) looked into creating their own scripts, bundled with their software. Some fixes were introduced in the script from the ProxySQL repository, some in the Percona version of the script. This led to confusion, and even though all commonly used scripts handled 95% of the use cases, none of the popular ones really covered all the different situations and variables a Galera cluster may end up in. Luckily, ProxySQL 2.0 comes with native support for Galera Cluster. This makes ProxySQL internally support MySQL replication, MySQL Group Replication and now Galera Cluster. The way it’s done is very similar for all three. We would like to cover the configuration of this feature as it might not be clear at first glance.

As with MySQL replication and MySQL Group Replication, a table has been created in ProxySQL:

mysql> show create table mysql_galera_hostgroups\G
*************************** 1. row ***************************
       table: mysql_galera_hostgroups
Create Table: CREATE TABLE mysql_galera_hostgroups (
    writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
    backup_writer_hostgroup INT CHECK (backup_writer_hostgroup>=0 AND backup_writer_hostgroup<>writer_hostgroup) NOT NULL,
    reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND backup_writer_hostgroup<>reader_hostgroup AND reader_hostgroup>0),
    offline_hostgroup INT NOT NULL CHECK (offline_hostgroup<>writer_hostgroup AND offline_hostgroup<>reader_hostgroup AND backup_writer_hostgroup<>offline_hostgroup AND offline_hostgroup>=0),
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
    max_writers INT NOT NULL CHECK (max_writers >= 0) DEFAULT 1,
    writer_is_also_reader INT CHECK (writer_is_also_reader IN (0,1,2)) NOT NULL DEFAULT 0,
    max_transactions_behind INT CHECK (max_transactions_behind>=0) NOT NULL DEFAULT 0,
    comment VARCHAR,
    UNIQUE (reader_hostgroup),
    UNIQUE (offline_hostgroup),
    UNIQUE (backup_writer_hostgroup))
1 row in set (0.00 sec)

There are numerous settings to configure and we will go over them one by one. First of all, there are four hostgroups:

  • Writer_hostgroup - it will contain all the writers (with read_only=0) up to the ‘max_writers’ setting. By default it is just one writer
  • Backup_writer_hostgroup - it contains remaining writers (read_only=0) that are left after ‘max_writers’ has been added to writer_hostgroup
  • Reader_hostgroup - it contains readers (read_only=1), it may also contain backup writers, as per ‘writer_is_also_reader’ setting
  • Offline_hostgroup - it contains nodes which were deemed not usable (either offline or in a state which makes them impossible to handle traffic)

Then we have remaining settings:

  • Active - whether the entry in mysql_galera_hostgroups is active or not
  • Max_writers - how many nodes at most can be put in the writer_hostgroup
  • Writer_is_also_reader - if set to 0, writers (read_only=0) will not be put into reader_hostgroup. If set to 1, writers (read_only=0) will be put into reader_hostgroup. If set to 2, nodes from backup_writer_hostgroup will be put into reader_hostgroup. This one is a bit complex therefore we will present an example later in this blog post
  • Max_transactions_behind - based on wsrep_local_recv_queue, the maximum queue length that’s acceptable. If the queue on a node exceeds max_transactions_behind, the node will be marked as SHUNNED and it will not be available for traffic

The main surprise might be the handling of the readers, which is different from how the script included with ProxySQL worked. First of all, what you have to keep in mind is that ProxySQL uses read_only=1 to decide whether a node is a reader or not. This is common in replication setups, but not that common in Galera. Therefore, most likely, you will want to use the ‘writer_is_also_reader’ setting to configure how readers should be added to the reader_hostgroup. Let’s consider three Galera nodes, all of which have read_only=0. We also have max_writers=1 as we want to direct all writes towards one node. We configured mysql_galera_hostgroups as follows:

SELECT * FROM mysql_galera_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 10
backup_writer_hostgroup: 30
       reader_hostgroup: 20
      offline_hostgroup: 40
                 active: 1
            max_writers: 1
  writer_is_also_reader: 0
max_transactions_behind: 0
                comment: NULL
1 row in set (0.00 sec)

Let’s go through all the options:


writer_is_also_reader=0:

mysql> SELECT hostgroup_id, hostname FROM runtime_mysql_servers;
+--------------+------------+
| hostgroup_id | hostname   |
+--------------+------------+
| 10           |            |
| 30           |            |
| 30           |            |
+--------------+------------+
3 rows in set (0.00 sec)

This outcome is different from what you would see with the script - there you would have the remaining nodes marked as readers. Here, given that we don’t want writers to be readers and there is no node with read_only=1, no readers are configured: one writer (as per max_writers), and the remaining nodes in the backup_writer_hostgroup.


writer_is_also_reader=1:

mysql> SELECT hostgroup_id, hostname FROM runtime_mysql_servers;
+--------------+------------+
| hostgroup_id | hostname   |
+--------------+------------+
| 10           |            |
| 20           |            |
| 20           |            |
| 20           |            |
| 30           |            |
| 30           |            |
+--------------+------------+
6 rows in set (0.00 sec)

Here we want our writers to act as readers, therefore all of them (active and backup) will be put into the reader_hostgroup.


writer_is_also_reader=2:

mysql> SELECT hostgroup_id, hostname FROM runtime_mysql_servers;
+--------------+------------+
| hostgroup_id | hostname   |
+--------------+------------+
| 10           |            |
| 20           |            |
| 20           |            |
| 30           |            |
| 30           |            |
+--------------+------------+
5 rows in set (0.00 sec)

This is a setting for those who do not want their active writer to handle reads. In this case only nodes from the backup_writer_hostgroup will be used for reads. Please also keep in mind that the number of readers will change if you set max_writers to another value. If we set it to 3, there would be no backup writers (all nodes would end up in the writer hostgroup) and thus, again, no nodes in the reader hostgroup.
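The assignment logic described in the three cases above can be sketched as a small Python model (a simplified illustration, not ProxySQL's actual implementation):

```python
# Simplified model of how ProxySQL 2.0 assigns Galera nodes to hostgroups,
# based on max_writers and writer_is_also_reader. Illustrative only.

def assign_hostgroups(nodes, max_writers, writer_is_also_reader):
    """nodes: list of (hostname, read_only) tuples.
    Returns a dict mapping hostgroup role -> list of hostnames."""
    writers = [h for h, ro in nodes if ro == 0]
    readers = [h for h, ro in nodes if ro == 1]

    groups = {
        "writer": writers[:max_writers],          # up to max_writers nodes
        "backup_writer": writers[max_writers:],   # remaining writers
        "reader": list(readers),                  # nodes with read_only=1
    }

    if writer_is_also_reader == 1:
        # all writers (active and backup) also serve reads
        groups["reader"] += writers
    elif writer_is_also_reader == 2:
        # only backup writers serve reads
        groups["reader"] += groups["backup_writer"]
    return groups

# Three Galera nodes, all read_only=0, max_writers=1, writer_is_also_reader=0:
print(assign_hostgroups([("node1", 0), ("node2", 0), ("node3", 0)], 1, 0))
```

With writer_is_also_reader=0 this yields one writer, two backup writers and no readers; with 1 it yields six hostgroup entries and with 2 it yields five, matching the runtime_mysql_servers outputs shown above.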


Of course, you will want to configure query rules according to the hostgroup configuration. We will not go through this process here; you can check how it can be done in the ProxySQL blog. If you would like to test how it works in a Docker environment, we have a blog which covers how to run a Galera cluster and ProxySQL 2.0 on Docker.
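For reference, a minimal read/write split rule set matching the hostgroup layout used in this example might look like this (a sketch; the rule IDs and regular expressions are illustrative, not taken from the referenced blog):

```sql
-- Route SELECT ... FOR UPDATE (and, by default, everything unmatched)
-- to the writer hostgroup 10, and plain SELECTs to the reader hostgroup 20.
INSERT INTO mysql_query_rules
  (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (100, 1, '^SELECT.*FOR UPDATE', 10, 1),
       (200, 1, '^SELECT', 20, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```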

Other Changes

What we described above are the most notable improvements in ProxySQL 2.0. There are many others, as per the changelog. Worth mentioning are improvements around query cache (for example, addition of PROXYSQL FLUSH QUERY CACHE) and change that allows ProxySQL to rely on super_read_only to determine master and slaves in replication setup.

We hope this short overview of the changes in ProxySQL 2.0 will help you to determine which version of ProxySQL you should use. Please keep in mind that the 1.4 branch, even though it will not get any new features, is still maintained.

Tags:  proxysql MySQL load balancer

(MySQL) Logged and Loaded: authentication_ldap_simple_log_status or authentication_ldap_sasl_log_status not effective?

A quick one in case anyone else hits the same problem as I encountered.  The documentation for authentication_ldap_sasl_log_status and  authentication_ldap_simple_log_status states these variables can be set to a value between 1 and 5 to control the types of messages logged.

If you set them and still find nothing is logged, then set log_error_verbosity to its maximum value of 3 and you should find the messages are output to the error log as expected.
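In other words (a minimal sketch, using authentication_ldap_simple_log_status as the example):

```sql
-- Ask the LDAP plugin for its most detailed logging ...
SET GLOBAL authentication_ldap_simple_log_status = 5;
-- ... and raise the error-log verbosity so those messages actually appear.
SET GLOBAL log_error_verbosity = 3;
```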

That’s all!

Finding Tables without Primary Key

Having a primary key defined for each user table is best practice for performance in InnoDB. And when using Group Replication or InnoDB Cluster for automatic high availability it is even mandatory. So it is wise to check if you have tables running without primary key. You can identify these tables by running:
SELECT t.table_schema, t.table_name
  FROM information_schema.tables AS t
  LEFT JOIN information_schema.key_column_usage AS c
    ON (t.table_name = c.table_name
        AND c.constraint_schema = t.table_schema
        AND c.constraint_name = 'PRIMARY')
 WHERE t.table_schema NOT IN ("mysql", "information_schema",
                              "performance_schema", "sys")
   AND c.constraint_name IS NULL
   AND t.table_type = "BASE TABLE";
And if you want to make life even easier, you can add this as a report to the sys schema:
CREATE VIEW sys.schema_tables_without_pk AS
SELECT t.table_schema, t.table_name
  FROM information_schema.tables AS t
  LEFT JOIN information_schema.key_column_usage AS c
    ON (t.table_name = c.table_name
        AND c.constraint_schema = t.table_schema
        AND c.constraint_name = 'PRIMARY')
 WHERE t.table_schema NOT IN ("mysql", "information_schema",
                              "performance_schema", "sys")
   AND c.constraint_name IS NULL
   AND t.table_type = "BASE TABLE";
It is easy to detect but a little more challenging to solve. You need to consider your application and potential load when adding a new primary key. One solution is to add a new auto_increment column; in many cases this might already help. In other cases you already have a natural primary key in your table definition, it's just not defined as such.
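Both options can be sketched as follows (the table and column names are illustrative, not from this post; the natural-key option requires that the column is already unique and NOT NULL):

```sql
-- Option 1: add a synthetic auto_increment primary key.
ALTER TABLE mydb.orders
  ADD COLUMN __id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY FIRST;

-- Option 2: promote an existing natural key.
ALTER TABLE mydb.customers
  ADD PRIMARY KEY (customer_number);
```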
To add auto_increment columns to all affected tables (which I do not recommend without thinking about it and testing first!), you can use the beauty of the Python mode in MySQL Shell:
$ mysqlsh root@localhost:33060 --py

MySQL Shell 8.0.16

Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.

Creating a session to 'root@localhost:39010'

Fetching schema names for autocompletion... Press ^C to stop.

Your MySQL connection id is 762 (X protocol)

Server version: 8.0.16-commercial MySQL Enterprise Server - Commercial

No default schema selected; type \use <schema> to set one.

MySQL  localhost:33060+ ssl  Py > l = session.get_schema("sys").get_table("schema_tables_without_pk").select().execute().fetch_all()

MySQL  localhost:33060+ ssl  Py > for val in l: session.sql("ALTER TABLE "+val[0]+"."+val[1]+" ADD COLUMN (__id int unsigned auto_increment PRIMARY KEY)").execute()

Percona Monitoring and Management (PMM) 2.0.0-alpha2 Is Now Available

We are pleased to announce the launch of PMM 2.0.0-alpha2, Percona’s second Alpha release of our long-awaited PMM 2 project! In this release, you’ll find support for MongoDB Metrics and Query Analytics – watch for sharp edges as we expect to find a lot of bugs!  We’ve also expanded our existing support of MySQL from our first Alpha to now include MySQL Slow Log as a data source for Query Analytics, which enhances the Query Detail section to include richer query metadata.

  • MongoDB Metrics – You can now launch PMM 2 against MongoDB and gather metrics and query data!
  • MongoDB Query Analytics – Data source from MongoDB Profiler is here!
  • MySQL Query Analytics
    • Queries source – MySQL Slow Log is here!
    • Sorting and more columns – fixed a lot of bugs around UI

PMM 2 is still a work in progress – expect to see bugs and other missing features! We are aware of a number of issues, but please report any and all that you find to Percona’s JIRA.

This release is not recommended for Production environments. PMM 2 Alpha is designed to be used as a new installation – please don’t try to upgrade your existing PMM 1 environment.

MongoDB Query Analytics

We’re proud to announce support for MongoDB Query Analytics in PMM 2.0.0-alpha2!

Using filters you can drill down on specific servers (and other fields):

MongoDB Metrics

In this release we’re including support for MongoDB Metrics, which means you can add a local or remote MongoDB instance to PMM 2 and take advantage of the following view of MongoDB performance:

MySQL Query Analytics Slow Log source

We’ve rounded out our MySQL support to include Slow log – and if you’re using Percona Server with the Extended Slow Log format, you’ll be able to gain deep insight into the performance of individual queries, for example, InnoDB behavior.  Note the difference between the detail available from PERFORMANCE_SCHEMA vs Slow Log:


Slow Log:

Installation and configuration

The default PMM Server credentials are:

username: admin
password: admin

Install PMM Server with docker

The easiest way to install PMM Server is to deploy it with Docker. You can run a PMM 2 Docker container with PMM Server by using the following commands (note the version tag of 2.0.0-alpha2):

docker create -v /srv --name pmm-data-2-0-0-alpha2 \
    perconalab/pmm-server:2.0.0-alpha2 /bin/true
docker run -d -p 80:80 -p 443:443 \
    --volumes-from pmm-data-2-0-0-alpha2 \
    --name pmm-server-2.0.0-alpha2 \
    --restart always perconalab/pmm-server:2.0.0-alpha2

Install PMM Client

Since PMM 2 is still not GA, you’ll need to leverage our experimental release of the Percona repository. You’ll need to download and install the official percona-release package from Percona, and use it to enable the Percona experimental component of the original repository.  See percona-release official documentation for further details on this new tool.

Specific instructions for a Debian system are as follows:

wget
sudo dpkg -i percona-release_latest.generic_all.deb

Now enable the correct repo:

sudo percona-release disable all
sudo percona-release enable original experimental

Now install the pmm2-client package:

apt-get update
apt-get install pmm2-client

Users who have previously installed pmm2-client alpha1 version should remove the package and install a new one in order to update to alpha2.

Please note that having experimental packages enabled may lead to installing further package versions that are not ready for production. To avoid this, disable the experimental component with the following commands:

sudo percona-release disable original experimental
sudo apt-get update

Configure PMM

Once PMM Client is installed, run the pmm-admin setup command with your PMM Server IP address to register your Node within the Server:

# pmm-agent setup --server-insecure-tls --server-address=<IP Address>:443

We will be moving this functionality back to pmm-admin config in a subsequent Alpha release.

You should see the following:

Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.

Adding MySQL Metrics and Query Analytics (Slow Log source)

The syntax to add MySQL services (Metrics and Query Analytics) using the new Slow Log source:

sudo pmm-admin add mysql --use-slowlog --username=pmm --password=pmm

where username and password are credentials for accessing MySQL.
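The pmm user has to exist on the monitored MySQL server first. A minimal sketch of creating it (the exact privilege list here is an assumption based on typical monitoring setups, not taken from this post; consult the PMM documentation for the authoritative grants):

```sql
-- Assumed monitoring user; adjust the password and host as needed.
CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pmm';
GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'localhost';
```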

Adding MongoDB Metrics and Query Analytics

You can add MongoDB services (Metrics and Query Analytics) with the following command:

pmm-admin add mongodb --use-profiler --use-exporter --username=pmm --password=pmm

You can then check your MySQL and MongoDB dashboards and Query Analytics in order to view your server’s performance information!

We hope you enjoy this release, and we welcome your comments on the blog!

About PMM

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL®, MongoDB®, and PostgreSQL performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL®, MongoDB®, and PostgreSQL® servers to ensure that your data works as efficiently as possible.

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Summary – Mydbops Database Meetup (Apr-2019)

Conglomeration, Collaboration and Celebration of Database Administrators

The founders of Mydbops envisioned contributing knowledge back to the community. This vision took shape in the 3rd edition of the Meetup, held on Saturday the 27th of April, 2019. This edition drew a good number of members of the open source database administration community to the venue. The core agenda was set on “High Availability concepts in ProxySQL and MaxScale”. There were also presentations on MongoDB internals, along with MySQL Orchestrator and its implementation excellence at Ola (ANI Technologies Pvt. Ltd.).

Participants from various organisations such as MariaDB, TeleDNA, CTS, OLA, Infosys, Quikr and more gathered for the meetup. The distribution of participants by organisation is depicted here (in percentage).

The introduction to the session started off with the vision and mission of “contributing to the Open Source Database Community”. The journey towards this 3rd edition of the Mydbops Database Meetup was revisited with gratitude to all its contributors. A conglomerate collaboration of three parties was on display: participants, organisers and presenters.

Keynote by Benedict Henry & Manosh Malai

The first set of speakers touched upon the core topic of ProxySQL High Availability. Reverse proxying, replication breakage handling, the basics of ProxySQL handling & topology, and ProxySQL clustering were covered by Mr. Aakash M and Mr. Vignesh Prabhu. Both MySQL 5.7 certified, they dealt with the concepts and implementation of ProxySQL High Availability in detail.

ProxySQL High Availability (Clustering) from Mydbops Presentation by S.Vignesh & M. Akash – Mydbops

Mr. Pramod R Mahto, a Technical Support Engineer at MariaDB Corporation with more than a decade of experience, gave an insightful and detailed talk on MariaDB MaxScale – High Availability Solution. His talk focused on router filters, auto-failover, auto-rejoin, switchover and other relevant modules in MariaDB MaxScale. The detailing was so insightful that the Q&A session went beyond the allotted time, with many queries raised.

MariaDB Maxscale – HA Solution from Mydbops Presentation by Pramod R Mahto – MariaDB Corporation

During the high-tea and networking break, there was an appeal to increase the frequency of future meetups. Many participants highlighted how helpful Mydbops’ blogs have been to them.


The session reconvened with the add-on topic of “MongoDB WiredTiger Internals: Journey to Transactions” by Mr. Manosh Malai and Mr. Ranjith of Mydbops, covering horizontal & vertical scaling, the WiredTiger architecture, WiredTiger internals (sessions/cursors/schema), the MVCC workflow and transaction data flush time. The participants’ enthusiasm for NoSQL and its presenters was witnessed in the number of questions raised and clarified during the Q&A session.

MongoDB WiredTiger Internals: Journey To Transactions from Mydbops Presentation by Manosh Malai & Ranjith – Mydbops

Healing of MySQL topology with MySQL Orchestrator was the topic of the last session of the day, presented by Mr. Krishna Ramanathan and Mr. Anil Yadav from OLA (ANI Technologies Pvt. Ltd.). Orchestrator, the pre-failover process, healing and the post-failover process were discussed, with a specific demo from Ola Cabs.

MySQL topology healing at OLA. from Mydbops Presentation by Anil Yadav & Krishna Ramanathan – OLA

The speakers were felicitated with gifts and beautiful bouquets by the founders of Mydbops, Karthik P R, Vinoth Kanna R S and Kabilesh P R, at the end of the event. A group photo of all the happy and enlightened participants was taken.

Felicitation of the speakers – 3rd Mydbops Database Meetup

Special thanks to Mr. Manosh Malai, Mr. Selva Venkatesh and Mr. Benedict Henry for seamlessly and smoothly organising the event, and to all the attendees/participants who helped make this event a grand success. Thanks also to the venue sponsor Thought Factory (Axis Bank Innovation Lab).

The next event is tentatively scheduled for July 2019. Follow us to know the exact date, time and venue of the 4th Mydbops Database Meetup.

MySQL Connector/Python 8.0 – A year in

It’s been a year since MySQL 8.0 became GA and 8.0.16 has just been released. A good time to look at what happened with MySQL Connector/Python over the last few years.

pypi presence

When we created our connector we knew that providing it via PyPI was important, and we used PyPI as a distribution channel. Later PEP 470 was published, which changed how packages are hosted, and we introduced the C extension, which required reworking the packaging. It took us a bit of time to get everything right, but for a while now we have been back on PyPI and you can get it not only from our downloads page, but also with a simple install using the pip tool:

$ pip install mysql-connector-python
Collecting mysql-connector-python
  Downloading (13.0MB)
    |████████████████████████████████| 13.0MB 2.0MB/s
Collecting protobuf>=3.0.0 (from mysql-connector-python)
  Downloading (1.2MB)
    |████████████████████████████████| 1.2MB 1.8MB/s
Requirement already satisfied: setuptools in ./lib/python3.5/site-packages (from protobuf>=3.0.0->mysql-connector-python) (41.0.1)
Collecting six>=1.9 (from protobuf>=3.0.0->mysql-connector-python)
  Downloading
Installing collected packages: six, protobuf, mysql-connector-python
Successfully installed mysql-connector-python-8.0.16 protobuf-3.7.1 six-1.12.0

After installation, which of course also works in a virtual environment, usage is just as if it had been installed using the other packages, without hurdles, as dependencies like protobuf are installed automatically:

$ python
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import mysql.connector
>>> cnx = mysql.connector.connect(user='root', database='sys')
>>> cursor = cnx.cursor()
>>> cursor.execute("SHOW TABLES")
>>> for (table_name) in cursor:
...     print(table_name)
...
('host_summary',)
('host_summary_by_file_io',)
('host_summary_by_file_io_type',)
....

Making the C extension default

Initially our connector was written in 100% Python. This gives great portability to all platforms and all Python runtime implementations. However, en- and decoding the network packets and doing all the related handling can take some time. To bring you the best performance we introduced, some while ago, the C extension, which builds on libmysql and, by using C, can notably improve performance. With 8.0 we took the big step and made it the default. Thus users now automatically benefit from the performance boost, without any change to the application. If, however, you want to use the pure Python implementation, just set the use_pure option to True and the old default is back. The pure Python version is still maintained for maximum compatibility and is used automatically if the C extension can’t be loaded. Whether the C extension is available can easily be checked:

>>> mysql.connector.HAVE_CEXT True

Introduction of X DevAPI and Document Store

A big change in all parts of MySQL 8.0 was the introduction of the MySQL Document Store, with a new network protocol and a new API. The MySQL Document Store allows easy access to JSON-style documents and CRUD access to tables directly from a more high-level API. Jesper gave a good introduction, so I won’t repeat it here. Of course you get the X DevAPI support as part of the PyPI package, and of course the boost from the C extension is the default here as well.

EOL for MySQL Connector/Python 2

In January we put the 2.1 series out of support to be able to fully focus on 8.0. But don’t be afraid of the big jump in the version number: version 8.0 is fully compatible. The only potential break is that we now default to the C extension, as mentioned above. The only effect should be higher performance, but if that causes an issue you can go back to the pure Python version, and please file a bug about why you need that.

More to come

Predictions are hard, especially about the future, but what I know for sure is that there is a bright future for MySQL Connector/Python. The main focus going forward is on three areas:

  • Making sure we give access to all the great current and upcoming features in the MySQL Server to Python users
  • Improve the X DevAPI to make writing new-style applications even more productive and simpler.
  • Improve integration into the Python ecosystem by continuing work on our Django integration and implementing different requests from the SQLAlchemy community.

Not really surprising goals, I guess. If you have any needs, please reach out to us!

Let the wrong one in! MySQL AD Authentication with authentication_ldap_simple

I mentioned in my previous blog entry that I had encountered an issue with this plugin, and now that it has been fixed in the latest MySQL versions (released on 2019-04-25) I think it’s reasonable to share my findings.

The following tests are with MySQL Version 8.0.13. I start by installing the plugin, pointing it at my Windows Domain Controller and creating a user associated with the DN of my Windows account.

mysql> INSTALL PLUGIN authentication_ldap_simple SONAME '';
Query OK, 0 rows affected (0.05 sec)

mysql> SET GLOBAL authentication_ldap_simple_server_host='';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER 'patrick'@'%'
    -> IDENTIFIED WITH authentication_ldap_simple
    -> BY 'CN=patrick,CN=Users,DC=WINDOWS,DC=DOMAIN';
Query OK, 0 rows affected (0.08 sec)

Next we test that everything works OK by trying to log in with my correct Windows password.

[patrick@WIN-CLIENT] C:\> mysql `
>> `
>> --user=patrick `
>> --password=Password123 `
>> --enable-cleartext-plugin
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 63
Server version: 8.0.13-commercial MySQL Enterprise Server - Commercial
...
mysql>

Also providing the wrong password denies access. Everything is as expected so far.

[patrick@WIN-CLIENT] C:\> mysql `
>> `
>> --user=patrick `
>> --password=WrongPassword `
>> --enable-cleartext-plugin
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'patrick'@''...

However what would you expect to happen if I try to log in to this account without providing a password?  Let’s see…

[patrick@WIN-CLIENT] C:\> mysql `
>> `
>> --user=patrick `
>> --enable-cleartext-plugin
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 30
Server version: 8.0.13-commercial MySQL Enterprise Server - Commercial
...
mysql> select current_user();
+----------------+
| current_user() |
+----------------+
| patrick@%      |
+----------------+
1 row in set (0.00 sec)

Huh? It lets me in! After double-checking everything I raised an SR for this, and a bug was created and fixed in 8.0.16. Apparently the 5.7 branch was also affected, and the fix is in 5.7.26 as well.

Bug #29637712: The authentication_ldap_simple plugin could enforce authentication incorrectly.

Let’s test in 8.0.16. Note one difference here: I have to set the variable authentication_ldap_simple_group_search_attr to '' (the empty string) to disable AD group checking. This behaviour, related to proxy users, seems to have been introduced in 8.0.14, but on my lab setup this step breaks authentication completely, so I disable it.

mysql> INSTALL PLUGIN authentication_ldap_simple SONAME '';
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL authentication_ldap_simple_bind_base_dn='DC=WINDOWS,DC=DOMAIN';
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL authentication_ldap_simple_group_search_attr='';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER 'patrick'@'%'
    -> IDENTIFIED WITH authentication_ldap_simple
    -> BY 'CN=patrick,CN=Users,DC=WINDOWS,DC=DOMAIN';
Query OK, 0 rows affected (0.00 sec)

I won’t repeat the tests for correct and incorrect password handling, they still behave as before, but I will show you behaviour without specifying a password.

[patrick@WIN-CLIENT] C:\> mysql `
>> `
>> --user=patrick `
>> --enable-cleartext-plugin
ERROR 1045 (28000): Access denied for user 'patrick'@'' (using password: YES)
[patrick@WIN-CLIENT] C:\>

I’m not sure if anyone is using authentication_ldap_simple, given that it sends passwords unencrypted between the database and the domain controller, but if so, I’d suggest checking whether you are susceptible to this issue and, if you are, applying the latest patchset ASAP.

I’m also not sure whether this is specific to LDAP authentication with Active Directory or whether other directory services are affected. I also wonder whether authentication_ldap_sasl is affected, but I don’t have a configuration to check that.