Planet MySQL

Does the Version Number Matter?

Yes, it does! In this blog post, I am going to share my recent experiences with ProxySQL and how important the database software version number can be.


I was working on a migration to Percona XtraDB Cluster (PXC) with ProxySQL, fortunately on a staging environment first so we could catch any issues (like this one).

We installed Percona XtraDB Cluster and ProxySQL on the staging environment and repointed the staging application to ProxySQL. At first, everything looked great. We were able to do some application tests and everything looked good. I advised the customer to do more testing to make sure everything works well.

Something is wrong, but what?

A few days later the customer noticed that their application was not working properly.

We started investigating. Everything seemed well-configured, and the only thing we could see in the application log was the following:

2018-04-20 11:28:31,169 [ default-threads - 42] ERROR Error in lifecycle management : org.hibernate.StaleStateException :
Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1 { 103} (method: error)
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
    at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(
    at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(

Based on this error I still did not know what was wrong. Were some of the queries failing because of PXC, ProxySQL, or some other setting?

We redirected the application to one of the PXC nodes, and everything worked fine. We tried HAProxy as well, and again everything worked. We knew something around ProxySQL was causing the problem, but we still could not find it. Every query went through ProxySQL without any issue.

Debug log is our savior

The customer finally enabled the application debug logging so we could see which query was failing:

delete from TABLENAME where ID='11' and Timestamp ='2018-04-20 16:15:03';

I was confused at first: this is a simple query, what could be wrong? Let’s investigate it on the cluster. When I selected the data on the cluster, I got back zero results. That’s OK, maybe the row was already deleted?

For this investigation, slow query logging had been enabled with long_query_time set to 0 to log all queries. I checked the slow query log for queries like this one, and what I found helped me realize what the problem was:

delete from TABLENAME where ID=10 and Timestamp ='2018-04-20 11:17:22.35';
delete from TABLENAME where ID=24 and Timestamp ='2018-04-20 11:17:31.602';
delete from TABLENAME where ID=43 and Timestamp ='2018-04-20 11:18:13.2';
delete from TABLENAME where ID=22 and Timestamp ='2018-04-20 11:11:02.854';
delete from TABLENAME where ID=11 and Timestamp ='2018-04-20 11:21:57';
delete from TABLENAME where ID=64 and Timestamp ='2018-04-20 11:18:34';
delete from TABLENAME where ID=47 and Timestamp ='2018-04-20 10:38:35';
delete from TABLENAME where ID=23 and Timestamp ='2018-04-20 11:30:03';

I hope you see the difference! The first four lines have fractional seconds! At that time, the application was pointed directly at the cluster. So did ProxySQL cut off the fractional seconds? That would be a nasty bug.

I checked the application log again with the debug information, and I could see that the application does not even include fractional seconds in its queries when it points to ProxySQL. This is why the query was failing (it did not delete any rows): in the table all the rows had fractional seconds, but the queries were not using them.

So why does the application not use fractional seconds with ProxySQL?

First of all, fractional seconds support was introduced in MySQL 5.6.4. The application is Java-based, running on JBoss with Hibernate. I knew ProxySQL reports itself as MySQL 5.5. Maybe the application/connector reads the version number and makes decisions based on it?
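As a rough illustration of that theory (the function name and logic below are my own sketch, not the connector's actual code, and it assumes GNU `sort -V` for version comparison), a client making that decision only needs to compare the reported version string against 5.6.4:

```shell
# Hypothetical sketch of the decision a connector might make from the
# version string the server (or proxy) reports at handshake time.
# Fractional-second DATETIME support appeared in MySQL 5.6.4.
supports_fsp() {
  v=${1%%-*}                       # strip any "-log" style suffix
  # version-sort the two strings; if 5.6.4 is the oldest, $v >= 5.6.4
  oldest=$(printf '%s\n%s\n' "$v" 5.6.4 | sort -V | head -n1)
  [ "$oldest" = "5.6.4" ] && echo yes || echo no
}

supports_fsp 5.5.30   # what ProxySQL reported -> no
supports_fsp 5.7.21   # after the fix below    -> yes
```

A connector that sees "5.5.30" would reasonably strip fractional seconds from the SQL it generates, which is exactly the behavior observed here.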

It was quite easy to test this theory by just changing the version number in ProxySQL like this:

update global_variables set variable_value="5.7.21" where variable_name='mysql-server_version';
load mysql variables to runtime;
save mysql variables to disk;

The application had to be restarted (it was probably caching the previous settings), but after that everything worked as expected.

But be careful: ProxySQL will now report 5.7.21 for all hostgroups. What if you have multiple hostgroups with different MySQL versions? It would be nice if you could define this per hostgroup.


The solution was very easy, but finding the source of the problem took a long time. If you are planning to use ProxySQL, I would always recommend changing mysql-server_version to match the underlying MySQL server version, because who knows which connector or application checks the version and makes decisions based on it.

There is another example here where Marco Tusa had a very similar problem with a Java connector.

The post Does the Version Number Matter? appeared first on Percona Database Performance Blog.

Conference review Percona Live Santa Clara 2018

Percona Live Santa Clara, an annual event where open source database users, developers and enthusiasts come together, was held in April at the Santa Clara Convention Center. Pythian was well represented once more, with no fewer than five presentations and a total of nine attendees.

This year the conference was condensed to two days of breakout sessions and one day of tutorials. Though it was shorter in length, the organizers broadened their horizons by including not only MySQL and MongoDB tracks; this year they even put together a full PostgreSQL track. Moving from MySQL only to multiple technologies inspired this year’s tagline: polyglot persistence conference. The increase in the number of sessions allowed for a lot more options, but the condensed schedule made it much harder to choose which sessions to attend!

At last year’s European conference in Dublin I observed that ProxySQL and ClickHouse were hot topics, and I noticed the same again in 2018. ProxySQL won the MySQL Community Award for the second year in a row. René Cannaò, ProxySQL’s main author, was present to accept the award and confirmed his commitment to making the software even more feature-rich than it already is. We look forward to the upcoming 2.0 release.

The community dinner is traditionally held on the Tuesday night of the conference. Pythian is, and has been for many years now, the proud organizer of this fine event. Our open-source director, Derek Downey, had arranged a limited edition t-shirt for every attendee. The venue was once again Pedro’s restaurant, where a Mexican buffet was waiting for us. We had a good turnout and plenty of fun. Olé!

Last, but not least, Percona announced that they will say goodbye to Santa Clara and move the conference to another location next year. They kept the destination to themselves, but speculation in the community is in full swing. I’ve heard suggestions ranging from Hawaii to Toronto, and even Cleveland showed up on the list. We’ll have to wait and see what it is going to be.

But before we do, next up is Percona Live Europe which will be held in Frankfurt, Germany November 5-8 this year. Pythian is looking forward to meeting you there!

When “log sequence number X is in the future” is a non-issue

While I was seeding a bunch of new slaves for an infrastructure (after I had successfully seeded and started replication on several, and could verify that the databases were consistent), one of my fellow DBAs checked the mysql error log and said “INNODB CORRUPTION!!”.

All of my servers were spewing these errors.

2018-05-09T10:29:10.420943-05:00 0 [ERROR] InnoDB: Your database may be corrupt or you may have copied the InnoDB tablespace but not the InnoDB log files. Please refer to for information about forcing recovery.
2018-05-09T10:29:10.421717-05:00 0 [ERROR] InnoDB: Page [page id: space=XXXX, page number=XXXXXXXX] log sequence number XXXXXXXXXXX is in the future! Current system log sequence number XXXXXXXXX.

But no. I do not have InnoDB corruption in this case. I discovered a brand new gripe with MySQL and InnoDB. This is not a ‘bug’ but more of an ‘unfortunate feature’, and I invite others to tell me that I am wrong about this. How to reproduce:

  • Configure MySQL so that redo logs are in a separate partition from the datadir
  • Stop mysql, normally, cleanly
  • Back up and transfer and restore the datadir onto a new slave database system by whatever binary method you prefer. (in my case it was to create a zfs snapshot on the datadir partition and start MySQL back up and send that snapshot to a new slave).
  • The new slave is configured identically.  It:
    • has a fresh MySQL install.
    • has been started up, previously.
    • has been shut down, cleanly.
    • has redo logs in a separate partition (corresponding to the fresh installation).

If you are a MySQL DBA, you now know what I did. I had a clean/consistent MySQL data directory from one server paired with redo logs from a brand new server. The brand new server’s MySQL was also shut down cleanly, though, so nothing was replayed. Because innodb_log_file_size was the same on both servers, MySQL started up. But it also decided to adopt the system LSN from the InnoDB log file. Thus we had the nasty issue of finding future LSNs in the tablespace for every transaction, and we were now spewing fountains of MySQL error logs.

In this case – this is not a “real issue”.  There is no corruption.  MySQL/ InnoDB is just spewing errors.

To keep this from occurring, all I have to do (before starting the seeded slave for the first time) is delete the ib_logfile* files on the destination redo-log partition and let MySQL create them on start. You only get one chance to do it right. When MySQL creates the redo logs fresh, they contain the proper system LSN, obviously copied back out from ibdata1. If you try nuking the log files and letting them be recreated AFTER the first start, it is too late: the too-low system LSN is now in ibdata1 and will still be used.

There is a viable fix without doing “crazy stuff”.  I can shut down the seed source machine and grab the ib_logfiles, copy them to the new slave (after clean shutdown) and then start the new slave back up again.  Voila.  LSNs not in the future anymore.
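The mechanics of that fix can be sketched as follows. This is a dry run on throwaway directories (the mktemp paths stand in for the real redo-log partitions); on the real systems both mysqld instances must be shut down cleanly before touching any files:

```shell
# Stand-ins for the two redo-log partitions (placeholders, not real paths).
SRC=$(mktemp -d)   # seed source: logs whose system LSN matches the datadir
DST=$(mktemp -d)   # new slave: fresh-install logs with a too-low LSN
touch "$SRC/ib_logfile0" "$SRC/ib_logfile1"
touch "$DST/ib_logfile0" "$DST/ib_logfile1"

# 1. (on both hosts) stop mysqld cleanly -- simulated here
# 2. discard the fresh server's redo logs
rm -f "$DST"/ib_logfile*
# 3. install the seed source's redo logs in their place
cp -a "$SRC"/ib_logfile* "$DST"/
# 4. start mysqld on the slave -- it now sees a matching system LSN

ls "$DST"
```

The same file operations apply whether the redo logs live on a dedicated partition or alongside the datadir; the point is only that the logs and the tablespace must agree on the system LSN.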

My opinion is:

  • MySQL should not start up if the log LSN differs by more than some <threshold> from the one in ibdata1, so that you get an opportunity to delete the redo logs and let them be recreated.  Why not? If you know your MySQL snapshot was created after a clean shutdown, this is fine.
  • OR if the redo logs don’t contain any checkpoints that need replaying against the server, the system LSN from ibdata1 should be used (like it does when it starts without any redo logs).

Anyway, I saw many posts on the web about this.  Your database is NOT always corrupt when you see this error message in your logs.

You don’t need to use some GDB hack with a perfectly quiet database.  You can use some of the hacks out there for writing enough data to advance the LSN if you want though.

BUT one solution that I couldn’t find anywhere is this:

If you are seeding machines and your seed source is still viable – just shut down MySQL on that seed source and grab its redo logs.  Shut down the destination cleanly as well and put the ib_logfiles from the source into place on the destination and restart.  Voila, the errors disappear.

You do have the choice of just “living with” the errors, knowing that they are complete BS.  But that leaves them waiting to be discovered by someone who doesn’t know what was done, and the panic starts all over again.

End of gripe.

MySQL 8.0: InnoDB now supports Instant ADD COLUMN

Instant DDL has been one of the most requested InnoDB features for a very long time. With ever larger and rapidly growing datasets the ability to do DDL instantly is a must have feature in any web scale database.  Developers constantly need to add new columns to meet the constantly changing business requirements.  …

Deploying PMM on Linode: Your $5-Per-Month Monitoring Solution

In this blog, I will show you how to install PMM on Linode as a low-cost database monitoring solution.

Many of my friends use Linode to run their personal sites, as well as small projects. While Linode is no match for Big Cloud providers in features, it is really wonderful when it comes to cost and simplicity: a Linode “nanode” instance offers 1GB of memory, 1 core, 20GB of storage and 1TB of traffic for just $5 a month.

A single Linode instance is powerful enough to use with Percona Monitoring and Management (PMM) to monitor several systems, so I use Linode a lot when I want to demonstrate PMM deployment through Docker, rather than Amazon Marketplace.

Here are step-by-step instructions to get you started with Percona Monitoring and Management (PMM) on Linode in five minutes (or less):

Step 1: Pick the Linode type and location, then launch it.

Step 2: Name your Linode

This step is optional and is not PMM-related, but you may want to give your Linode an easy-to-remember name instead of something like “linode7796908”. Click on Linode Name and then on “Settings” and enter a name in “Linode Label”.

Step 3:  Deploy the Image

Click on Linode Name and then on “Deploy an Image”.

I suggest choosing the latest Ubuntu LTS version and allocating 512MB for the swap file, especially on a Linode with a small amount of memory. Remember to set a strong root password, as Linode allows root password login by default from any IP.

Step 4: Boot Linode

Now that the image is prepared, you need to boot your Linode. Click on the Boot button to do that:

Step 5: Login to the system and install Docker

Use your favorite SSH client to log in to the Linode you created, using the “root” user and the password you set in Step 3, and install Docker:

apt update
apt install docker.io

Step 6: Run PMM Server

Here are detailed instructions to install the PMM Server on Docker. Below are the commands to do basic installation:

docker pull percona/pmm-server:latest

docker create \
  -v /opt/prometheus/data \
  -v /opt/consul-data \
  -v /var/lib/mysql \
  -v /var/lib/grafana \
  --name pmm-data \
  percona/pmm-server:latest /bin/true

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  percona/pmm-server:latest

Note: This deploys PMM Server without authentication. For anything but test usage, you should set a password by following instructions on this page.

You’re done!

You’ve now installed PMM Server and you can see it monitoring itself by going to the server IP with a browser.

Now you can go ahead and install the PMM Client on the nodes you want to monitor!

The post Deploying PMM on Linode: Your $5-Per-Month Monitoring Solution appeared first on Percona Database Performance Blog.

MySQL Performance : 8.0 RW & Binlog impact

In the previous article I've intentionally skipped the topic related to Binlog impact on MySQL 8.0 Performance because it's not a short story, nor a simple one..

In fact, for most people the binlog in MySQL generally represents additional overhead, and historically that was true. Since MySQL 5.6 the Binlog Group Commit (BGC) feature has been available, and it did rather well, decreasing the gap between "binlog=OFF" and "binlog=ON sync_bin=1". However, storage vendors are making flash drives faster and faster (and cheaper) from year to year.. And by the time we delivered MySQL 5.7, the scope of the binlog impact had shifted with code and flash storage improvements -- the main impact was no longer coming from the I/O operations related to the binlog, but from the binlog code itself ! -- indeed, this may sound odd initially, but let's go to the "pictures" to see it better in detail ;-))

So far, I'll reuse the same Skylake server as before, but reduce it to 1S (one CPU socket, 24cores-HT) -- so you don't believe it's all because I'm using "big HW" (even though 2S HW is the most common server config today in all the data centers where people run MySQL, and 16-24 cores per CPU socket is today's "commodity HW") -- but well, let's stay with 24cores-HT for the following experiment ;-)) And the storage will be the same Optane drive with an EXT4 filesystem.

I'll now replay the same Sysbench OLTP_RW and Update-NoKEY tests as before, but using only 1S (24cores-HT) and the same my.conf as previously, with 4 additional variations :

  • 1) binlog=OFF
  • 2) binlog=ON sync_bin=0
  • 3) binlog=ON sync_bin=1000
  • 4) binlog=ON sync_bin=1
and the following graphs show the obtained results; each graph represents a given test workload growing with the number of concurrent user sessions from 1, 2, 4, .. up to 1024 :

Sysbench OLTP_RW 10Mx8-tables @MySQL-5.7
Comments :
  • as you can see, on higher load enabling the binlog helps reach a higher TPS !
  • (well, in fact it helps not to gain, but rather not to lose TPS on high load)
  • but in any case, you're seeing a positive impact here ! ;-))
  • and you can understand that in such a case it was difficult to blame the binlog code back in the MySQL 5.7 days ;-))

Especially since an even more important "positive impact" shows up with the much more aggressive writes of the Update-NoKey workload :

Sysbench Update-NoKey 10Mx8-tables @MySQL-5.7
Comments :
  • as you can see, enabling the binlog helps MySQL 5.7 a lot to not lose performance on high load..
  • and this is all because it helps to "hide" the REDO log contention we have in 5.7

And it helped well, until we improved the whole REDO log code in MySQL 8.0 ;-)) -- since then, the same story looks completely different :

Sysbench OLTP_RW 10Mx8-tables @MySQL-8.0
Comments :
  • only negative impact from Binlog in MySQL 8.0
  • and 20K TPS instead of 25K TPS is a pretty huge difference..

Sysbench Update-NoKey 10Mx8-tables @MySQL-8.0
Comments :
  • but since the writes are more aggressive in this workload, the impact is even more dramatic..
  • more than a 2x TPS drop..

In case you're still suspecting that the above is only valid for Skylake and Optane, here are similar results from a 1S Broadwell 22cores-HT server and another vendor's flash drive :

MySQL 5.7 :
MySQL 8.0 :

This is the reason why the binlog code in MySQL 8.0 must be fixed asap, and I really hope it'll happen soon ! -- it seems the whole story around the binlog must be revisited and re-implemented..

BTW, if you have a patch, you're more than welcome ! ;-))


New Webinar on How to Migrate to Galera Cluster for MySQL & MariaDB

Join us on Tuesday May 29th for this new webinar with Severalnines Support Engineer Bart Oles, who will walk you through what you need to know in order to migrate from standalone or a master-slave MySQL/MariaDB setup to Galera Cluster.

When considering such a migration, plenty of questions typically come up, such as: how do we migrate? Does the schema or application change? What are the limitations? Can a migration be done online, without service interruption? What are the potential risks?

Galera Cluster has become a mainstream option for high availability MySQL and MariaDB. And though it is now known as a credible replacement for traditional MySQL master-slave architectures, it is not a drop-in replacement.

It has some characteristics that make it unsuitable for certain use cases; however, most applications can still be adapted to run on it.

The benefits are clear: multi-master InnoDB setup with built-in failover and read scalability.

Join us on May 29th for this walk-through on how to migrate to Galera Cluster for MySQL and MariaDB.

Sign up below!

Date, Time & Registration Europe/MEA/APAC

Tuesday, May 29th at 09:00 BST / 10:00 CEST (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, May 29th at 09:00 PDT (US) / 12:00 EDT (US)

Register Now

Agenda:

  • Application use cases for Galera
  • Schema design
  • Events and Triggers
  • Query design
  • Migrating the schema
  • Load balancer and VIP
  • Loading initial data into the cluster
  • Limitations:
    • Cluster technology
    • Application vendor support
  • Performing Online Migration to Galera
  • Operational management checklist
  • Belts and suspenders: Plan B
  • Demo

Bartlomiej Oles is a MySQL and Oracle DBA with over 15 years’ experience managing highly available production systems at IBM, Nordea Bank, Acxiom, Lufthansa, and other Fortune 500 companies. In the past five years, his focus has been on building and applying automation tools to manage multi-datacenter database environments.

We look forward to “seeing” you there and to insightful discussions!

Tags:  MySQL MariaDB galera percona mysql cluster galera cluster percona xtradb webinar

MySQL master discovery methods, part 3: app & service discovery

This is the third in a series of posts reviewing methods for MySQL master discovery: the means by which an application connects to the master of a replication tree. Moreover, the means by which, upon master failover, it identifies and connects to the newly promoted master.

These posts are not concerned with the manner by which the replication failure detection and recovery take place. I will share orchestrator specific configuration/advice, and point out where cross DC orchestrator/raft setup plays part in discovery itself, but for the most part any recovery tool such as MHA, replication-manager, severalnines or other, is applicable.

We discuss asynchronous (or semi-synchronous) replication, a classic single-master-multiple-replicas setup. A later post will briefly discuss synchronous replication (Galera/XtraDB Cluster/InnoDB Cluster).

App & service discovery

Part 1 and part 2 presented solutions where the app remained ignorant of the master's identity. This part takes the complete opposite direction and gives the app ownership of master access.

We introduce a service discovery component. Commonly known options are Consul, ZooKeeper and etcd: highly available stores offering key/value (K/V) access, leader election, or full-blown service discovery & health checks.

We satisfy ourselves with K/V functionality. A key would be mysql/master/cluster1 and a value would be the master's hostname/port.

It is the app's responsibility at all times to fetch the identity of the master of a given cluster by querying the service discovery component, thereby opening connections to the indicated master.

The service discovery component is expected to be up at all times and to contain the identity of the master for any given cluster.
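As a sketch of the app-side read (the key layout follows the mysql/master/cluster1 convention above; kv_get and the hostname it returns are hypothetical stand-ins for a real client call such as a Consul or etcd lookup):

```shell
# Stub K/V lookup, standing in for e.g. a Consul K/V read of the key.
kv_get() {
  case "$1" in
    mysql/master/cluster1) echo "db-cluster1-master.example.com:3306" ;;
    *) return 1 ;;
  esac
}

# Fetch the master's identity, then split host and port before connecting.
master=$(kv_get mysql/master/cluster1) || { echo "no master recorded"; exit 1; }
host=${master%:*}
port=${master##*:}
echo "opening connections to $host on port $port"
```

The app repeats this lookup (or watches the key, as below) so that a failover only requires the recovery tool to rewrite one value.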

A non planned failover illustration #1

Master M has died. R gets promoted in its place. Our recovery tool:

  • Updates the service discovery component, key is mysql/master/cluster1, value is R's hostname.


Clients:

  • Listen on K/V changes and recognize that the master's value has changed.
  • Reconfigure/refresh/reload/do whatever it takes to speak to the new master and to drop connections to the old master.
A non planned failover illustration #2

Master M gets network isolated for 10 seconds, during which time we failover. R gets promoted. Our tool (as before):

  • Updates the service discovery component, key is mysql/master/cluster1, value is R's hostname.

Clients (as before):

  • Listen on K/V changes and recognize that the master's value has changed.
  • Reconfigure/refresh/reload/do whatever it takes to speak to the new master and to drop connections to the old master.
  • Any changes not taking place in a timely manner imply that some connections still use the old master M.
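The client side of "listen on K/V changes" can be illustrated minimally like this (the kv_get stub and its hostnames are hypothetical; its answer changes once to simulate the failover, and real clients would use blocking watches or subscriptions rather than this bounded polling loop):

```shell
# Stub lookup: returns the old master twice, then the new one, to
# simulate a failover happening while the client polls the key.
n=0
kv_get() {
  n=$((n+1))
  if [ "$n" -le 2 ]; then REPLY_VAL="master-old:3306"; else REPLY_VAL="master-new:3306"; fi
}

last=""
for _ in 1 2 3 4; do           # bounded here for the demo; a real loop runs forever
  kv_get mysql/master/cluster1 # sets REPLY_VAL (no subshell, so state persists)
  cur=$REPLY_VAL
  if [ "$cur" != "$last" ]; then
    echo "master is now $cur -- drop old connections, reconnect"
    last=$cur
  fi
done
```

Only the "value changed" branch triggers a reconnect, which is exactly where stale-connection handling has to live in the app.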
Planned failover illustration

We wish to replace the master, for maintenance reasons. We successfully and gracefully promote R.

  • App should start connecting to R.

The app is the complete owner. This raises a few concerns:

  • How does a given app refresh and apply the change of master such that no stale connections are kept?
    • Highly concurrent apps may be more difficult to manage.
  • In a polyglot app setup, you will need all clients to use the same setup: implement the same listen/refresh logic for Ruby, golang, Java, Python, Perl and, notably, shell scripts.
    • The latter do not play well with such changes.
  • How can you validate that the change of master has been detected by all app nodes?

As for the service discovery:

  • What load will you be placing on your service discovery component?
    • I was familiar with a setup where there were so many apps, app nodes and app instances that the number of connections was too much for the service discovery component. In that setup caching layers were created, which introduced their own consistency problems.
  • How do you handle service discovery outage?
    • A reasonable approach is to keep using the last known master identity should service discovery be down. This, again, plays better with higher-level applications, but less so with scripts.

It is worth noting that this setup does not suffer from geographical limitations to the master's identity. The master can be anywhere; the service discovery component merely points out where the master is.

Sample orchestrator configuration

An orchestrator configuration would look like this:

"ApplyMySQLPromotionAfterMasterFailover": true,
"KVClusterMasterPrefix": "mysql/master",
"ConsulAddress": "",
"ZkAddress": "srv-a,srv-b:12181,srv-c",
"PostMasterFailoverProcesses": [
  "/just/let/me/know about failover on {failureCluster}",
],

In the above:

  • If ConsulAddress is specified, orchestrator will update given Consul setup with K/V changes.
  • As of 3.0.10, ZooKeeper (via ZkAddress) is not yet supported by orchestrator.
  • PostMasterFailoverProcesses is here just to point out that hooks are not strictly required for the operation to run.

See orchestrator configuration documentation.

MySQL Performance : 8.0 and Sysbench OLTP_RW / Update-NoKEY

This post is following previously published OLTP_RO results for MySQL 8.0 (latin1 and utf8mb4 charsets), and now is focusing on Sysbench RW workloads, particularly "mixed" OLTP_RW and Update-NoKey :

  • OLTP_RW : while this workload has writes, it's mainly driven by reads (OLTP_RO + 2 updates + delete + insert)
  • Update-NoKey : aggressively bombarding UPDATE queries (but with no changes on indexed columns)
The same 2S Skylake server was used as in previous tests :

Server configuration :
  • OS : Oracle Linux 7.4
  • CPU : 48cores-HT Intel Skylake 2.7Ghz (2CPU sockets (2S), Intel(R) Xeon(R) Platinum 8168 CPU)
  • RAM: 172GB
  • Storage : x2 Intel Optane flash drives (Intel (R) Optane (TM) SSD P4800X Series)
    • volume : RAID-0 via MDADM
    • filesystem : EXT4
And I'm mostly following the same test conditions as previously explained for MySQL 5.7 GA -- similar variations in options (spin delay = 6;24;96 / thread concurrency = 0;64;128 / taskset = 1S/2S, etc.) to let each engine show its best possible TPS/QPS results.

However, as running the tests with all these config variations takes a significant amount of time, I've slightly reduced the scope of the investigation to the following :
  • trx_commit = 1 : along with our work on the InnoDB REDO re-design we not only fixed the biggest related bottleneck, but also discovered and partially fixed several other issues around REDO (also mostly historical, but still) -- keeping all this in mind, I'd rather suggest today that you use "1" (flushing the REDO log on COMMIT) whenever possible -- especially since, with all the progress we're seeing in HW/storage in recent years, the penalty of "1" with 8.0 is much less dramatic than before compared to "2" (flush the REDO log once per second) -- and also a big enough total size for the whole REDO space (currently I'm using x16 or x32 log files of 1GB each); more about this later..
  • PFS = off : I'm intentionally switching Performance Schema OFF, just because it's not a bottleneck but pure "overhead" (as many other things are as well) -- my main target in all benchmark investigations is "to see what our next bottleneck is", and as HW resources are always limited, any additional overhead will help to "hide" the real problem.. PFS overhead is part of MySQL QA testing, and any overhead higher than 5% for "default instrumentation" is considered a bug (mind to file a bug if you see it bigger in your case!) -- while on the other side many users are asking to see more and more instrumentation enabled by default regardless of the overhead (and this "balance" between overhead and the benefit of built-in instrumentation can generally only be judged case by case).
  • Checksums = off : this is also pure "overhead" and not a bottleneck; since CRC32 is supported, you'll generally not hit any problem..
  • Charset = latin1 : while most interest is moving to UTF8, I'm continuing to test with "latin1" for the same reason -- the UTF8-vs-latin1 "overhead" may hide more important problems from you (while using UTF8 in 8.0 gives you a direct gain vs. any previous MySQL release, I'd rather point out problems than hide them)..
  • DoubleWrite = off : this, however, is a big problem and a big bottleneck, but the fix was already developed by Sunny two years ago now; we worked on it together, and I can confirm you'll not see any TPS drop as long as your storage is able to follow (as you're "writing twice", i.e. x2 times more) -- but the code is still NOT part of 8.0 because "there is always something more important to do" ;-)) -- please feel free to urge Sunny to push the re-designed DoubleWrite code to 8.0 asap !! (Sunny's twitter : @sunbains) -- while for my part I need to see "what comes after" once the new code is delivered..
  • Binlog = off : this is another big problem, and at the same time both a bottleneck and an overhead.. -- but this one needs very particular attention, so I'll skip it here and say more later..

The full list of all config options can always be found at the end of the article; meanwhile, here are the final results :

Sysbench OLTP_RW 10Mx8-tables TPS
Comments :
  • over 45K TPS with MySQL 8.0 !
  • around 35K TPS with MySQL 5.7 -- interesting that similar result was obtained in the past with 5.7 on 4S 72cores-HT Broadwell server, and now 2S Skylake 48cores-HT is just enough to get the same ;-))
  • NOTE : and we're still far from the max possible TPS to get from this HW ! => work in progress..
Looking at the same result expressed in QPS, we can see that we're getting closer and closer to the 1M QPS obtained on the same server with pure OLTP_RO :

Sysbench Update-NoKey TPS
Comments :
  • near 250K TPS with MySQL 8.0 !
  • and indeed, there was a huge performance drop in 5.7 comparing to 5.6
  • extremely happy to see this fixed finally with 8.0 ;-))
  • (and no idea what to advise MariaDB, who just adopted InnoDB from 5.7)...

Then, for those who are curious about "overheads", here are the same test results for MySQL 8.0 only, with PFS=on and checksums=crc32 :

Update-NoKey :
As you can see, the difference is really small and remains under 5% (in case you expected to see something bigger here ;-))

However :
  • even though the results with MySQL 8.0 look better, we're still far from scaling on Writes !
  • work is still in progress..

the full list of configuration options I've used :
# general
max_connections=4000
table_open_cache=8000
table_open_cache_instances=16
max_prepared_stmt_count=512000
back_log=1500
default_password_lifetime=0
default_authentication_plugin=mysql_native_password
character_set_server=latin1
collation_server=latin1_swedish_ci
skip-character-set-client-handshake
ssl=0
performance_schema=OFF
skip_log_bin=1
transaction_isolation=REPEATABLE-READ

# files
innodb_file_per_table
innodb_log_file_size=1024M
innodb_log_files_in_group=32
innodb_open_files=4000

# buffers
innodb_buffer_pool_size=128000M
innodb_buffer_pool_instances=16
innodb_log_buffer_size=64M

# tune
innodb_doublewrite=0
innodb_thread_concurrency=0
innodb_flush_log_at_trx_commit=1
innodb_max_dirty_pages_pct=90
innodb_max_dirty_pages_pct_lwm=10
join_buffer_size=32K
sort_buffer_size=32K
innodb_use_native_aio=1
innodb_stats_persistent=1
innodb_spin_wait_delay=6
innodb_max_purge_lag_delay=300000
innodb_max_purge_lag=0
innodb_flush_method=O_DIRECT
innodb_checksum_algorithm=none
innodb_io_capacity=10000
innodb_io_capacity_max=40000
innodb_lru_scan_depth=9000
innodb_change_buffering=none
innodb_read_only=0
innodb_page_cleaners=16
innodb_undo_log_truncate=off

# perf special
innodb_adaptive_flushing=1
innodb_flush_neighbors=0
innodb_read_io_threads=16
innodb_write_io_threads=16
innodb_purge_threads=4
innodb_adaptive_hash_index=0

# monitoring
innodb_monitor_enable='%'
And if you wish to reproduce the same tests, you can find the MySQL 8.0 starter note here, and test related instructions from here and then here.

Thank you for using MySQL !


MySQL InnoDB Cluster Tutorial Videos

We just published a series of 4 video tutorials about our native, full-stack, out-of-the-box and easy to use High Availability and Scaling solution for MySQL: InnoDB Cluster!

If you want to see the new MySQL Shell 8.0 in action to create a MySQL InnoDB Cluster and try it out yourself, just take a look!…

MySQL master discovery methods, part 2: VIP & DNS

This is the second in a series of posts reviewing methods for MySQL master discovery: the means by which an application connects to the master of a replication tree. Moreover, the means by which, upon master failover, it identifies and connects to the newly promoted master.

These posts are not concerned with the manner by which the replication failure detection and recovery take place. I will share orchestrator specific configuration/advice, and point out where a cross DC orchestrator/raft setup plays a part in discovery itself, but for the most part any recovery tool such as MHA, replication-manager, severalnines or others is applicable.

We discuss asynchronous (or semi-synchronous) replication, a classic single-master-multiple-replicas setup. A later post will briefly discuss synchronous replication (Galera/XtraDB Cluster/InnoDB Cluster).

Master discovery via VIP

In part 1 we saw that one of the main drawbacks of DNS discovery is the time it takes for the apps to connect to the promoted master. This is the result of both DNS deployment time as well as the clients' TTL.

A quicker method is offered: use of VIPs (Virtual IPs). As before, apps would connect to well-known cluster hostnames; however, these would now resolve to specific VIPs.

Say such a hostname resolves to a particular VIP. We let this address float between servers. Each server has its own IP (say 10.20.0.XXX) but could also potentially claim the VIP.

VIPs can be assigned by switches, and I will not dwell on the internals, because I'm not a network expert. However, the following holds:

  • Acquiring a VIP is a very quick operation.
  • Acquiring a VIP must take place on the acquiring host.
  • A host may be unable to acquire a VIP should another host hold the same VIP.
  • A VIP can only be assigned within a bounded space: hosts connected to the same switch; hosts in the same Data Center or availability zone.

A non planned failover illustration #1

Master M has died, the box had a power failure. R gets promoted in its place. Our recovery tool:

  • Attempts to connect to M so that it can give up the VIP. The attempt fails because M is dead.
  • Connects to R and instructs it to acquire the VIP. Since M is dead there is no objection, and R successfully grabs the VIP.
  • Any new connections immediately route to the new master R.
  • Clients with existing connections to M cannot connect; they issue retries and immediately route to R.
A non planned failover illustration #2

Master M gets network isolated for 30 seconds, during which time we fail over. R gets promoted. Our tool:

  • Attempts to connect to M so that it can give up the VIP. The attempt fails because M is network isolated.
  • Connects to R and instructs it to acquire the VIP. Since M is network isolated there is no objection, and R successfully grabs the VIP.
  • Any new connections immediately route to the new master R.
  • Clients with existing connections to M cannot connect; they issue retries and immediately route to R.
  • 30 seconds later M reappears, but no one pays any attention.
A non planned failover illustration #3

Master M's box is overloaded. It is not responsive to new connections but may slowly serve existing connections. Our tool decides to fail over:

  • Attempts to connect to M so that it can give up the VIP. The attempt fails because M is very loaded.
  • Connects to R and instructs it to acquire the VIP. Unfortunately, M hasn't given up the VIP and still shows up as owning it.
  • All existing and new connections keep on routing to M, even as R is the new master.
  • This continues until some time has passed and we are able to manually grab the VIP on R, or until we forcibly network isolate M or forcibly shut it down.

We suffer outage.

Planned failover illustration

We wish to replace the master, for maintenance reasons. We successfully and gracefully promote R.

  • M is available and responsive; we ask it to give up the VIP, which it does.
  • We ask R to grab the VIP, which it does.
  • All new connections route to R.
  • We may still see old connections routing to M. We can forcibly network isolate M to break those connections so as to cause reconnects, or restart apps.

As with DNS discovery, the apps are never told of the change. They may be forcibly restarted though.

Grabbing a VIP is a quick operation. However, consider:

  • It is not guaranteed to succeed. I have seen it fail in various situations.
  • Since releasing/acquiring of VIP can only take place on the demoted/promoted servers, respectively, our failover tool will need to:
    • Remote SSH onto both boxes, or
    • Remote exec a command on those boxes
  • Moreover, the tool will do so sequentially: first it must connect to the demoted master to give up the VIP, then to the promoted master to acquire it.
  • This means the time at which the new master grabs the VIP depends on how long it takes to connect to the old master to give up the VIP. Since trouble on the old master is what triggered the failover in the first place, we can expect connections to it to fail or to be slow.
  • An alternative exists, in the form of Pacemaker. Consider Percona's Replication Manager guide for more insights. Pacemaker provides a single point of access from where the VIP can be moved, and behind the scenes it will communicate to relevant nodes. This makes it simpler on the failover solution configuration.
  • We are constrained by physical location.
  • It is still possible for existing connections to keep on communicating with the demoted master, even after the VIP has been moved.
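The release-then-acquire sequence above can be sketched as follows. This is an illustrative Python sketch, not orchestrator code; the function names are hypothetical stand-ins for the remote SSH commands. The key point is that a dead or isolated old master must not block promotion:

```python
def failover_vip(release_vip_on_old, acquire_vip_on_new):
    """Move the VIP from the demoted master to the promoted one.

    The release step runs first and sequentially, so it is wrapped in an
    exception handler: a dead or isolated old master (illustrations #1
    and #2) must not block the promotion. If the old box is merely
    overloaded and still holds the VIP (illustration #3), it is the
    acquisition below that fails, and it must be retried or escalated.
    """
    try:
        release_vip_on_old()   # e.g. ssh old-master 'sudo ifconfig vip-if down'
    except (OSError, TimeoutError):
        pass                   # proceed: the old master may simply be dead
    acquire_vip_on_new()       # e.g. ssh new-master 'sudo ifconfig vip-if up'

# Stubs standing in for the remote executions (illustration #1: M is dead):
log = []
def release_on_dead_master():
    raise TimeoutError("no route to host")
def acquire_on_promoted_replica():
    log.append("vip acquired on R")

failover_vip(release_on_dead_master, acquire_on_promoted_replica)
print(log)  # → ['vip acquired on R']
```

Note that the sketch deliberately swallows the release failure: as the illustrations show, this is correct for a dead master but is exactly what leaves you exposed in the overloaded-master case.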
VIP & DNS combined

Within a physical location, we could choose to use a VIP. But should we need to fail over to a server in another DC, we could combine this with the DNS discovery discussed in part 1.

We can expect to see faster failover within the local physical location, and longer failover times to a remote location.

Sample orchestrator configuration

What kind of remote exec method will you have? In this sample we will use remote (passwordless) SSH.

An orchestrator configuration would look like this:

"ApplyMySQLPromotionAfterMasterFailover": true,
"PostMasterFailoverProcesses": [
  "ssh {failedHost} 'sudo ifconfig the-vip-interface down'",
  "ssh {successorHost} 'sudo ifconfig the-vip-interface up'",
  "/do/what/you/gotta/do to apply dns change for {failureClusterAlias} to {successorHost}"
],

In the above:

  • Replace SSH with any remote exec method you may use.
    • But you will need to set up the access/credentials for orchestrator to run those operations.
  • Replace ifconfig with service quagga stop/start or any method you use to release/grab VIPs.

See orchestrator configuration documentation.

Where to ask questions about MyRocks and RocksDB

Best places to discuss MyRocks:
Other places to discuss MyRocks:
  • MyRocks group for FB MySQL - this existed before MyRocks made it into Percona and MariaDB. You are better off using the MariaDB and Percona groups.
  • Bugs for MyRocks in FB MySQL are here. Again, if using Percona or MariaDB please use their forums for bugs.

How Binary Logs Affect MySQL 8.0 Performance

As part of my benchmarks of binary logs, I’ve decided to check how the performance of the recently released MySQL 8.0 is affected in similar scenarios, especially as binary logs are enabled by default. It is also interesting to check how MySQL 8.0 performs against the claimed performance improvements in the redo log subsystem.

I will use a similar setup as in my last blog with MySQL 8.0, using the utf8mb4 charset.

I have a few words about MySQL 8.0 tuning. Dimitri recommends in his blog posts using innodb_undo_log_truncate=off and innodb_doublewrite=0. However, in my opinion, using these settings is like participating in a car race without working brakes: you will drive very fast, but it will end badly. So, contrary to Dimitri’s recommendations, I used innodb_undo_log_truncate=on and innodb_doublewrite=1.

Servers Comparison

For the first run, let’s check the results without binary logs vs. with binary logs enabled, but with sync_binlog=1 for Percona Server for MySQL 5.7 vs. MySQL 8.0.

In tabular form:

Binary log   Buffer pool, GB   MYSQL8     PS57       Ratio PS57/MySQL8
binlog       5                 768.0375   771.5532   1.00
binlog       10                1224.535   1245.496   1.02
binlog       20                1597.48    1625.153   1.02
binlog       30                1859.603   1979.328   1.06
binlog       40                2164.329   2388.804   1.10
binlog       50                2572.827   2942.082   1.14
binlog       60                3158.408   3528.791   1.12
binlog       70                3883.275   4535.281   1.17
binlog       80                4390.69    5246.567   1.19
nobinlog     5                 788.9388   783.155    0.99
nobinlog     10                1290.035   1294.098   1.00
nobinlog     20                1745.464   1743.759   1.00
nobinlog     30                2109.301   2158.267   1.02
nobinlog     40                2508.28    2649.695   1.06
nobinlog     50                3061.196   3334.766   1.09
nobinlog     60                3841.92    4168.089   1.08
nobinlog     70                4772.747   5140.316   1.08
nobinlog     80                5727.795   5947.848   1.04


Binary Log Effect

In tabular form:

Buffer pool, GB   Server   binlog     nobinlog    Ratio nobinlog/binlog
5                 MYSQL8   768.0375   788.9388    1.03
5                 PS57     771.5532   783.155     1.02
10                MYSQL8   1224.535   1290.0352   1.05
10                PS57     1245.496   1294.0983   1.04
20                MYSQL8   1597.48    1745.4637   1.09
20                PS57     1625.153   1743.7586   1.07
30                MYSQL8   1859.603   2109.3005   1.13
30                PS57     1979.328   2158.2668   1.09
40                MYSQL8   2164.329   2508.2799   1.16
40                PS57     2388.804   2649.6945   1.11
50                MYSQL8   2572.827   3061.1956   1.19
50                PS57     2942.082   3334.7656   1.13
60                MYSQL8   3158.408   3841.9203   1.22
60                PS57     3528.791   4168.0886   1.18
70                MYSQL8   3883.275   4772.7466   1.23
70                PS57     4535.281   5140.316    1.13
80                MYSQL8   4390.69    5727.795    1.30
80                PS57     5246.567   5947.8477   1.13



It seems that binary logs have quite an effect on MySQL 8.0: we see up to a 30% performance penalty, as opposed to 13% for Percona Server for MySQL 5.7.

In general, for in-memory workloads, Percona Server for MySQL 5.7 outperforms MySQL 8.0 by 10-20% with binary logs enabled, and 4-9% without binary logs enabled.

For io-bound workloads (buffer pool size <= 30GB), the performance numbers for Percona Server for MySQL and MySQL are practically identical.
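The 30% and 13% figures can be read straight off the 80GB row of the table above (the ratio of nobinlog to binlog throughput, minus one); a quick check of the arithmetic:

```python
# Median tps at buffer pool = 80GB, copied from the table above
mysql8_binlog, mysql8_nobinlog = 4390.69, 5727.795
ps57_binlog, ps57_nobinlog = 5246.567, 5947.8477

mysql8_penalty = round(mysql8_nobinlog / mysql8_binlog - 1, 2)  # 30% penalty
ps57_penalty = round(ps57_nobinlog / ps57_binlog - 1, 2)        # 13% penalty
print(mysql8_penalty, ps57_penalty)
```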

Hardware spec

Supermicro server:

  • Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
  • 2 sockets / 28 cores / 56 threads
  • Memory: 256GB of RAM
  • Storage: SAMSUNG  SM863 1.9TB Enterprise SSD
  • Filesystem: ext4/xfs
  • Percona-Server-5.7.21-20
  • OS: Ubuntu 16.04.4, kernel 4.13.0-36-generic
Extra Raw Results, Scripts and Config

My goal is to provide fully repeatable benchmarks. I have shared all scripts and settings I used in the following GitHub repo:


The post How Binary Logs Affect MySQL 8.0 Performance appeared first on Percona Database Performance Blog.

How Binary Logs (and Filesystems) Affect MySQL Performance

I want to take a closer look at MySQL performance with binary logs enabled on different filesystems, especially as MySQL 8.0 comes with binary logs enabled by default.

As part of my benchmarks of the MyRocks storage engine, I noticed an unusual variance in throughput for the InnoDB storage engine, even though we spent a lot of time making it as stable as possible in Percona Server for MySQL. In the end, the culprit was the enabled binary log. There is also always the question, “If there is a problem with EXT4, does XFS perform differently?” To answer that, I will repeat the same benchmark on the EXT4 and XFS filesystems.

You can find our previous experiments with binary logs here:

Benchmark Setup

A short overview of the benchmark setup:

  • Percona Server for MySQL 5.7.21
  • InnoDB storage engine
  • In contrast to the previous benchmark, I enabled foreign keys, used REPEATABLE-READ isolation level, and I used UTF8 character sets. Because of these changes, the results are not really comparable with the previous results.
  • The dataset is the same: sysbench-tpcc with ten tables and 100 warehouses, resulting in a total of 1000 warehouses, and about a 90GB dataset size.
  • I will use innodb_buffer_pool_size 80GB, 70GB, and 60GB to emulate different IO loads and evaluate how that affects binary logs writes.
Initial Results

For the first run, let’s check the results without binary logs vs. with binary log enabled, but with sync_binlog=0:

We can see that results without binary logs are generally better, but we can also see that with binary logs enabled and sync_binlog=0, there are regular drops to 0 for 1-2 seconds. This basically results in stalls in any connected application.

So, enabling binary logs may result in regular application stalls. The reason for this is that there is a limit on the size of the binary log file (max_binlog_size), which is 1GB. When the limit is reached, MySQL has to perform a binary log rotation. With sync_binlog=0, all previous writes to the binary log are cached in the OS cache, and during rotation, MySQL forces synchronous flushing of all changes to disk. This results in complete stalls every ~40 seconds (the amount of time it takes to fill 1GB of binary log in the above tests).
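The ~40-second interval is simple arithmetic: the default 1GB max_binlog_size divided by the sustained binary log write rate. The ~25.6 MB/s rate below is an assumption back-derived from the article's numbers, not a measured figure:

```python
max_binlog_size_mb = 1024.0      # default max_binlog_size = 1GB
binlog_write_rate_mb_s = 25.6    # assumed sustained binlog write rate
seconds_between_stalls = max_binlog_size_mb / binlog_write_rate_mb_s
print(round(seconds_between_stalls))  # → 40
```

A faster write rate (or a smaller max_binlog_size) simply makes the stalls more frequent; only syncing more often makes each stall smaller.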

How can we deal with this? The obvious solution is to enable more frequent sync writes of binary logs. This can be achieved by setting sync_binlog > 0. The popular choice is the most strict, sync_binlog=1, providing the most guarantees. The strict setting also comes with noted performance penalties. I will also test sync_binlog=1000 and sync_binlog=10000, which means perform synchronous writes of binary logs every 1000 and 10000 transactions, respectively.

The Results

The same results in a tabular format with median throughput (tps, more is better)

Bp      sync_binlog=0   sync_binlog=1   sync_binlog=1000   sync_binlog=10000   nobinlog
60 GB   4174.945        3598.12         3950.19            4205.165            4277.955
70 GB   5053.11         4541.985        4714               4997.875            5328.96
80 GB   5701.985        5263.375        5303.145           5664.155            6087.925


Some conclusions we can make:

  • sync_binlog=1 comes with the biggest performance penalty, but with minimal variance. This is comparable to running without binary logs.
  • sync_binlog=0 provides best (for enabled binary logs) performance, but the variance is huge.
  • sync_binlog=1000 is a good compromise, providing better performance than sync_binlog=1 with minimal variance.
  • sync_binlog=10000 might not be a good choice: it shows less variance than sync_binlog=0, but the variance is still big.
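The throughput side of these conclusions (though not the variance, which the table does not capture) can be checked against the numbers above; a small sketch computing each setting's penalty relative to running without binary logs:

```python
# Median tps copied from the table above: buffer pool -> {sync_binlog: tps}
tps = {
    "60GB": {"0": 4174.945, "1": 3598.12, "1000": 3950.19,
             "10000": 4205.165, "nobinlog": 4277.955},
    "80GB": {"0": 5701.985, "1": 5263.375, "1000": 5303.145,
             "10000": 5664.155, "nobinlog": 6087.925},
}
penalty = {}  # percent of throughput lost vs. running without binary logs
for bp, row in tps.items():
    base = row["nobinlog"]
    penalty[bp] = {s: round(100 * (1 - v / base), 1)
                   for s, v in row.items() if s != "nobinlog"}
print(penalty)
```

For both buffer pool sizes, sync_binlog=1 shows the largest penalty, with sync_binlog=1000 in between and sync_binlog=0 and 10000 closest to the nobinlog baseline.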

So what value should we use? This is probably a choice between sync_binlog=1 or some value like 1000. It depends on your use case and your storage solution. In the case of slow storage, sync_binlog=1 may show a bigger penalty compared to what I can see on my enterprise SATA SSD SAMSUNG SM863.
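For reference, the "compromise" choice might look like this as a my.cnf fragment; treat the value 1000 as an illustration to be tuned per workload and storage, not a recommendation:

```ini
[mysqld]
log-bin
max_binlog_size = 1073741824   # 1GB, the default rotation threshold
sync_binlog     = 1000         # fsync the binary log every 1000 commit groups
```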


All of the above results were on an EXT4 filesystem. Let’s compare to XFS. Will it show different throughput and variance?

The median throughput in tabular format:

sync_binlog   Buffer pool (GB)   EXT4       XFS
0             60                 4174.945   3902.055
0             70                 5053.11    4884.075
0             80                 5701.985   5596.025
1             60                 3598.12    3526.545
1             70                 4541.985   4538.455
1             80                 5263.375   5255.38
1000          60                 3950.19    3620.05
1000          70                 4714       4526.49
1000          80                 5303.145   5150.11
10000         60                 4205.165   3874.03
10000         70                 4997.875   4845.85
10000         80                 5664.155   5557.61
No binlog     60                 4277.955   4169.215
No binlog     70                 5328.96    5139.625
No binlog     80                 6087.925   5957.015


We can observe the general trend that median throughput on XFS is a little worse than with EXT4, with practically identical variance.

The difference in throughput is minimal. You can use either XFS or EXT4.

Hardware Spec

Supermicro server:

  • Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
  • 2 sockets / 28 cores / 56 threads
  • Memory: 256GB of RAM
  • Storage: SAMSUNG  SM863 1.9TB Enterprise SSD
  • Filesystem: ext4/xfs
  • Percona-Server-5.7.21-20
  • OS: Ubuntu 16.04.4, kernel 4.13.0-36-generic
Extra Raw Results, Scripts and Config

My goal is to provide fully repeatable benchmarks. To that effect, I’ve shared all the scripts and settings I used in the following GitHub repo:

The post How Binary Logs (and Filesystems) Affect MySQL Performance appeared first on Percona Database Performance Blog.

This Week in Data with Colin Charles 38: Percona Live Europe 2018 and PostgreSQL

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

The week after Percona Live Santa Clara 2018 tends to be much quieter, aided by the fact that I took a few days away during Labor Day. The next thing to look out for is Percona Live Europe 2018, which at this stage is really a note to let you save the dates: November 5-7 2018, at the Radisson Blu, in Frankfurt. There is no call for papers yet, there is no committee, and it is not listed yet at the Percona Live Conferences page. Hang in there! We’ll open the call for papers soon!

Now that Percona is in the PostgreSQL space, it seems prudent that there will also be more PostgreSQL content here. A great resource naturally is Planet PostgreSQL. There also seems to be another resource on The internals of PostgreSQL, and as books go, Mastering PostgreSQL in Application Development sure looks very interesting. Do you have recommended resources?

Releases

Link List

Upcoming appearances

Feedback

I look forward to feedback/tips via e-mail at or on Twitter @bytebot.

The post This Week in Data with Colin Charles 38: Percona Live Europe 2018 and PostgreSQL appeared first on Percona Database Performance Blog.

MySQL for Visual Studio – Rollback during installation/uninstallation

A number of reports have been raised against versions 1.2.7 and 2.0.5 where the installation/uninstallation of MySQL for Visual Studio would roll back in Windows 7 environments, preventing the operation from completing. We are glad to announce this has been addressed in the upcoming 1.2.8 version.


What is causing the failure?

It’s been identified that in most cases the rollback occurs when Visual Studio fails to execute the “devenv /updateconfiguration /log” command. Said command signals Visual Studio to merge the Visual Studio packages on the system and to check the MEF cache for any changes allowing it to register/unregister MySQL for Visual Studio. The failure is caused by faulty permissions when trying to execute any “devenv” command and is attributed to a known issue in Windows 7 environments with Visual Studio 2017 as indicated by several online reports such as:


Changes applied to MySQL for Visual Studio installer

The MySQL for Visual Studio 1.2.8 MSI installer has been updated to attempt to execute the command and to display a warning in the event that the execution fails, thus preventing the installation/uninstallation from rolling back. However, when said warning is raised, users are advised to manually execute the command via the Developer Command Prompt for VS2017 to ensure the changes are reflected in Visual Studio.

The Developer Command Prompt for VS2017 is a tool included as part of the installation of Visual Studio 2017. It can be easily accessed by going to All Programs -> Visual Studio 2017 -> Visual Studio Tools. Alternatively the user can open any command prompt window and navigate to “<drive>:\Program Files (x86)\Microsoft Visual Studio\2017\<vs_version>\Common7\IDE” from where the “devenv /updateconfiguration /log” command can be executed as well.


Installation/uninstallation of MySQL for Visual Studio 1.2.7 / 2.0.5 workaround

As previously stated, the released versions of MySQL for Visual Studio 1.2.7 and 2.0.5 are affected by this issue, however there is a known workaround which is detailed below:

  1. If the user has custom settings set in the Visual Studio environment, proceed to export them via the Tools->Import and Export Settings option as they will be lost when executing step 2.
  2. Go to “%localappdata%\Microsoft\VisualStudio\15.0_<instance_id>” and delete the “privateregistry.bin” file.
  3. Attempt the installation/uninstallation again, it should succeed.
  4. Open Visual Studio. If for any reason Visual Studio fails to start with an “Unknown error” or any other error, again delete the “privateregistry.bin” file and then open Visual Studio.
  5. Import settings if applicable.

Percona Live 2018 Community Report

So, after a whirlwind few days, Percona Live 2018 has been and gone. There was a great energy about the conference, and it was fantastic to meet so many open source database enthusiasts and supporters. A few things that I experienced:

  • Your great willingness to share knowledge. It was a fantastic place to learn for those who have experience from a different field of technology. Almost everyone seemed to be very open and generous with their time.
  • The “superstars” from our industry are not so scary. They are as willing to be open and generous with their experience and views as any of the other attendees, and equally as interested in making new discoveries.
  • There aren’t many times you can sit down to a (community) dinner, to share food and anecdotes with people from USA, UK, Germany and Armenia at the same time. I thoroughly enjoyed the company, and wish there were more opportunities for similar encounters. Thanks to Pythian for setting that up.
  • My Percona colleagues are wonderful, committed human beings with more than a passing interest in music – the Percona Sessions have got to happen…
  • That you can run a very long way in a day between the Santa Clara Convention Center and the Hyatt Regency Hotel.

I had very many positive conversations with delegates. You offered any criticisms along with a suggestion of how we should tweak things for the better. Our community is a creative, generous, problem-solving machine, though I shouldn’t be surprised at that.

So, with only a few more duties to complete, I’d like to thank you for your company. For those that did not make it to this year’s event, I hope that you might be persuaded to join us in the future — either at Percona Live Europe 2018 or at Percona Live 2019.

Packt Prizes

Our media sponsor, Packt, generously provided us with three free ebooks and two free instruction videos as prizes for delegates:

  1. Mastering MongoDB 3.x
  2. MySQL 8 Cookbook
  3. MongoDB Administrator’s Guide
  4. Elastic Databases and Data Processing with AWS [Video]
  5. AWS Administration – Database, Networking, and Beyond [Video]

There are another 10 titles for which we can offer delegates a 50% discount: you should have received your emails. Thanks are due again to Packt.

Community Blog

While I have your attention, I’d like to let you know about the forthcoming Percona community blog. Having been some time in the planning, this is starting really soon, and is like a year-round, online, Percona Live. We already have some keen writers for this, but if you would be interested in creating content (whether written, podcast or webcast) for the community blog, then please get in touch. The brief is very wide — as long as your submission is relevant to the open source database community then it would be welcome.

Finally, I would like to invite feedback on how to make the event shine even brighter — please drop me an email if you have suggestions or ideas. Meanwhile, I hope you enjoy these photographs of the MySQL Community Awards Winners, presented at PL18. You can read more about this community initiative.

Perhaps you’ll be able to join us in Frankfurt in November? Time to start thinking about those submissions for the call for papers!

Or perhaps next year at Percona Live Open Source Database Conference in 2019 – wherever it may be!

Photographs: Randy Tunnell Photography

The post Percona Live 2018 Community Report appeared first on Percona Database Performance Blog.

MySQL for Visual Studio 1.2.8 has been released

Dear MySQL users,

The MySQL Windows Experience Team is proud to announce the release
of MySQL for Visual Studio 1.2.8.

This is a maintenance release for 1.2.x. It can be used for
production environments.

MySQL for Visual Studio is a product that includes all of the
Visual Studio integration functionality to create and manage
MySQL databases when developing .NET applications.

MySQL for Visual Studio is installed using the MySQL Installer
for Windows which comes in 2 versions:
* Full (386.6 MB) which includes a complete set of MySQL products
with their binaries included in the downloaded bundle.
* Web (1.7 MB – a network install) which will just pull MySQL
for Visual Studio over the web and install it when run.

You can download MySQL Installer from our official Downloads page at

MySQL for Visual Studio can also be downloaded by using the product
standalone installer found at

Changes in MySQL for Visual Studio 1.2.8 (2018-05-04, General Availability)

* Functionality Added or Changed

* Bugs Fixed

Functionality Added or Changed

* When Microsoft Visual Studio receives a request to
register (or unregister) MySQL for Visual Studio within
the IDE during the installation, Visual Studio might not
execute the command properly if the host is running
Windows 7. This fix identifies when Visual Studio does
not register MySQL for Visual Studio as requested and
then provides an alert to perform the registration
manually from the Developer Command Prompt for Visual
Studio using the following command:

devenv /updateconfiguration /log

(Bug #27365261, Bug #87902)

* MySQL for Visual Studio now supports the MySQL 8.0
release series (requires Connector/Net 6.9.12, 6.10.7, or
8.0.11) including:

+ MySQL data dictionary, which uses INFORMATION_SCHEMA
tables rather than tables in the mysql database (see
MySQL Data Dictionary

+ The caching_sha2_password authentication plugin
introduced in MySQL 8.0 (see Caching SHA-2 Pluggable
Authentication).

In addition, MySQL for Visual Studio now requires that
.NET Framework 4.5.2 (or later) be installed for use
within Visual Studio 2012 and Visual Studio 2013.

Bugs Fixed

* The Website Configuration Tool was unable to add an
ADO.NET entity data model to an ADO.NET web application
successfully. In addition to aligning the web.config file
with the installed version of Connector/Net, this fix
also disables the Entity Framework 5 selection when the
installed connector no longer includes a provider for EF5
(Connector/Net 6.10 and higher). (Bug #27593219)

* Queries made with a valid MySQL connection received no
response from the server. (Bug #27584991)

* When the MySQL for Visual Studio parser was unable to
parse an SQL statement, it generated an unhandled
exception that caused Visual Studio to exit unexpectedly.
This fix enables the parser to handle exceptions properly
and updates the parser to include support for CREATE USER
syntax in the MySQL 8.0 release series. (Bug #27580303)

* The version for the MySql.Web assembly was incorrectly
extracted, which prevented the assembly from loading
properly and the MySQL Website Configuration Tool from
launching. (Bug #27450530)

* Attempting to open the MySQL Web Configuration Tool, with
MySQL Connector/Net and MySQL for Visual Studio
prerequisites installed properly, displayed an error
message instead of opening the tool. (Bug #27365141, Bug

On Behalf of MySQL Release Engineering Team,
Surabhi Bhat

Partial (Optimised) JSON updates and Replication in MySQL 8.0

MySQL is the most widely used relational database management system in the open source world. MySQL stepped into the NoSQL world by introducing the JSON data type in the MySQL 5.7 release. In this blog post I am going to explain one of the major optimisations made to JSON replication.

This optimisation was introduced in the MySQL 8.0.3 release.

What happened before 8.0.3?

Before MySQL 8.0.3, any change (update) made to a JSON document was logged as the full document in the binary log and replicated as such to the slave. JSON data is stored internally as a blob. Below is an example of how the full document is logged in the binary log.

Example –

Server version - 5.7.22-log MySQL Community Server (GPL)

My binlog settings:
log-bin
binlog_format = ROW
binlog_row_image = minimal

Creating a table with the JSON data type:
mysql> create table json_repl (id int primary key, emp_de json);
Query OK, 0 rows affected (0.01 sec)

Inserting the values:
mysql> insert into json_repl values(1,'{"tag": "desk1 addr1 history1 grade1", "book": "table1 and emp_historicaldata", "emp_id": "aaaa1", "designation": "MySQL DBA", "position_grade": "A1 grade1 and MASTER in MySQL"}');
Query OK, 1 row affected (0.01 sec)

Updating the record with the JSON function JSON_REPLACE:
mysql> update json_repl set emp_de = json_replace(emp_de,'$.emp_id','a1');
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Binlog view:
mysqlbinlog -vvv --base64-output=DECODE-ROWS mydbopslabs13-bin.000011 > log57
cat log57
### UPDATE `json_store`.`json_repl`
### WHERE
###   @1=1 /* INT meta=0 nullable=0 is_null=0 */
### SET
###   @2='\x00\x05\x00<B2>\x00\x27\x00\x03\x00*\x00\x04\x00.\x00\x06\x004\x00\x0b\x00?\x00\x0e\x00\x0cM\x00\x0ci\x00\x0c<87>\x00\x0c<8A>\x00\x0c<94>\x00tagbookemp_iddesignationposition_grade\x1bdesk1 addr1 history1 grade1\x1dtable1 and emp_historicaldata\x02a1\x09MySQL DBA\x1dA1 grade1 and MASTER in MySQL' /* JSON meta=4 nullable=1 is_null=0 */
# at 666596
#180421 23:00:58 server id 1  end_log_pos 666627 CRC32 0xbf0a14ea       Xid = 138659
COMMIT/*!*/;


Here, the JSON_REPLACE function is used to update the value of the emp_id field only, but the full document is still logged in the binary log file.
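To get a feel for the size difference per update, here is a rough sketch comparing the text length of the full document against the partial operation that 8.0.3+ logs. The byte counts are only illustrative: the server actually stores JSON in a compact binary format, not as json.dumps text:

```python
import json

doc = {"tag": "desk1 addr1 history1 grade1",
       "book": "table1 and emp_historicaldata",
       "emp_id": "aaaa1",
       "designation": "MySQL DBA",
       "position_grade": "A1 grade1 and MASTER in MySQL"}

full_image = json.dumps(doc)                       # roughly what 5.7 logs
partial_op = "JSON_REPLACE(@2, '$.emp_id', 'a1')"  # roughly what 8.0.3+ logs
print(len(full_image), len(partial_op))
```

The larger the document and the smaller the changed field, the bigger the win.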

What happened after 8.0.3?

From MySQL 8.0.3, the MySQL community introduced a new variable to avoid logging the full document in the binary log: binlog_row_value_options. The valid values are PARTIAL_JSON or empty (''). More information is in the worklog (WL#2955).

Below, I have explained how replication differs from older versions.

Example –

MySQL Version - MySQL 8.0.3

My binlog settings (my.cnf):
log-bin
binlog_format=row
binlog_row_image=minimal
binlog_row_value_options=partial_json

Creating a table with the JSON data type:
mysql> create table json_repl (id int primary key, emp_de json);
Query OK, 0 rows affected (0.01 sec)

Inserting the values:
mysql> insert into json_repl values(1,'{"tag": "desk1 addr1 history1 grade1", "book": "table1 and emp_historicaldata", "emp_id": "aaaa1", "designation": "MySQL DBA", "position_grade": "A1 grade1 and MASTER in MySQL"}');
Query OK, 1 row affected (0.01 sec)

Updating the record with the JSON function JSON_REPLACE:
mysql> update json_repl set emp_de = json_replace(emp_de,'$.emp_id','a1');
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Binlog view:
mysqlbinlog -vvv --base64-output=DECODE-ROWS mydbopslabs15-bin.000008 > log803
cat log803
### UPDATE `json_store`.`json_repl`
### WHERE
###   @1=1 /* INT meta=0 nullable=0 is_null=0 */
### SET
###   @2=JSON_REPLACE(@2, '$.emp_id', 'a1') /* JSON meta=4 nullable=1 is_null=0 */
# at 516165
#180421 23:02:27 server id 3  end_log_pos 516196 CRC32 0x43c90650       Xid = 127163
COMMIT/*!*/;


Now we can observe only the updated field in the binlog. I hope this clearly explains the JSON optimisation.

How Does it Benefit a Production Environment?

Of course, there are many benefits. One of the major advantages is lower disk usage.

Lower disk usage (IO)  –

By avoiding logging the full document, we can save disk usage (IO). I was curious to know how much disk we can save by this approach, so I made a test with 1 crore (10 million) records in both MySQL 5.7 and MySQL 8.0.3. Below I am sharing my findings and test results.

I initiated the below script to insert the records in both MySQL 5.7 & MySQL 8.0.3.

For INSERT:
[sakthi@mydbopslabs13 mysql]# cat
#!/bin/bash
for i in `seq 2 10000000`; do
  mysql -e "insert into json_store.json_repl values($i,'{\"emp_id\": \"aaaa$i\", \"tag\": \"desk$i addr$i history$i grade$i\", \"book\": \"table$i and emp_historicaldata\", \"position_grade\": \"A$i grade$i and MASTER in MySQL\", \"designation\": \"MySQL DBA\"}');"
done


After inserting the records, I updated every row based on its primary key with the below script, in both MySQL 5.7 and MySQL 8.0.3, so that we could finally analyse the binlog growth.

```
[sakthi@mydbopslabs13 mysql]# cat
#!/bin/bash
# rotate to a fresh binlog before starting the updates
mysql -e "flush logs;"
for i in `seq 1 10000000`;
do
  mysql -e "update json_store.json_repl set emp_de = json_replace(emp_de,'$.emp_id','a$i') where id=$i;"
done
```


I analysed the binlog growth in both MySQL 5.7 and MySQL 8.0.3 after the JSON updates. The result is really good.

Binlog growth for the updates in MySQL 5.7:

```
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 20:05 mydbopslabs13-bin.000017
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 20:34 mydbopslabs13-bin.000018
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 21:02 mydbopslabs13-bin.000019
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 21:29 mydbopslabs13-bin.000020
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 21:57 mydbopslabs13-bin.000021
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 22:31 mydbopslabs13-bin.000022
-rw-r-----. 1 mysql mysql 652M  Apr 21 23:00 mydbopslabs13-bin.000023
```

Total – 6 GB 652 MB

Binlog growth for the updates in MySQL 8.0:

```
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 20:07 mydbopslabs15-bin.000012
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 20:38 mydbopslabs15-bin.000013
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 21:09 mydbopslabs15-bin.000014
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 21:35 mydbopslabs15-bin.000015
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 22:11 mydbopslabs15-bin.000016
-rw-r-----. 1 mysql mysql 74M   Apr 21 22:17 mydbopslabs15-bin.000017
```

Total – 5 GB 74 MB

Binlog Growth Graph –

From the analysis, we have saved approximately 1.5 GB of disk space. The savings will vary based on your JSON data.
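Taking the listed file sizes at face value, the totals can be checked with a quick calculation (treating each 1.1G file as 1.1 GiB):

```shell
#!/bin/sh
# Sum the binlog file sizes from the ls listings above and compute the saving.
awk 'BEGIN {
  v57 = 6 * 1.1 + 652 / 1024   # MySQL 5.7: six 1.1G files + one 652M file
  v80 = 5 * 1.1 + 74 / 1024    # MySQL 8.0: five 1.1G files + one 74M file
  printf "5.7 total: %.2f GB, 8.0 total: %.2f GB, saved: %.2f GB\n", v57, v80, v57 - v80
}'
```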

Less Network Traffic –

We transfer a small amount of data over the network instead of the full document. This approach also saves unnecessary network usage in replication.

Can Avoid Replication Lag –

Replicating a big document (blob) might cause slower writes on the slave, so there is a chance of replication lag. This approach helps avoid that lag.

Limitations –

This only takes effect with the JSON functions JSON_SET, JSON_REPLACE and JSON_REMOVE. Without these functions, the full document is logged even when partial_json is enabled.

The input column and the target column should be the same.

It only affects row-based replication environments. Statement-based replication always works as usual, logging the statement itself.
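To illustrate the limitations above, here is a sketch against the json_repl table from earlier (other_json is a hypothetical extra JSON column, added only for illustration):

```
-- Partial binlog: supported function, source and target columns match
update json_repl set emp_de = json_replace(emp_de, '$.emp_id', 'a1') where id = 1;

-- Full document logged: direct assignment, no JSON_SET/JSON_REPLACE/JSON_REMOVE
update json_repl set emp_de = '{"emp_id": "a1"}' where id = 1;

-- Full document logged: source column (other_json) differs from target column
update json_repl set emp_de = json_replace(other_json, '$.emp_id', 'a1') where id = 1;
```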

Conclusion –

JSON is very good for storing NoSQL-style data, and it is very good for reads. MySQL has introduced JSON support and keeps improving it. JSON will definitely play an important role in making MySQL better. At Mydbops, we keep testing new things in MySQL and related tools, and will be coming back with a new blog soon.

