Planet MySQL

OurSQL Episode 204: Just For Laughs


When we went monthly at the end of 2014, did you worry that we would forget our blooper show? This month's podcast is a blooper reel just for you - about 35 minutes of put-a-smile-on-your-face fun, with insight into how things work - or don't work - behind the scenes.

I hope your 2015 is going well!

PlanetMySQL Voting: Vote UP / Vote DOWN

WebScaleSQL builds for the MySQL Community

We have been looking at the WebScaleSQL project with great excitement. As with any new enhancements to the MySQL world, we need to test extensively to ensure we can give PSCE customers the best advice possible. Since this project is source only, we decided to add WebScaleSQL builds to our repo, so we could examine the changes being introduced by all the different collaborators.

So what is WebScaleSQL?

WebScaleSQL is a collaboration among engineers from several companies that face the same challenges in deploying MySQL at scale, and seek greater performance from a database technology tailored for their needs.

– WebScaleSQL, Frequently Asked Questions

What makes this project so special is the level of collaboration between some of the most prestigious teams working with MySQL technology. WebScaleSQL currently includes contributions from MySQL engineering teams at Alibaba, Facebook, Google, LinkedIn, and Twitter.

The repo is available and is currently producing builds for Debian, with work going on to broaden the builds to CentOS and other distributions.

The repo functions much like any other; use the following steps to import the signing key and add the repository, then download test builds.


apt-key adv --keyserver <keyserver URL> --recv-keys 2570689B
deb <repository URL> main



Let us know if you find this useful and watch out for more posts on the subject as we add more builds.


Business continuity with geographically distributed multi-master MySQL clusters

Global data access can greatly expand the reach of your business. Continuent's multi-site multi-master (MSMM) solutions enable applications to accept write traffic in multiple locations across on-premises and vCloud Air. This includes the following important real-world use cases: Improve performance for globally distributed users registering hardware devices by permitting updates on the

A multi-source replication scenario: 10 masters, 1 slave

A customer asked whether they could have 10 or more masters consolidate all their data into a single central slave. After getting a bit more information from them and seeing the application's functionality, it was clear that the MySQL Labs 5.7.5 multi-source replication preview could be a good candidate. Why?
– Each master is independent from the rest of the masters.
– One-way traffic: there is only one way to update a row, and that’s from the master.
– All the masters use the same schema and table, but no single master will ever need to, nor be able to update a row from another master.
– The primary key is determined by the application and each master's environment.

Multi-source replication is still a Labs preview, but here's what I did to test it out.

First, I read the documentation.

Then I downloaded what I needed from MySQL Labs.


And tried for myself:

cd /opt/mysql/msr
mkdir -p 3100/data 3001/data 3002/data 3003/data 3004/data 3005/data 3006/data 3007/data 3008/data 3009/data 3010/data

How many masters do I want to test? Well.. for now 10, but I’ll see later how far I can go.

My setup will look like the following, all on the same host:

datadirs, port, socket location, server-id (1-10, 100):

3100               the single slave

3001 – 3010   the 10 masters

and a my.cnf that will let me use mysqld_multi and make my life a whole lot easier, as all this is being done on a single machine:

[mysqld_safe]
log-error                       =/opt/mysql/msr/mysqld.log

[mysqld_multi]
#no-log
log                             =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/msr/mysqld_multi.log
mysqld                          =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin/mysqld_safe
mysqladmin                      =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin/mysqladmin
user                            =root
password                        =pass

[mysqld]
basedir                         =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64
performance-schema-instrument   ='%=ON'

[mysqld3100]
server-id                       =3100
mysqld                          =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin/mysqld_safe
ledir                           =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin
port                            =3100
pid-file                        =/opt/mysql/msr/3100/
socket                          =/opt/mysql/msr/3100/mysql.sock
datadir                         =/opt/mysql/msr/3100/data
log-error                       =/opt/mysql/msr/3100/msr.err
innodb_buffer_pool_size         =40M
innodb_file_per_table           =1
innodb_log_buffer_size          =2M
innodb_log_file_size            =12M
innodb_flush_log_at_trx_commit  =2
innodb_data_file_path           =ibdata1:12M;ibdata2:12M:autoextend
innodb_open_files               =50
language                        =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/share/english
table_open_cache                =80
open_files_limit                =100
query_cache_size                =0
max_connections                 =10
user                            =mysql
# Replication
#log-bin                        =3100
binlog-row-image                =minimal
binlog-rows-query-log-events    =1
log-slave-updates               =TRUE
gtid-mode                       =ON
enforce-gtid-consistency        =TRUE
master-info-repository          =TABLE
relay-log-info-repository       =TABLE
sync_binlog                     =1
sync_master_info                =1
slave-parallel-workers          =2
slave_transaction_retries       =0
binlog-checksum                 =CRC32
master-verify-checksum          =1
slave-sql-verify-checksum       =1
binlog_format                   =ROW
report-host                     =pchost
report-port                     =3100
replicate-ignore-db             =nexus

[mysqld3001]
server-id                       =3001
mysqld                          =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin/mysqld_safe
ledir                           =/usr/local/mysql-5.7.5-labs-msr-preview-linux-el6-x86_64/bin
port                            =3001
pid-file                        =/opt/mysql/msr/3001/
...

[mysqld3002]
...

[mysqld3003]
...

[mysqld3010]
...

All entries were the same for each [mysqld300x] except for the obvious, i.e. port, datadir, socket, pid, etc.
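Typing ten near-identical sections is error-prone, so here is a small helper (my own sketch, not part of the original setup) that prints the per-instance entries; append its output to the my.cnf and fill in the remaining common settings as shown above:

```shell
# Print the repetitive [mysqld300x] sections for the ten masters.
# Only the per-instance values (server-id, port, pid-file, socket,
# datadir) vary; everything else is shared.
gen_master_sections() {
  for i in $(seq 3001 3010); do
    printf '[mysqld%s]\n' "$i"
    printf 'server-id =%s\n' "$i"
    printf 'port =%s\n' "$i"
    printf 'pid-file =/opt/mysql/msr/%s/\n' "$i"
    printf 'socket =/opt/mysql/msr/%s/mysql.sock\n' "$i"
    printf 'datadir =/opt/mysql/msr/%s/data\n\n' "$i"
  done
}
gen_master_sections
```

Redirect the output with `gen_master_sections >> my.cnf` and then edit in the common entries.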

And once that’s done, install each server’s datadir, from 3100, 3001 to 3010:

bin/mysql_install_db --user=mysql --datadir=/opt/mysql/msr/3100/data --basedir=/usr/local/mysql-5.7.5-m15-linux-glibc2.5-x86_64
bin/mysqld_multi --defaults-file=/usr/local/mysql/msr/my.cnf start 3100
cat ~/.mysql_secret
bin/mysqladmin -uroot -p -S/opt/mysql/msr/3100/mysql.sock password 'pass'

Remember, this is a 5.7.5 Labs release, so be aware of some changes occurring in this area (bin/mysql_install_db).

Let’s comment out the log-bin entry on all 10 masters first, as I don’t want some things I’m about to do to be logged and cause replication to break before I start it all up:

bin/mysqld_multi --defaults-file=/usr/local/mysql/msr/my.cnf stop 3001-3010
bin/mysqld_multi --defaults-file=/usr/local/mysql/msr/my.cnf start 3001-3010

Now, on all the masters, 3001-3010 and on the slave, 3100, create the table:

create database coredb;
use coredb;
create table recording_data (
  centre_code SET('A','AB','AC','AL','B','BI','BU','C','CA','CE','CO','CU') NOT NULL,
  case_id INT NOT NULL,
  user_email VARCHAR(55) NOT NULL,
  start_info DATETIME NOT NULL,
  end_info DATETIME NOT NULL,
  INDEX (user_email),
  PRIMARY KEY (centre_code,case_id)
);

The app at each master has programming logic and config that forces each user to use that centre's default value for centre_code, with no way to adjust it.

Also, case_id could be an autoincrement, but for simplicity's sake I will force a common id just to prove my point.

These two factors mean that when multi-source replication occurs on the slave, each row will be unique to its master, i.e. no need for conflict resolution or headaches along those lines.
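To make that concrete, a sketch of the guarantee (values illustrative):

```sql
-- Rows arriving from different masters carry different centre_codes,
-- so the composite primary key never collides on the slave:
INSERT INTO recording_data VALUES ('A',  1234567890, '', NOW(), NOW()); -- from master 3001
INSERT INTO recording_data VALUES ('AB', 1234567890, '', NOW(), NOW()); -- from master 3002: ok, distinct PK
-- Only a repeat of ('A', 1234567890) would raise a duplicate-key error,
-- and the app prevents that on the originating master.
```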

Now, for the masters, uncomment log-bin again, as we’ll obviously need it, and restart masters 3001-3010.

Create the replication user on each master (the slave connects from the same host, so it's the same user for all the masters):

create user 'rpl'@'localhost' identified by '';
grant replication slave on *.* to 'rpl'@'localhost';

Let's get replicating: go to the slave and create a channel for each individual master:

change master to master_host='localhost', master_user='rpl', master_port=3001, master_auto_position=1 for channel "CHANNEL1";
change master to master_host='localhost', master_user='rpl', master_port=3002, master_auto_position=1 for channel "CHANNEL2";
change master to master_host='localhost', master_user='rpl', master_port=3003, master_auto_position=1 for channel "CHANNEL3";
...
change master to master_host='localhost', master_user='rpl', master_port=3010, master_auto_position=1 for channel "CHANNEL10";

And start each channel:

start slave for channel "CHANNEL1";
start slave for channel "CHANNEL2";
start slave for channel "CHANNEL3";
...
start slave for channel "CHANNEL10";

Right.. all’s well, but now for some data.

We insert some data, unique to each master, and see what happens:

On master 3001:

 use coredb;
 insert into recording_data values
  ('A',1234567890,'',sysdate()-1,sysdate()+1),
  ('A',1234567891,'',sysdate()-1,sysdate()+1),
  ('A',1234567893,'',sysdate()-1,sysdate()+1);

On master 3002:

 use coredb;
 insert into recording_data values
  ('AB',1234567890,'',sysdate()-1,sysdate()+1),
  ('AB',1234567891,'',sysdate()-1,sysdate()+1),
  ('AB',1234567893,'',sysdate()-1,sysdate()+1);

...

On master 3010:

 use coredb;
 insert into recording_data values
  ('CE',1234567890,'',sysdate()-1,sysdate()+1),
  ('CE',1234567891,'',sysdate()-1,sysdate()+1),
  ('CE',1234567893,'',sysdate()-1,sysdate()+1);

And on the slave 3100, what do we see:

bin/mysql -uroot -p -S/opt/mysql/msr/3100/mysql.sock

mysql> use coredb;
Database changed
mysql> select * from recording_data;
+-------------+------------+------------+---------------------+---------------------+
| centre_code | case_id    | user_email | start_info          | end_info            |
+-------------+------------+------------+---------------------+---------------------+
| A           | 1234567890 |            | 2015-01-26 19:02:08 | 2015-01-26 19:02:10 |
| A           | 1234567891 |            | 2015-01-26 19:02:08 | 2015-01-26 19:02:10 |
| A           | 1234567893 |            | 2015-01-26 19:02:08 | 2015-01-26 19:02:10 |
| AB          | 1234567890 |            | 2015-01-26 19:02:25 | 2015-01-26 19:02:27 |
| AB          | 1234567891 |            | 2015-01-26 19:02:25 | 2015-01-26 19:02:27 |
| AB          | 1234567893 |            | 2015-01-26 19:02:25 | 2015-01-26 19:02:27 |
| AC          | 1234567890 |            | 2015-01-27 12:59:48 | 2015-01-27 12:59:50 |
| AC          | 1234567891 |            | 2015-01-27 12:59:48 | 2015-01-27 12:59:50 |
| AC          | 1234567893 |            | 2015-01-27 12:59:48 | 2015-01-27 12:59:50 |
| AL          | 1234567890 |            | 2015-01-27 13:13:35 | 2015-01-27 13:13:37 |
| AL          | 1234567891 |            | 2015-01-27 13:13:35 | 2015-01-27 13:13:37 |
| AL          | 1234567893 |            | 2015-01-27 13:13:35 | 2015-01-27 13:13:37 |
| B           | 1234567890 |            | 2015-01-27 13:13:56 | 2015-01-27 13:13:58 |
| B           | 1234567891 |            | 2015-01-27 13:13:56 | 2015-01-27 13:13:58 |
| B           | 1234567893 |            | 2015-01-27 13:13:56 | 2015-01-27 13:13:58 |
| BI          | 1234567890 |            | 2015-01-27 13:15:16 | 2015-01-27 13:15:18 |
| BI          | 1234567891 |            | 2015-01-27 13:15:16 | 2015-01-27 13:15:18 |
| BI          | 1234567893 |            | 2015-01-27 13:15:16 | 2015-01-27 13:15:18 |
| BU          | 1234567890 |            | 2015-01-27 13:15:34 | 2015-01-27 13:15:36 |
| BU          | 1234567891 |            | 2015-01-27 13:15:34 | 2015-01-27 13:15:36 |
| BU          | 1234567893 |            | 2015-01-27 13:15:34 | 2015-01-27 13:15:36 |
| C           | 1234567890 |            | 2015-01-27 13:15:52 | 2015-01-27 13:15:54 |
| C           | 1234567891 |            | 2015-01-27 13:15:52 | 2015-01-27 13:15:54 |
| C           | 1234567893 |            | 2015-01-27 13:15:52 | 2015-01-27 13:15:54 |
| CA          | 1234567890 |            | 2015-01-27 13:16:11 | 2015-01-27 13:16:13 |
| CA          | 1234567891 |            | 2015-01-27 13:16:11 | 2015-01-27 13:16:13 |
| CA          | 1234567893 |            | 2015-01-27 13:16:11 | 2015-01-27 13:16:13 |
| CE          | 1234567890 |            | 2015-01-27 13:16:30 | 2015-01-27 13:16:32 |
| CE          | 1234567891 |            | 2015-01-27 13:16:30 | 2015-01-27 13:16:32 |
| CE          | 1234567893 |            | 2015-01-27 13:16:30 | 2015-01-27 13:16:32 |
+-------------+------------+------------+---------------------+---------------------+
30 rows in set (0,00 sec)

And, as mentioned in MySQL Labs: Multi Source Replication – examples, let's now use Performance Schema to see all my masters:

mysql> select * from performance_schema.replication_execute_status_by_worker;
+--------------+-----------+-----------+---------------+----------------------------------------+-------------------+--------------------+----------------------+
| CHANNEL_NAME | WORKER_ID | THREAD_ID | SERVICE_STATE | LAST_SEEN_TRANSACTION                  | LAST_ERROR_NUMBER | LAST_ERROR_MESSAGE | LAST_ERROR_TIMESTAMP |
+--------------+-----------+-----------+---------------+----------------------------------------+-------------------+--------------------+----------------------+
| channel1     |         1 |        27 | ON            | c1b91daa-a55a-11e4-be70-b86b23917877:6 |                 0 |                    | 0000-00-00 00:00:00  |
| channel1     |         2 |        28 | ON            | c1b91daa-a55a-11e4-be70-b86b23917877:7 |                 0 |                    | 0000-00-00 00:00:00  |
| channel2     |         1 |        44 | ON            | fd35c073-a55e-11e4-be8c-b86b23917877:7 |                 0 |                    | 0000-00-00 00:00:00  |
| channel2     |         2 |        45 | ON            | fd35c073-a55e-11e4-be8c-b86b23917877:8 |                 0 |                    | 0000-00-00 00:00:00  |
| channel3     |         1 |        50 | ON            | 3183d919-a569-11e4-bece-b86b23917877:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel3     |         2 |        51 | ON            | 3183d919-a569-11e4-bece-b86b23917877:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel4     |         1 |        54 | ON            | 2205d704-a56c-11e4-bee1-b86b23917877:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel4     |         2 |        55 | ON            | 2205d704-a56c-11e4-bee1-b86b23917877:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel5     |         1 |        58 | ON            | 3e007b4e-a56c-11e4-bee2-b86b23917877:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel5     |         2 |        59 | ON            | 3e007b4e-a56c-11e4-bee2-b86b23917877:4 |                 0 |                    | 0000-00-00 00:00:00  |
| channel6     |         1 |        62 | ON            | fecf246b-a616-11e4-833c-7c7a91bc3176:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel6     |         2 |        63 | ON            | fecf246b-a616-11e4-833c-7c7a91bc3176:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel7     |         1 |        66 | ON            | 6ccfd319-a617-11e4-833f-7c7a91bc3176:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel7     |         2 |        67 | ON            | 6ccfd319-a617-11e4-833f-7c7a91bc3176:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel8     |         1 |        70 | ON            | a800f433-a617-11e4-8341-7c7a91bc3176:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel8     |         2 |        71 | ON            | a800f433-a617-11e4-8341-7c7a91bc3176:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel9     |         1 |        74 | ON            | bd6fac63-a617-11e4-8341-7c7a91bc3176:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel9     |         2 |        75 | ON            | bd6fac63-a617-11e4-8341-7c7a91bc3176:2 |                 0 |                    | 0000-00-00 00:00:00  |
| channel10    |         1 |        78 | ON            | d1d47dec-a618-11e4-8348-7c7a91bc3176:1 |                 0 |                    | 0000-00-00 00:00:00  |
| channel10    |         2 |        79 | ON            | d1d47dec-a618-11e4-8348-7c7a91bc3176:2 |                 0 |                    | 0000-00-00 00:00:00  |
+--------------+-----------+-----------+---------------+----------------------------------------+-------------------+--------------------+----------------------+
20 rows in set (0,00 sec)

So what’s next? In my “to do” list I’ve got:

– Evaluate performance impact on the slave as a new master is added. Today: 10, tomorrow: 50, next week: the www.

– See how the impact on the slave grows, and with it the server resources needed. CPU will be key, I expect, due to the number of channels, but it will also depend on how frequently each master sends its own binlogs and how large they are. Network will be hard for me to look into here, as it's all self-contained.



Load Balancing for MySQL with HAProxy - Webinar Replay in English & French

January 30, 2015, by Severalnines. In this joint webinar series with our friends from the HAProxy team, we covered the concepts around the popular open-source HAProxy load balancer, and demonstrated how to use it with SQL-based database clusters. We also discussed HA strategies for HAProxy with Keepalived and Virtual IP.


Thanks to everyone who participated in these two sessions this week! Please see below for details on next week’s follow up session 'Performance Tuning for HAProxy & MySQL'.


The topics covered this week included: 

  • What is HAProxy?
  • SQL Load balancing for MySQL
  • Failure detection using MySQL health checks
  • High Availability with Keepalived and Virtual IP
  • Use cases: MySQL/MariaDB Galera Cluster, MySQL NDB Cluster and MySQL Replication
  • Alternative methods: Database drivers with inbuilt cluster support, MySQL proxy, MaxScale, ProxySQL


Watch the replay - English Load Balancing MySQL with HAProxy - Webinar Replay - English from Severalnines AB


Watch the replay - French Load Balancing MySQL with HAProxy - Webinar Replay - French from Severalnines AB


Read the slides Load Balancing MySQL with HAProxy - Slides from Severalnines AB


There is a follow up session next week on February 4th:


Performance Tuning for HAProxy and MySQL (English)

Performance Tuning for HAProxy and MySQL (French)


Baptiste Assmann, Product Manager at HAProxy Technologies, will be the main speaker at these sessions. 


The topics that will be covered include: 

  • Inside HAProxy
  • HAProxy multi-process: advantages, limitations, configuration
  • Dynamic re-configuration
  • What can HAProxy tell you about your application and your database
  • Weakness in MySQL client library
  • Hints for short live connections
  • Hints for persistent connections
  • HAProxy active/active failover setups
  • Security considerations




Connectors Updated

Following on the heels of last week’s update to the Java client, the MariaDB project is pleased to today announce updates to both MariaDB Connector/C and Connector/ODBC. They are both Stable (GA) releases.

See the Release Notes and Changelogs for detailed information on each of these releases, which contain many bug fixes and enhancements.

Download Connector/C 2.1.0

Release Notes Changelog

About MariaDB Connector/C

Download Connector/ODBC 1.0.0

Release Notes Changelog

About MariaDB Connector/ODBC

Thanks, and enjoy MariaDB!


Log Buffer #408, A Carnival of the Vanities for DBAs

This Log Buffer Edition covers innovative blog posts from the fields of Oracle, MySQL and SQL Server. Enjoy!

Oracle:
A user reported an ORA-01114 and an ORA-27069 in a 3rd party application running against an Oracle 11.1 database.

Oracle SOA Suite 12c: Multithreaded instance purging with the Java API.

Oracle GoldenGate for Oracle Database has introduced several features in Release

Upgrade to 12c and Plugin – one fast way to move into the world of Oracle Multitenant.

The Oracle Database Resource Manager (the Resource Manager) is an infrastructure that provides granular control of database resources allocated to users, applications, and services, enabling you to manage multiple workloads that are contending for system and database resources.

SQL Server:

Database ownership is an old topic for SQL Server pros.

Using T-SQL to Perform Z-Score Column Normalization in SQL Server.

The APPLY operator allows you to join a record set with a function, and apply the function to every qualifying row of the table (or view).

Dynamic Management Views (DMVs) are a significant and valuable addition to the DBA’s troubleshooting armory, laying bare previously unavailable information regarding the under-the-covers activity of your database sessions and transactions.

Grant Fritchey reviews Midnight DBA’s Minion Reindex, a highly customizable set of scripts that take on the task of rebuilding and reorganizing your indexes.

MySQL:
It’s A New Year – Take Advantage of What MySQL Has To Offer.

MySQL High Availability and Disaster Recovery.

MariaDB Galera Cluster 10.0.16 now available.

Multi-threaded replication with MySQL 5.6: Use GTIDs!

MySQL and the GHOST: glibc gethostbyname buffer overflow.


MariaDB Galera Cluster 10.0.16 now available

Download MariaDB Galera Cluster 10.0.16

Release Notes Changelog What is MariaDB Galera Cluster?

MariaDB APT and YUM Repository Configuration Generator

The MariaDB project is pleased to announce the immediate availability of MariaDB Galera Cluster 10.0.16. This is a Stable (GA) release.

See the Release Notes and Changelog for detailed information on this release and the What is MariaDB Galera Cluster? page in the MariaDB Knowledge Base for general information about the MariaDB Galera Cluster series.

Note: It can take up to 24 hours before all mirrors and repositories are updated with a new release.

Thanks, and enjoy MariaDB!


Speaking at FOSDEM: Pseudo GTID and easy replication management

This coming Sunday I'll be presenting Pseudo GTID and easy replication management at FOSDEM, Brussels.

There's been a lot of development on Pseudo GTID these last few weeks. In this talk I'll show you how you can use Pseudo GTID instead of "normal" GTID to easily repoint your slaves, recover from intermediate master failure, and promote slaves to masters, as well as employ crash-safe replication without crash-safe replication settings.

Moreover, I will show how you can achieve all the above with less constraints than GTID, and for bulk operations -- with less overhead and in shorter time. You will also see that Pseudo GTID is a non intrusive solution which does not require you to change anything in your topologies.

Moral: I'll try and convince you to drop your plans for using GTID in favor of Pseudo GTID.

We will be employing Pseudo GTID as the basis for high availability and failover on many topologies, and as a safety mechanism in other topologies where we employ Binlog servers.


MySQL Backup Strategy – Slides


Hi there. It's been a few months since Percona Live London finished, but I've only now managed to get some time to post the slides from my presentation about backups. Please take a look and feel free to comment.

Pdf Version


GHOST vulnerability (CVE-2015-0235) Percona response

Cloud security company Qualys announced Tuesday a vulnerability that has been present in glibc since version 2.2, released on 2000-11-10 (the complete Qualys announcement may be viewed here). The vulnerability, CVE-2015-0235, has been dubbed “GHOST.”

As the announcement from Qualys indicates, it is believed that MySQL and by extension Percona Server are not affected by this issue.

Percona is in the process of conducting our own review into the issue related to the Percona Server source code – more information will be released as soon as it is available.

In the interim, the current advice is to update the glibc packages for your distributions if they are in fact vulnerable. The C code from the Qualys announcement (section 4 of that document, also available via this gist) may aid in your diagnostics. I also wrote a very quick python script, which you can access here, to help identify processes that may be running the vulnerable glibc.

Compiling the above and executing it will yield an output indicating if your glibc version is believed to be vulnerable or not vulnerable.
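As a rough first check (my own suggestion, not part of the advisory), you can print which glibc version a host runs; the bug was fixed upstream in glibc 2.18, but enterprise distributions backport fixes without changing the version string, so the errata below remain authoritative:

```shell
# Print the glibc version banner, e.g. "ldd (GNU libc) 2.17".
# A version below 2.18 is not proof of vulnerability (distros
# backport fixes), so always cross-check your vendor's errata.
ldd --version | head -n 1
```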

Distribution resource links:
    1. RedHat BZ:
    2. RedHat EL5 Errata:
    3. RedHat EL6 / 7 Errata:
    4. Ubuntu USN: (affects 10.04, 12.04)
    5. Debian security tracker:

Distributions which use musl-libc are not affected by this issue.



Robert Barabas – Percona
Raghavendra Prabhu – Percona
Laura Byrnes – Percona

The post GHOST vulnerability (CVE-2015-0235) Percona response appeared first on MySQL Performance Blog.


Multi-threaded replication with MySQL 5.6: Use GTIDs!

MySQL 5.6 allows you to execute replicated events in parallel as long as data is split across several databases. This feature is named “Multi-Threaded Slave” (MTS) and it is easy to enable by setting slave_parallel_workers to a > 1 value. However if you decide to use MTS without GTIDs, you may run into annoying issues. Let’s look at two of them.
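Enabling MTS needs no server restart; a minimal sketch (4 workers is an arbitrary choice):

```sql
STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_workers = 4;  -- 0 disables MTS, >1 enables it
START SLAVE SQL_THREAD;
```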

Skipping replication errors

When replication stops with an error, a frequent approach is to “ignore now and fix later.” This means you will run SET GLOBAL sql_slave_skip_counter=1 to be able to restart replication as quickly as possible and later use pt-table-checksum/pt-table-sync to resync data on the slave.

Then came the day when I hit this:

mysql> show slave status\G
[...]
Last_SQL_Error: Worker 0 failed executing transaction '' at master log mysql-bin.000017, end_log_pos 1216451; Error 'Duplicate entry '1001' for key 'PRIMARY'' on query. Default database: 'db1'. Query: 'INSERT INTO sbtest1 (id, k, c, pad) VALUES (0, 5320, '49123511666-22272014664-85739796464-62261637750-57593947547-00947134109-73607171516-11063345053-55659776318-82888369235', '11400300639-05875856680-20973514928-29618434959-69429576205')'
Exec_Master_Log_Pos: 1005432

I tried to use the trick:

mysql> set global sql_slave_skip_counter=1;
mysql> start slave;

But it didn't work:
mysql> show slave status\G
[...]
Last_SQL_Error: Worker 0 failed executing transaction '' at master log mysql-bin.000017, end_log_pos 1216451; Error 'Duplicate entry '1001' for key 'PRIMARY'' on query. Default database: 'db1'. Query: 'INSERT INTO sbtest1 (id, k, c, pad) VALUES (0, 5320, '49123511666-22272014664-85739796464-62261637750-57593947547-00947134109-73607171516-11063345053-55659776318-82888369235', '11400300639-05875856680-20973514928-29618434959-69429576205')'
Exec_Master_Log_Pos: 1005882

Note that the position reported with Exec_Master_Log_Pos has moved forward, but I still have my duplicate key error. What’s wrong?

The issue is that the positions reported by SHOW SLAVE STATUS are misleading when using MTS. Quoting the documentation about Exec_Master_Log_Pos:

When using a multi-threaded slave (by setting slave_parallel_workers to a nonzero value in MySQL 5.6.3 and later), the value in this column actually represents a “low-water” mark, before which no uncommitted transactions remain. Because the current implementation allows execution of transactions on different databases in a different order on the slave than on the master, this is not necessarily the position of the most recently executed transaction.

So the solution to my problem is first to make sure that there is no execution gap, and only then to skip the offending event. There is a specific statement for the first part:

mysql> start slave until sql_after_mts_gaps;

And now I can finally skip the error and restart replication:

mysql> set global sql_slave_skip_counter=1;
mysql> start slave;
mysql> show slave status\G
        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

The last thing to do is of course to resync the slave.

Backups
If you cannot trust the output of SHOW SLAVE STATUS to get the current binlog position, it means that taking a backup from a slave with parallel replication is tricky.

For instance, if you run mysqldump --dump-slave=2 to get the binlog position of the master, mysqldump will first run STOP SLAVE and then SHOW SLAVE STATUS. Is stopping the slave sufficient to avoid execution gaps? Actually, no.

The only option then seems to be: run STOP SLAVE followed by START SLAVE UNTIL SQL_AFTER_MTS_GAPS, followed by mysqldump while replication is stopped. Not very handy!

GTIDs to the rescue!

The solution for both issues is to use GTIDs.

They help when you want to skip an event: with GTIDs, you must explicitly specify the transaction you will be skipping, and it doesn't matter whether there are execution gaps.
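A sketch of skipping with GTIDs (the UUID:sequence value below is illustrative; take the real one from the replication error):

```sql
STOP SLAVE;
-- Inject an empty transaction with the GTID of the offending event,
-- so the slave records it as executed and never retries it.
SET GTID_NEXT = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:5';
BEGIN; COMMIT;
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE;
```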

They also help for backups because mysqldump takes the position from gtid_executed which is updated at each transaction commit (XtraBackup does that too).

Conclusion
If your application uses several databases and if you’re fighting with replication lag, MTS can be a great feature for you. But although GTIDs are not technically necessary, you’ll be exposed to tricky situations if you don’t use them.
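In configuration form, a minimal my.cnf sketch for a 5.6 slave running MTS with GTIDs (values illustrative; gtid-mode=ON also requires the binary log settings shown):

```
log-bin                   = mysql-bin
log-slave-updates         = ON
gtid-mode                 = ON
enforce-gtid-consistency  = ON
slave-parallel-workers    = 4
master-info-repository    = TABLE
relay-log-info-repository = TABLE
```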

Is everything rosy when using both GTIDs and MTS? Not exactly… But that will be the topic for a separate post!

By the way, if you are in the Brussels area this weekend, come see me and other great speakers at the MySQL and friends devroom at FOSDEM!

The post Multi-threaded replication with MySQL 5.6: Use GTIDs! appeared first on MySQL Performance Blog.


MySQL and the GHOST: glibc gethostbyname buffer overflow

Qualys announced they had found an exploit in one of the Linux core libraries – glibc – using a buffer overflow technique in gethostbyname(). They have developed a proof of concept in which simply sending a specially crafted e-mail to a mail server enabled them to create a remote shell on the Linux machine. According to Qualys, “This bypasses all existing protections (like ASLR, PIE and NX) on both 32-bit and 64-bit systems.” How does it affect MySQL servers? Read on!

The vulnerable library call provides DNS name resolution and although it’s been obsoleted in favour of other calls, it is still widely used by both legacy and modern applications.

After a quick search, MySQL hardly seems to rely on gethostbyname() at all. A few uses can be found in the MySQL Cluster (NDB) code, but these likely only resolve node names set in the cluster configuration, which poses no risk to system security, as one needs administrative rights in the first place to set those names. However, the MySQL sources carry an example UDF that calls gethostbyname(), so if that example is installed, the system could be vulnerable should a remote attacker manage to pass unfiltered input to the UDF. You may also have developed and installed your own UDF that makes use of the problematic library call, which could likewise make the server vulnerable.

For a dedicated database server the attack vector is probably very limited, but if you have a multi-purpose server then it could be wide open to attacks.

So our conclusion is that it does not affect MySQL directly in any way; however, other software running on the same server can be used as an attack vector, so great care should be taken to eradicate that risk.

Full details of the vulnerability can be found here.


MariaDB 10.0.16 now available

Download MariaDB 10.0.16

  • Release Notes
  • Changelog
  • What is MariaDB 10.0?

MariaDB APT and YUM Repository Configuration Generator

The MariaDB project is pleased to announce the immediate availability of MariaDB 10.0.16. This is a Stable (GA) release.

See the Release Notes and Changelog for detailed information on this release and the What is MariaDB 10.0? page in the MariaDB Knowledge Base for general information about the MariaDB 10.0 series.

Thanks, and enjoy MariaDB!


Log Buffer #407, A Carnival of the Vanities for DBAs

This Log Buffer Edition keeps the aim high and brings a few of the best blog posts from Oracle, SQL Server and MySQL.


Oracle:

3 Modes for Moving Data to the BI Applications DW from a Source Application Database.

JSON for APEX Developers.

Neelakanth Nadgir posted a useful utility that prints out various statistics about the ZFS Adaptive Replacement Cache (ARC).

Obtaining Bonus Depreciation Methods for Oracle Fixed Assets.

Exciting News – Java Cloud Service just got an update – WebLogic Server 12.1.3

SQL Server:

Tracking Database DDL Changes with SQLVer.

While a diminished level of control is certainly a factor to consider when contemplating migration of on-premises systems to Microsoft Azure, especially when dealing with PaaS resources such as Azure SQL Database, you have a range of techniques at your disposal that allow you to control and monitor both the status of and access to your Azure-resident services.

A Custom Execution Method – Level 19 of the Stairway to Integration Services

When a SQL Server database is operating smoothly and performing well, there is no need to be particularly aware of the transaction log, beyond ensuring that every database has an appropriate backup regime and restore plan in place.

SQL Server Reporting Services Basics: Deploying Reports


MySQL:

Mo’ Data, Mo’ Problems

FromDual.en: FromDual Backup Manager for MySQL 1.2.1 has been released

Tracking MySQL query history in long running transactions

MySQL Cluster 7.4.3 RELEASE CANDIDATE now available

Streaming MariaDB backups in the cloud.


MMUG11: Talk about binlog servers at Madrid MySQL Users Group meeting on 29th January 2015

Madrid MySQL Users Group will have its next meeting on Thursday, the 29th of January.

I will be giving a presentation on the MySQL binlog server and how it can be used to help scale reads and be used for other purposes.  If you have (more than) a few slaves this talk might be interesting for you.  The meeting will be in Spanish. I hope to see you there.

Details can be found on the group’s Meetup page here:

The next meeting of the Madrid MySQL Users Group will take place on Thursday, 29th January. I will give a presentation on the MySQL binlog server and how it can be used to help scale database reads, among other uses. The meeting will be in Spanish. I hope to see you there.

More details can be found on the group’s page:


Performance issues and fixes -- MySQL 5.6 Semi-Synchronous Replication

A long time ago I wrote a blog post saying that we had started using Semi-Synchronous replication in production at Facebook. We run our own facebook-5.6 branch of MySQL, which adds many replication (including Semisync) patches; some of them were also merged into WebScaleSQL.

Recently I heard from people in the community that they were suffering from Semisync performance issues in official MySQL 5.6. I took some time to review our previous MySQL bug reports, and realized that some important bugs were either still "verified" or inefficiently fixed. We fixed the two most impactful ones in our branch, so I hadn't paid much attention since then, but people outside Facebook are certainly affected.

In this post, I'm going to describe some effective configurations for getting better Semisync throughput on the master and slaves, backed by simple benchmark numbers. I used three machines (a client, a master and a semisync slave), all running on pure flash and located very close to each other. I created 100 databases, enabled the Multi-Threaded Slave feature, and ran 100 mysqlslap processes (one per database) with 30 concurrent connections each, 3000 concurrent connections in total. All queries were auto-committed inserts, and I used the InnoDB storage engine on both the master and the slaves.

1. Set master_info_repository=TABLE
 MySQL 5.6 and 5.7 have a performance bug that makes writing FILE-based master info (and relay log info) files very expensive. This is especially serious for Semisync replication, since this bug slows down the IO thread. With Semisync, a slow IO thread takes longer to send the ACK back to the master, so it slows down master throughput as well. The default master_info_repository is FILE, so unless you change this parameter to TABLE, you are affected by this bug.
 Here are benchmark results between FILE and TABLE.

5.6.22 master_info_repository=FILE :  5870/s
5.6.22 master_info_repository=TABLE:  7365/s

 These numbers are commits per second on both the master and slave instances. The slave didn't lag in most experiments, thanks to the multi-threaded slave feature.
 Please don't confuse the master_info_repository and relay_log_info_repository parameters. relay_log_info_repository has to be TABLE, otherwise crash-safe slave doesn't work. master_info_repository works with both FILE and TABLE, but I suggest using TABLE for performance reasons.
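For reference, a minimal my.cnf fragment implementing this (a sketch; merge it with your existing replication settings):

```ini
[mysqld]
# TABLE-based repository avoids the expensive per-event file writes that
# slow down the IO thread (and hence the Semisync ACK back to the master).
master_info_repository    = TABLE
# Must be TABLE anyway, otherwise crash-safe slave does not work.
relay_log_info_repository = TABLE
```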

2. Reduce durability on master
 Older 5.6 releases had a bug where slaves couldn't continue replication after a crashed master's recovery, even with fully durable configurations. That bug report has since been closed, but the fix caused a performance regression: the master now holds the LOCK_log mutex for longer, releasing it only after calling fsync(). This certainly fixed the bug, but it hurt performance because LOCK_log is a very hot mutex in 5.6 (both the Binlog Dump thread and application threads need to hold it). Fortunately, reducing durability (that is, setting sync_binlog=0 and innodb_flush_log_at_trx_commit=0|2) mitigates the regression a lot. When using Semisync replication you will almost certainly promote a slave on master failure, so durability on the master does not matter much, which means you can reduce it.

5.6.22 master_info_repository=TABLE, full durability:  7365/s
5.6.22 master_info_repository=TABLE, less durability:  9800/s
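In my.cnf, the relaxed-durability setup described above looks like this (a sketch; only appropriate when you plan to fail over to a slave rather than recover the crashed master):

```ini
[mysqld]
# Relaxed durability on the master: with Semisync you promote a slave on
# master failure, so the master's own crash-durability buys little.
sync_binlog                    = 0
innodb_flush_log_at_trx_commit = 2
```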

3. Loss-Less Semisync
 MySQL 5.7 improved replication performance a lot. Probably the most effective improvement is that the Binlog Dump thread no longer holds the LOCK_log mutex. In addition, 5.7 introduced the "Loss-Less Semisync" feature.
 If you read my previous Semisync blog post carefully, you may have noticed that (1) we backported Loss-Less Semisync from 5.7, and (2) we got better throughput with Loss-Less Semisync than with normal Semisync. This is because Loss-Less Semisync actually reduces mutex contention (on LOCK_commit and LOCK_binlog_). My 5.7 benchmark results were as follows.

5.7 Normal Semisync:    12302/s
5.7 Loss Less Semisync: 14500/s
(master_info_repository=TABLE, less durable)

 Reducing LOCK_log contention in the Binlog Dump thread and introducing Loss-Less Semisync were the major contributors to MySQL 5.7's replication performance improvements. At Facebook, we ported both into our 5.6 branch. It would benefit the community if these were also available in other distributions, since using 5.7 in production will not happen anytime soon.

In addition to the above three configurations, there are a few more considerations for getting good Semisync throughput.

4. Semisync mysqlbinlog
 At Facebook, we extended mysqlbinlog to speak the Semisync protocol, and used it as a Semisync replication reader. On replication slaves, the IO thread and the SQL thread contend on internal mutexes. As I mentioned above, a slow IO thread slows down Semisync master throughput. A Semisync mysqlbinlog doesn't suffer from this slowdown because it has no SQL thread, so using Semisync mysqlbinlog instead of a Semisync slave can improve master throughput.

5.7 Loss Less Semisync + Semisync slave: 14500/s (on both master and slave)
5.7 Loss Less Semisync + Semisync mysqlbinlog + async slave: 48814/s on master, 15363/s on slave

 This shows that Semisync mysqlbinlog improved master throughput. It is actually not such good news, though, because the slave lags a lot. Fundamentally we need to fix the mutex contention between the IO thread and the SQL threads.
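For perspective, the throughput figures quoted in this post line up as follows (simple arithmetic over the numbers above; master-side figures only):

```python
# Master-side commits/s reported in this post, by configuration.
results = {
    "5.6 FILE repo, full durability":        5870,
    "5.6 TABLE repo, full durability":       7365,
    "5.6 TABLE repo, less durability":       9800,
    "5.7 normal Semisync":                  12302,
    "5.7 Loss-Less Semisync":               14500,
    "5.7 Loss-Less + Semisync mysqlbinlog": 48814,
}

baseline = results["5.6 FILE repo, full durability"]
for config, throughput in results.items():
    # Print each configuration's throughput and its speedup over the default.
    print("%-40s %6d/s  %.2fx" % (config, throughput, throughput / float(baseline)))
```

So just switching master_info_repository to TABLE buys roughly 25% over the defaults, before any durability trade-offs.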

 There are also some open performance bugs around GTID; the one in 5.7 is especially serious. If you really need high Semisync throughput, benchmark carefully with GTID enabled (and ask Oracle to fix the bugs!).

Here are the full table definitions and mysqlslap commands I used for the benchmark.

for i in `seq 1 100`
do
  mysql -e "drop database test$i"
  mysql -e "create database test$i"
  mysql test$i -e "create table t1 (id int auto_increment primary key, \
  value int, value2 varchar(100)) engine=innodb"
done

for i in `seq 1 100`
do
  mysqlslap --host=master --query="insert into test$i.t1 \
  values (null, 1, 'abcdefghijklmn')" --number-of-queries=100000 --concurrency=30 &
done


Presenting Undrop for InnoDB Toolkit on Percona Live 2015

I’ll be presenting the Undrop for InnoDB data recovery toolkit at Percona Live 2015. The conference takes place in Santa Clara on 13-16 April.

You may wonder why you should care if you don’t plan to drop any of your production databases. To name a few reasons:

  • Taxes, death and data loss are inevitable.
  • Good knowledge of how InnoDB stores data on disk helps in designing fast and scalable databases.
  • The toolkit can be used to troubleshoot bugs in InnoDB.

So bookmark the session, it’s going to be a great discussion.

Undrop for InnoDB | Percona Live MySQL Conference 2015



The post Presenting Undrop for InnoDB Toolkit on Percona Live 2015 appeared first on Backup and Data Recovery for MySQL.


How to Bootstrap MySQL/MariaDB Galera Cluster

January 27, 2015 By Severalnines

Unlike a standard MySQL server or MySQL Cluster, a MySQL/MariaDB Galera Cluster is started a bit differently. Galera requires you to start one node in the cluster as a reference point before the remaining nodes can join and form the cluster. This process is known as cluster bootstrap. Bootstrapping is the initial step that introduces one database node as the Primary Component; the others then treat it as the reference point to sync up data.


How does it work?


When Galera is started with the bootstrap command on a node, that particular node reaches the Primary state (check the value of wsrep_cluster_status). The remaining nodes just require a normal start command: they will automatically look for the existing Primary Component (PC) in the cluster and join it to form the cluster. Data synchronization then happens through either an incremental state transfer (IST) or a snapshot state transfer (SST) between the joiner and the donor.


So basically, you should only bootstrap the cluster if you want to start a new cluster or when no other node in the cluster is in the Primary state. Care should be taken when choosing the action to take, or else you might end up with split clusters or data loss.
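That rule of thumb can be expressed as a tiny helper. This is a hypothetical sketch; the status values would come from running SHOW STATUS LIKE 'wsrep_cluster_status' on every node you can still reach:

```python
def should_bootstrap(node_statuses):
    """Return True only when no reachable node reports a Primary component.

    node_statuses: wsrep_cluster_status values collected from every node
    you can reach, e.g. ["Primary", "non-Primary", "Disconnected"].
    Bootstrapping while a Primary component exists risks a split cluster.
    """
    return all(status != "Primary" for status in node_statuses)

# A healthy cluster already has a Primary component: join, don't bootstrap.
print(should_bootstrap(["Primary", "Primary", "Primary"]))  # False
# Full outage, no Primary anywhere: bootstrap the most advanced node.
print(should_bootstrap(["Disconnected", "non-Primary"]))    # True
```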


The following example scenarios illustrate when to bootstrap the cluster:


How to start Galera cluster?


The three Galera vendors use different bootstrap commands (as of each product’s latest version). On the first node, run:

  • Codership: $ service mysql bootstrap
  • Percona XtraDB Cluster: $ service mysql bootstrap-pxc
  • MariaDB Galera Cluster: $ service mysql bootstrap


The above command is just a wrapper; what it actually does is start the MySQL instance on that node with gcomm:// as the wsrep_cluster_address value. You can also manually define the variable inside my.cnf and run the standard start/restart command. However, do not forget to change wsrep_cluster_address back again so that it contains the addresses of all nodes after the start.
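As a sketch of that manual alternative (the IP addresses below are placeholders):

```ini
# my.cnf on the FIRST node only, just for the bootstrap:
[mysqld]
wsrep_cluster_address = gcomm://

# After the cluster has formed, restore the full member list and keep it
# in place for subsequent normal restarts, e.g.:
# wsrep_cluster_address = gcomm://192.168.0.1,192.168.0.2,192.168.0.3
```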


read more


Reading RBR binary logs with pt-query-digest

For the purpose of auditing everything that goes on our servers, we’re looking to parse the binary logs of all servers (masters), as with “Anemomaster”. With Row Based Replication this is problematic, since pt-query-digest does not support parsing RBR binary logs (true for 2.2.12, the latest version at this time).

I've written a simple script that translates RBR logs to SBR-like logs, with a little bit of cheating. My interest is that pt-query-digest is able to capture and count the queries, nothing else. By doing some minimal text manipulation on the binary log I'm able to now feed it to pt-query-digest which seems to be happy.

The script of course does not parse the binary log directly; furthermore, it requires the binary log to be extracted via:

mysqlbinlog --verbose --base64-output=DECODE-ROWS your-mysql-binlog-filename.000001

The above adds the interpretation of the RBR entries in the form of (unconventional) statements, commented out, and strips the cryptic RBR text. All that is left is to do a little manipulation on the entry headers and uncomment the interpreted queries.

The script can be found in my gist repositories. Current version is as follows:

#!/usr/bin/python
#
# Convert a Row-Based-Replication binary log to Statement-Based-Replication format, cheating a little.
# This script exists since Percona Toolkit's pt-query-digest cannot digest RBR format. The script
# generates enough for it to work with.
# Expecting standard input
# Expected input is the output of "mysqlbinlog --verbose --base64-output=DECODE-ROWS <binlog_file_name>"
# For example:
# $ mysqlbinlog --verbose --base64-output=DECODE-ROWS mysql-bin.000006 | python | pt-query-digest --type=binlog --order-by Query_time:cnt --group-by fingerprint
#

import fileinput

def convert_rbr_to_pseudo_sbr():
    inside_rbr_statement = False
    for line in fileinput.input():
        line = line.strip()
        if line.startswith("#") and "end_log_pos" in line:
            for rbr_token in ["Update_rows:", "Write_rows:", "Delete_rows:", "Rows_query:", "Table_map:",]:
                if rbr_token in line:
                    line = "%s%s" % (line.split(rbr_token)[0], "Query\tthread_id=1\texec_time=0\terror_code=0")
        if line.startswith("### "):
            # The "### " commented rows are the pseudo-statement interpreted by mysqlbinlog's "--verbose",
            # and which we will feed into pt-query-digest
            inside_rbr_statement = True
            line = line.split(" ", 1)[1].strip()
        else:
            if inside_rbr_statement:
                print("/*!*/;")
                inside_rbr_statement = False
        print(line)

convert_rbr_to_pseudo_sbr()
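To make the transformation concrete, here is the same header-rewriting and uncommenting logic applied to a fabricated four-line excerpt (the sample input is invented for illustration):

```python
# Fabricated sample of "mysqlbinlog --verbose --base64-output=DECODE-ROWS" output.
sample = [
    "#150127 12:00:00 server id 1  end_log_pos 456  Write_rows: table id 70 flags: STMT_END_F",
    "### INSERT INTO `test`.`t1`",
    "### SET",
    "###   @1=1",
]

converted = []
for line in sample:
    line = line.strip()
    if line.startswith("#") and "end_log_pos" in line:
        # Rewrite the RBR event header so it looks like a Query event header.
        for rbr_token in ["Update_rows:", "Write_rows:", "Delete_rows:"]:
            if rbr_token in line:
                line = line.split(rbr_token)[0] + "Query\tthread_id=1\texec_time=0\terror_code=0"
    if line.startswith("### "):
        # Uncomment the pseudo-statement produced by --verbose.
        line = line.split(" ", 1)[1].strip()
    converted.append(line)

print("\n".join(converted))
```

The event header becomes a fake Query header, and the commented pseudo-statement becomes plain SQL that pt-query-digest can fingerprint.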
