Planet MySQL

OurSQL Episode 200: Information Security and Privacy


For our 200th episode, we interview security expert Bruce Schneier. We talk about plenty of topics, including airport security and the TSA, PRISM and the NSA, wholesale surveillance, surveillance backwards in time, finding people who use disposable cellphones, searches and co-travelers, why Facebook does not offer the ability to pay for your account, a bit about Firefox and its propensity to act in the user's interest, and the future of our public information.

PlanetMySQL Voting: Vote UP / Vote DOWN

innodb_flush_log_at_timeout in 5.6

A setting that isn't used much, but could be used more, is innodb_flush_log_at_timeout, which controls how often InnoDB fsyncs the redo log from the OS buffer to disk. Here's how it interacts with innodb_flush_log_at_trx_commit.
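As a hedged sketch (the values here are arbitrary illustrations, not tuning advice): innodb_flush_log_at_timeout only matters when innodb_flush_log_at_trx_commit is 0 or 2, since with the default of 1 the log is flushed at every commit. With a value of 2, the log is written to the OS buffer at each commit but fsynced only once per timeout interval, so an OS crash or power failure can lose up to roughly that many seconds of transactions:

```sql
-- Write the log buffer to the OS at each commit, but rely on the timer for the fsync:
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- fsync the InnoDB log every 10 seconds instead of the default 1 second:
SET GLOBAL innodb_flush_log_at_timeout = 10;
```

Both variables are dynamic in 5.6, so this trade of durability for fewer fsyncs can be tried without a restart.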

Deciphering Galera Version Numbers

Mon, 2014-09-15 11:29, by guillaumelefranc

It's important to understand which Galera version you are running, both to ensure compatibility between MariaDB and the Galera replication library and to check which feature set you have access to. However, Galera version numbering can be difficult to decipher because of its complex scheme.

Version numbering in Galera consists of three different components:

  • Galera wsrep (write-set replication) patch for MariaDB or MySQL
  • Galera Replication Shared Library
  • Galera wsrep API

The patch component version number usually appears together with the API version number in the server version string of MariaDB. Below is a typical message displayed immediately after logging into a MariaDB Galera Cluster with the mariadb client:

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 80475
Server version: 5.5.38-MariaDB-wsrep-log MariaDB Server, wsrep_25.10.r3997

In the last line here, the last string (i.e., wsrep_25.10.r3997) shows the Galera version. The first number after wsrep is the Galera wsrep API version number, which is 25 here. In this situation, you will need to use the Galera shared library with the same API version.

The second number above is the patch release for the MySQL or MariaDB branch to which it relates. In this example, 10 corresponds to the tenth release of the wsrep patch for MariaDB 5.5.

The last component, r3997, refers to the last Bazaar commit in the Galera wsrep MySQL patch source tree. In this case, all of the fixes up to revision 3997 have been included. If you want, you can check the Launchpad site for a complete list of revisions.
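As a quick illustration (not part of the original post), the version string from the example above can be pulled apart with standard shell tools:

```shell
# Split the wsrep version string from the MariaDB server banner.
ver="wsrep_25.10.r3997"
api=$(echo "$ver" | cut -d_ -f2 | cut -d. -f1)   # wsrep API version: 25
patch=$(echo "$ver" | cut -d. -f2)               # wsrep patch release: 10
rev=$(echo "$ver" | cut -d. -f3)                 # Bazaar revision: r3997
echo "API=$api patch=$patch revision=$rev"
```

Running this prints API=25 patch=10 revision=r3997, matching the breakdown above.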

As mentioned before, the Galera replication shared library should have the same version as the Galera wsrep API. If you're installing the library for the first time, check the package version number before installing it. If the library is already installed, you can check which version the server is running by executing the following within the mysql or mariadb client:

SHOW GLOBAL STATUS LIKE 'wsrep_provider_version';
+------------------------+---------------+
| Variable_name          | Value         |
+------------------------+---------------+
| wsrep_provider_version | 25.3.5(rXXXX) |
+------------------------+---------------+

The version numbering of a Galera replication library always follows the form xx.y.z. The first number is the Galera wsrep API version. As mentioned earlier, this must match the API version of the wsrep patch that the server is using. In this example, the API version is 25, the same as in the previous example, so the two are consistent.

The second number is the Galera major release version number. In this example, the library is from the 3.x branch of the Galera replication library. We could also use the 2.x branch, which is also stable, as long as it supports API version 25. The version string would then be something like 25.2.x.

Finally, the last number of the results above is the Galera minor release version number (i.e., 5). In this example, it's the fifth release in the 3.x branch.

So, the results of the SHOW statement above indicate that the server is using the Galera API version 25, major release 3, minor release 5. That's consistent with the MariaDB Galera server, which is using version 25.

Within parentheses in the results of the SHOW statement, there is one more value. This is the Bazaar revision number, which isn't important for our purposes. In fact, it's now obscured because the developers at Codership have moved to GitHub for version control of the Galera library. Git doesn't provide revision numbers; it relies on tagging instead.

Now that we've pulled apart and identified the components of the Galera version numbers, you can see that the numbering method is logical. You only need to ensure that the MariaDB Galera version installed on the server agrees with the versions of the Galera API and library installed alongside it. If they don't agree, you will need to use a different version of one or the other.
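To make that final check concrete, here is a small shell sketch (the version strings are the ones from the examples above) comparing the server's wsrep API version with the library's:

```shell
server_wsrep="wsrep_25.10.r3997"   # from the server version banner
provider="25.3.5"                  # from SHOW GLOBAL STATUS LIKE 'wsrep_provider_version'
server_api=$(echo "$server_wsrep" | cut -d_ -f2 | cut -d. -f1)
library_api=$(echo "$provider" | cut -d. -f1)
if [ "$server_api" = "$library_api" ]; then
    echo "OK: server and library both use wsrep API $server_api"
else
    echo "Mismatch: server API $server_api vs library API $library_api"
fi
```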

Tags: Clustering, DBA, Galera

About the Author: Guillaume Lefranc

Guillaume Lefranc is a Senior Consultant and Remote DBA Services Engineer, delivering performance tuning and high availability services worldwide.


Speaking about libAttachSQL at Percona Live London

As many of you know, I'm actively developing libAttachSQL and am rapidly heading towards the first beta release. For those who don't: libAttachSQL is a lightweight C connector for MySQL servers with a non-blocking API. I am developing it as part of my day job for HP's Advanced Technology Group. It was in part born out of my frustration when dealing with MySQL and eventlet in Python back when I was working on various OpenStack projects, but there are many reasons why it is a good thing for C/C++ applications as well.

What you may not know is that I will be giving a talk at Percona Live London about libAttachSQL, the technology behind it, and the decisions we made to get here. The event is on the 3rd and 4th of November at the Millennium Gloucester Conference Centre. I highly recommend attending if you wish to find out more about libAttachSQL or any of the new things going on in the MySQL world.

As for the project itself, I'm currently working on the prepared statement code, which I hope to have ready in the next few days. 0.4.0 will certainly be a big release in terms of changes. There has been feedback from some big companies, which is awesome to hear, and I have fixed a few problems they found for 0.4.0. Hopefully you will be hearing more about that in the future.

I'll be in London from the 2nd to the 5th of November and am happy to meet with anyone and chat about the work we are doing.

MEB copies binary logs and relay logs to support PITR and cloning of master/slave

With MySQL Enterprise Backup (MEB) 3.9.0 we introduced the full instance backup feature for cloning the MySQL server. Now with MEB 3.11.0 we have enhanced that feature by copying all the files of a master-slave setup: the MySQL server binary logs (referred to below as 'binlogs'), the binary log index file, the slave's relay logs and relay log index file, and the slave's master info and slave info files. As part of a full instance backup, copying binlog files is the default behavior from MEB 3.11.0 onwards. DBAs should be aware that a full instance backup is now bigger than backups taken with older versions of MEB.

As every event on a MySQL production database is written as an entry in the binlog files in a particular format, binlog files can be huge. Backing up huge binlog and/or relay log files should not impact the performance of the MySQL server. Hence, all binlog files except the one currently in use by the server are copied just like the InnoDB .ibd files, without locking tables. The binlog files currently in use by the server, or added after the backup started, are copied under the read lock that MEB acquires for copying meta files and redo logs.

The DBA gets the following benefits:


1) Direct cloning of a master or slave from backup

Earlier, the DBA had to copy binlog files manually in order to set up a master or slave. Now MEB 3.11 by default copies all the files, including the global variables needed for setting up master-slave replication. Hence the DBA can clone a master or slave in the same state as the backed-up server.

Now the DBA need not use the --slave-info option to copy the binlog info for setting up a slave after restore. Because the master and slave info files are copied, the DBA can determine up to which master binlog position the slave SQL thread has executed and the I/O thread has read. With this information, along with the relay logs and binlogs, the DBA can easily set up a slave from the backed-up slave content.

2) Backup of binary logs helps in Point In Time Recovery (PITR)

First, let us understand PITR through an example. Suppose the DBA takes a full backup on Sunday (assume the date is 14-09-2014) and incremental backups on Tuesday (16-09-2014) and Thursday (18-09-2014). The database can then be restored only to its state on Sunday, Tuesday, or Thursday, but not to a point in between, say Monday or Wednesday, because a backup is just a snapshot of the data at the moment it was taken. A backup alone cannot be restored to an intermediate point without a change log. That is where binlogs help: since they capture all the events of a server with timestamps, restoring to an arbitrary point in time, called Point-In-Time Recovery (PITR), requires only the base data, i.e. the full and incremental backups, plus the binlogs.

Let's look at our example. Below are the steps to recover the server to Wednesday 12 PM (assume the date is 17-09-2014):
a) Restore the backups up to the latest backup before the PITR time (here, restore Tuesday's incremental).
b) Extract the SQL statements up to the PITR point from the binlogs of the immediately following incremental backup, using the mysqlbinlog command below (here, get the SQL statements up to Wednesday from Thursday's incremental binlogs binlog.000005, binlog.000006 and binlog.000007):

mysqlbinlog --start-datetime=<latest backup time before PITR time> \
         --stop-datetime=<PITR point> \
         <incremental binlogs from immediate next backup>  > <SQL file>

For our above example, the command is
mysqlbinlog --start-datetime="2014-09-16 12:00:00" \
         --stop-datetime="2014-09-17 12:00:00" \
         binlog.000005 binlog.000006 binlog.000007  > mysql_PITR_restore.sql

Read Point-in-Time (Incremental) Recovery Using the Binary Log for more details about PITR using Binary logs.

c) Execute the extracted SQL statements on the restored server; the server is then restored to the PITR point.

3) Backing up relay logs from a slave server avoids an unnecessary pull of logs from the master once the slave is restored

Let us understand this with an example.

Assume the slave has one relay log containing events for master binlog positions 1 to 100.

The SQL thread on the slave reads from the relay log and applies events on the slave. Now assume the SQL thread has executed statements 1 to 20, while 21 to 100 are yet to be executed.

If the DBA takes a backup without copying the relay log, then when he/she restores the backup as a slave, it asks the master for events from binlog position 21. So the restored slave needs to pull the logs for binlog positions 21 to 100 from the master, requiring more network I/O, since the slave is usually on a different machine.

Since MEB backs up the relay log, the slave can avoid pulling the logs for binlog positions 21 to 100; the restored slave asks the master for positions 101 onwards. This way the slave does not pull logs from the master that are already present in the slave backup, thereby reducing network I/O, which is costlier than disk I/O.

Unlike binary logs, relay logs are usually deleted automatically once applied by the SQL thread, so only a few relay logs exist at any point in time. Therefore, all relay logs are copied for all backup types (full, incremental, partial) without a major impact on backup size and time.

4) Copied binary logs remain consistent with the backup data

Earlier, the DBA had to copy binlog files manually in order to set up a master or slave. The data files were copied by MEB and the binlogs by the DBA at two different times, so the binlog files might not be consistent with the backed-up data.

Let's consider the following example:
1. MEB takes a backup of the server, without binlogs, at 1 PM.
2. The DBA copies the binlogs from the server at 1:30 PM.
Between 1 PM and 1:30 PM, say 100 events are logged in the binlogs.

Now, to use these binlog files, the DBA has to either execute those 100 events on the server or remove them from the binlog files.

Consider another example:

1. The DBA copies the binlogs from the server at 1:30 PM.
2. MEB takes a backup of the server, without binlogs, at 2 PM.
Between 1:30 PM and 2 PM, say 100 events go into the backup data.

Now the DBA has to copy the missing binlog files again from the running server.

With MEB 3.11.0 onwards, the binlogs and the data are copied at the same time, so they are consistent with each other.

Options to skip binlogs/relay logs:
If the DBA is not concerned about backing up binlog files, he/she can use --skip-binlog and --skip-relaylog to skip binlog and relay log files in the backup. It is advisable to use these options only if there is no plan to clone the server or perform PITR.

For a master, to skip only binlogs:
./mysqlbackup --skip-binlog --backup-dir=back_dir --socket=server_sock backup

For a slave, to skip relay logs:
./mysqlbackup --skip-relaylog --backup-dir=back_dir --socket=server_sock backup

For a slave which is also a master, to skip both binlogs and relay logs:
./mysqlbackup --skip-binlog --skip-relaylog --backup-dir=back_dir --socket=server_sock backup

Options for offline backup:
MEB also supports offline backup. In order to copy binlogs and/or relay logs, MEB searches for the default values of log-bin-index (default: host_name-bin.index), relay-log-index (default: host_name-relay-bin.index), relaylog-info-file and master-info-file at the default location, that is, the server's 'datadir'. If MEB finds those files, it backs them up. If those files are configured with different values, the DBA needs to provide the --log-bin-index=PATH, --relay-log-index=PATH, --relaylog-info-file=PATH and --master-info-file=PATH options to MEB in order to copy them.

To enrich the full instance backups that MySQL Enterprise Backup has performed since release 3.9.0, all the replication setup files are included in all backups from 3.11.0 onwards. With these files, as well as all the global variables and plugin details, MEB now gives the DBA everything needed for cloning any server. Read the MEB 3.11.0 documentation for more details and many other great features.


Choosing the Right Data Types for Your MySQL Database

A well designed database uses the most suitable data type for each item. Data type categories are numeric, temporal, character and binary. Follow the ABC rule for choosing data types: Appropriate, Brief, Complete.
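As an illustration of the ABC rule (the table and column names here are invented for this sketch):

```sql
CREATE TABLE order_item (
  item_id    INT UNSIGNED      NOT NULL,  -- Appropriate: numeric key, no negatives needed
  sku        CHAR(8)           NOT NULL,  -- Brief: fixed-length code rather than VARCHAR(255)
  quantity   SMALLINT UNSIGNED NOT NULL,  -- Brief yet Complete: up to 65,535 suffices here
  unit_price DECIMAL(8,2)      NOT NULL,  -- Appropriate: exact decimal for money, not FLOAT
  ordered_at DATETIME          NOT NULL   -- Temporal category
);
```

Each column uses the smallest type that still covers the full range of expected values.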

To learn more about data types and other key topics for getting started on the MySQL Database, take the MySQL for Beginners training course.

You can take this 4-day training course through the following formats:

  • Training-on-Demand: Start training within 24 hours of registration, following the lecture material at your own pace through streaming video and booking time in a lab environment for the exercises, to suit your schedule.
  • Live-Virtual Event: Attend a live event from your own desk, no travel required. Choose from a selection of events on the schedule to suit different timezones.
  • In-Class Event: Travel to an education center to attend a class. Below is a selection of the events already on the schedule.



 Location                         Date               Delivery Language

 Sao Paulo, Brazil                6 October 2014     Brazilian Portuguese
 London, England                  11 November 2014
 Rome, Italy                      3 November 2014
 Nairobi, Kenya                   10 November 2014
 Riga, Latvia                     3 November 2014
 Petaling Jaya, Malaysia          20 October 2014
 Mexico City, Mexico              17 September 2014
 San Pedro Garza Garcia, Mexico   24 November 2014
 Utrecht, Netherlands             3 November 2014

To register for an event, request an additional event or learn more about the authentic MySQL Curriculum, go to


#DBHangOps 09/18/14 -- Postmortems, MySQL SYS, and more!


Hello everybody!

Join in #DBHangOps this Thursday, September 18, 2014, at 11:00am Pacific (18:00 GMT) to participate in the discussion about:

  • Postmortems
    • How do you do postmortems?
    • What's your postmortem process like?
    • Has your postmortem process changed recently?
  • MySQL SYS (from Mark Leith) -- Any particular features you'd like to see?
  • Mixing data with metadata (requested by Shlomi)

Be sure to check out the #DBHangOps Twitter search, the @DBHangOps Twitter feed, or this blog post to get a link for the Google Hangout on Thursday!

See all of you on Thursday!


Making MySQL Better More Quickly

With the upcoming release of MySQL 5.7, I begin to see a problem which I think needs attention, at least for 5.8 or whatever comes next.

  • The GA release cycle is too long, about 2 years, which means 3 years between upgrades in a production environment.
  • More people use MySQL, and the data it holds becomes more important, so playing with development versions, while possible, becomes harder. This is bad for Oracle, as they do not get the feedback they need to adjust the development of new features and have to guess at the right choices.
  • Production DBAs do want new features, and crave them if they make our lives easier or improve performance, but we also have to live in an environment which is sufficiently stable. This is a hard mixture of requirements to work with.
  • In larger environments the transition from one major version to another, even when automated, can take time. If any gotcha comes along, it may interrupt that process and leave us with a mixed environment of old and new, or simply unable to upgrade at all. Usually that pause is not long, but even new minor versions of MySQL are not released that frequently, so going from getting an issue fixed to seeing it released, and then upgrading all servers to the new version, is another round of upgrades.

I would like to see Oracle provide new features and make MySQL better. They are doing that: it is clear that since I started using 5.0 professionally, up to the current 5.7, a huge amount has changed. The product is much more stable and performs much better, but my workload has also increased, so I am still looking for more features and an easier life. I am an optimist, that is for sure.

One issue that I believe holds back earlier experimentation is that MySQL is not modular. Even the engines that you can use in it, if built as plugins, do not seem to be switchable from one minor version to another. This leads to two issues:

  • any breakage or bug (and all software has bugs, that is inevitable) requires you, when it is fixed, to upgrade to a new version. That new version has changes in many different components. Sometimes that is fine, but sometimes it may bring in new bugs which cause their own problems
  • potentially the developers of MySQL could replace a “GA module” with a more experimental version of that module which maybe has more features, could perform better but maybe breaks. Changing a single module is hopefully much safer than changing a full binary for a development version, and that should be much easier to do on spare machines. A module such as this would be something I could much more easily test than installing 5.7.4 on lots of machines.

However, the problem is that MySQL is not modular and that is where several people have explained to me my madness and how hard it is to achieve things like this. My current employer likes to push out changes in small chunks, look at the result of those small changes and then if they seem good, go ahead and do more. If something goes wrong, back it out and look elsewhere to do things. Doing the same on a database server not designed that way may well be hard, but making small changes along these lines would I think longer term help improve things and give the people that use a GA MySQL the opportunity to try out new ideas, give feedback quickly and allow things to evolve.

Inevitably, when you start to build interfaces like this, some interfaces need to change to allow for a larger redesign of the innards of a system. That is fine: when it happens we'll move over to that, and a DEV version will have these new much improved features, and we may have to wait longer for that.

What modules might I be talking about when I talk about modularising MySQL?  I’ll agree I do not know the code other than having glanced at it on several occasions but there are some quite clear functional parts to MySQL:

  • the engines have often been plugins, though now InnoDB is a bit of an exception. I still wonder if that is necessary, whatever MySQL's design. However, these plugins do not seem to have a completely clear interface with MySQL, as I have seen plugins, for example for something like Spider or TokuDB, which work only for a specific MySQL or MariaDB version. That just shows that whatever this interface is, it is not designed to be stable and swappable between different MySQL minor versions. Doing something to make that better would mean that people who build a new engine can build it once for a major version and know that, on binaries built the same way, the files they produce should just plug in unchanged without issue. Am I dreaming? Perhaps, but no-one worries that upgrading my db4 rpm from 4.7.25 to 4.7.29 will break all the applications that use it: the expectation is clear, it should not make any difference at all. Why does something like this not work with MySQL engine code?
  • logging has been rather inconsistent for a long time. I think it may improve in 5.7, but however it’s built, build it as a module. If I want to replace that module with something new that stores all my log data in a Sybase or DB2 database MySQL should not care, assuming the module does the right thing and there are settings to configure this appropriately.  The point being also that if there is a bug in the logging, the bug can be fixed and the module replaced with a bug-free version, without necessarily requiring me to upgrade the whole server.
  • Replication is generally split into 2 parts: the writing to binlogs and the reading of those binlogs from a master, storing them locally and reloading the relay logs and processing them.
    • I have seen bugs in replication, mainly in the more complex SQL thread component where the same change could potentially apply. Swap out the module for a fixed one.
    • MySQL 5.6 was supposed to make life great with replication, so we would not get stuck in a situation where a crashed server comes up out of sync with its master and, because of that, needs to be recloned. Yet even when setting master_info_repository and relay_log_info_repository to TABLE you can have issues. The quick fix implemented by Oracle, relay_log_recovery = 1, sounds great. It is a quick, cheap and cheerful solution which works, assuming you never have delayed slaves. Different environments I maintain do not follow this pattern: I have servers with a deliberate multi-hour delay, which can be useful for recovering from issues. Also, copying large databases between datacentres may take several days, meaning that after starting the system it needs to pull and process logs for several days. A mistaken restart would lose all that data and require it to be downloaded again, which is costly. So I have discussed with colleagues a theoretical improved behaviour of the I/O thread should MySQL crash, but there is no way to test it on the boxes I currently use. Making the I/O thread into a module would make it much easier to try out different ideas on GA boxes and show whether those ideas are really workable or not.
  • The query parser and optimiser in MySQL is supposed to be a horrendous beast that everyone must keep clear of. Improvements are happening, and posts like this are an indication of progress. My understanding is that this beast is spread all over the server code and thus hard to untangle, but from a theoretical point of view doing so would allow alternative optimisers to be usable/pluggable. For example, different optimisers might be better at handling workloads such as batch-based workloads with subqueries, which MySQL is known not to handle well, but which for certain workloads could make a great deal of difference to us all. The MySQL of 5.0 is quite different from the MySQL of today: sharding is now the norm, but that requires help from the app to do all the dirty work. Other options are to use something like Vitess, ScaleBase, or Spider, or some built-in new module which knows about this type of thing and can do this sort of stuff transparently to the application. MySQL Fabric tries to do this at the application level and that's fine, but it adds much more complexity for the application developers, who probably should not really have to worry (too much) about this type of detail. So solving the problem is not the issue here; it's providing hooks to let others try, or simply to swap out version 1 with version 10 and see if version 10 is better and faster, with everything else unchanged.
  • The handling of memory in MySQL has always been interesting to us all. Each engine has traditionally managed the memory it needs itself, and there is no concept of sharing or memory pressure, which can lead to sudden memory explosions due to a changing workload that may kill mysqld (Linux OOM) or trigger swapping (database servers should never swap…). I have seen that in 5.7 there is now some memory instrumentation, and this at least allows looking to see where memory is used. The next step would be to use the same memory management routines, and finally perhaps to add this concept of memory pressure, allowing a large query if needed to page out or reduce the size of the InnoDB buffer pool while it is running; the heavy use of some MyISAM or Aria tables could do the same. Doing that is hard, but we are no longer using a MySQL “toy” database: many large billion-dollar companies depend on MySQL, so this sort of functionality would be most welcome there, I am sure. Changes in this area would certainly need to be made cautiously, but I can envisage swapping out the default 5.8 memory manager for a “new feature” 5.9 version with all the “if it breaks you keep the bits” warnings attached, allowing us to see if problematic memory behaviour is indeed resolved by this new module.

I am sure there are lots of other components of MySQL which could receive the same treatment.

Making these sorts of changes is of course a huge project, and most managers do not see the gain of it, certainly not short term. However, if care is taken as different subsystems are modified, there is an opportunity for making progress and allowing the sort of experimentation I describe. Also, and while Oracle may not see it this way, having a clearer interface and more modular framework would allow others to try different things and replace a module with their own. Oracle do seem to be putting a lot of resources into MySQL, and that is good, but they do not have infinite resources and cannot address every specialised need that we might see. Making it easier, for those who can, to use this hypothetical modular framework provides an opportunity for some things to be done which cannot be done now. Add a bounty feature and let people pay for that; where something is modularised, it will be much easier for them to try to solve problems that come up. In any case, later testing will be easier if these interfaces exist.

This is the way I would like to see MySQL improve. Notice I do not actually talk about functional improvements, but about how to make it easier to experiment with and test new features. This sort of design change would allow those of us who need new features now to test them and perhaps include them in our GA versions. Maybe then the definition of GA will become rather vague if I am using 5.7.10 + innodb 5.8.1 + io_thread_5.8.3 + sql_thread_5.8.6 + event_scheduler….. Support will probably hate the suggestion I have just made, as it would potentially make their life more challenging, but then again I do not see most people playing this game. It is meant for those of us who need it; and if it is not needed at all, fixing specific bugs should still be much easier than now, where you need to fully test a new version to make sure you do not catch another set of new bugs.

If you have got to the end of this, thanks for reading. I need to learn to write less, but I do believe that the reasoning above makes a lot of sense. This can only be done with small changes, with people seeing the idea and trying it out, at least initially on parts of the system which are easy to change. If those work, further progress can be made.

Oracle and MariaDB both want feedback and ideas on where we want MySQL / MariaDB to go. Independently of some of the technical aspects of new features and improvements, this is my 2 cents on one thing I would like to see, and why.

Does it make sense?


MySQL high availability management with ClusterControl

Installing and managing a highly available MySQL infrastructure can be really tedious. Solutions exist to facilitate database and system administrators' tasks, but few of them cover the complete database lifecycle and address all the database infrastructure management requirements. Severalnines' ClusterControl is probably the only solution that covers the full infrastructure lifecycle and is also able to provide the full set of functionalities required by database cluster architectures. In this article, I will show how to install, monitor and administer a database cluster with ClusterControl.


Severalnines is a Swedish company mostly composed of ex-MySQL AB staff. Severalnines provides automation and management software for database clusters, and ClusterControl fits this objective perfectly by providing a full "deploy, manage, monitor, and scale" solution. ClusterControl supports several database cluster technologies, such as Galera Cluster for MySQL, Percona XtraDB Cluster, MariaDB Galera Cluster, MySQL Cluster and MySQL Replication. ClusterControl does not only support MySQL-based clusters but also MongoDB clusters, such as MongoDB Sharded Cluster, MongoDB Replica Set and TokuMX. In this article, we will use Percona XtraDB Cluster to demonstrate ClusterControl's functionalities.


There are two different editions of ClusterControl: the community edition, which provides basic functionality, and the enterprise edition, which provides a full set of features and really responsive support. All the details about the features of both editions can be found on the Severalnines website. In this article, we will detail four main global functionalities covered by ClusterControl:


1. The cluster deployment

2. The cluster management

3. The cluster monitoring

4. The scalability functionalities


The cluster architecture that we chose for the purpose of this article is represented in Figure 1. This cluster is composed of three Percona XtraDB nodes (green), two HAProxy nodes (red) and one ClusterControl node (blue).


Figure 1: Percona XtraDB Cluster architecture

1. Cluster Deployment

As stated in the introduction, ClusterControl can manage several kinds of MySQL or MongoDB clusters. The cluster deployment starts on the Severalnines website, where we choose the kind of cluster we want to install. Once we have selected Percona XtraDB Cluster (Galera), we can select the infrastructure on which we want to deploy the cluster: on-premise, Amazon EC2 or Rackspace. Since we want to install this cluster on our own infrastructure, our choice here is "on-premise".

Then we simply have to fill in the general settings forms by specifying parameters such as operating system, platform, number of cluster nodes, port numbers, OS user, MySQL password, system memory, database size, etc., as presented in Figure 2.


Figure 2: General Settings

Once the general settings forms are filled in, we have to specify the nodes that belong to the Percona XtraDB cluster as well as the storage details.

The first settings are related to the ClusterControl server: its address and memory. There are also details regarding the Apache settings, since the web interface is served by an Apache web server:


Figure 3: ClusterControl settings

Now you can fill in the parameters related to the Percona XtraDB data nodes.


Figure 4: Percona XtraDB nodes settings

Once all settings are entered, a deployment package can be automatically generated through the “Generate Deployment Script” button. We then simply execute it on the ClusterControl server in order to deploy the cluster. Of course, it is still possible to edit the configuration parameters in the my.cnf file located in s9s-galera-1.0.0-/mysql/config/my.cnf.


[root@ClusterControl severalnines]# tar xvzf s9s-galera-percona-2.8.0-rpm.tar.gz

[root@ClusterControl severalnines]# cd s9s-galera-percona-2.8.0-rpm/mysql/scripts/install/

[root@ClusterControl install]# bash ./ 2>&1|tee cc.log


The deployment package will download and install Percona XtraDB Cluster on the database hosts, as well as the ClusterControl components needed to manage the cluster. Once the installation has completed successfully, we can access the ClusterControl web interface via http://ClusterControl


Once logged in to ClusterControl, we are able to view all database systems that are managed and monitored by ClusterControl. This means that you can have several different cluster installations, all managed from one ClusterControl web interface.


Figure 5: ClusterControl Database Clusters

Now the Percona XtraDB cluster is deployed and provides data high availability through its three data nodes. We still have to implement service high availability and service scalability. To do that, we set up two HAProxy nodes in the frontend. Adding an HAProxy node with ClusterControl is a straightforward procedure: a one-page wizard lets us specify the nodes to be included in the load-balancing set and the node that will act as the load balancer, as presented in Figure 6.


Figure 6 : Load balancer installation, using HAProxy

To avoid having a Single Point Of Failure (SPOF), it is strongly advised to add a second HAProxy node by following the same procedure as for adding the first HAProxy node. Then simply add a Virtual IP, using the “Install Keepalived” menu as presented in Figure 7.


 Figure 7: Virtual IP configuration using KeepAlived

2. Cluster Management 

ClusterControl offers a number of administration features, such as: online backup scheduling, configuration management (independently for each MySQL node), database node failover and recovery, schema management, manual start/stop of nodes, process management, automated recovery, database user management, database upgrades/downgrades, adding and removing nodes online, cloning (for Galera clusters), and comparing the status of different cluster nodes.


Unfortunately, presenting all these great management functionalities is not possible in the context of this article. Therefore, we will focus on backup scheduling and user, schema, and configuration management.


a. Backup Scheduling

As far as I remember, MySQL backup has always been a hot topic. ClusterControl offers three backup possibilities for MySQL databases: mysqldump, Percona XtraBackup (full) and Percona XtraBackup (incremental). XtraBackup is a hot-backup facility that does not lock the database during the backup. Scheduling backups and reviewing performed backups is really easy with ClusterControl. It is also possible to start a backup immediately from the backup scheduling interface. Figure 8 presents the backup scheduling screen.


Figure 8: Backup scheduling screen (retouched image for the purpose of this article)

You no longer have to write a purge script to remove old backups: ClusterControl can purge backups automatically once a configurable retention period (from 0 to 365 days) has elapsed.

Unfortunately the restore procedure has to be managed manually since ClusterControl does not provide any graphical interface to restore a backup.


b. User, schema, and configuration management 


We can manage the database schemas, upload dumpfiles, and manage user privileges through the ClusterControl web interface.


Figure 9: MySQL user privileges management


You can also change the my.cnf configuration file, apply the configuration changes across the entire cluster, and orchestrate a rolling restart – if required. Every configuration change is version-controlled.


 Figure 10: MySQL Configuration management

New versions of the database software can be uploaded to ClusterControl, which then automates rolling software upgrades.

Figure 11: Rolling upgrade through ClusterControl interface

A production cluster can easily be cloned, with a full copy of the production data, e.g. for testing purposes.


Figure 12: Cloning Cluster screen

3. Cluster monitoring

With ClusterControl, you not only get the ability to build a cluster from scratch and a full set of cluster management functionalities. It is also a great monitoring tool that provides a number of graphs and indicators, such as the list of top queries (by execution time or occurrence), the running queries, the query histogram, CPU/disk/swap/RAM/network usage, table/database growth, a health check, and a schema analyzer (showing tables without primary keys or with redundant indexes). Furthermore, ClusterControl can record up to 48 different MySQL counters (such as opened tables, connected threads, aborted clients, etc.) and present all of them in charts, along with many other helpful things that a database administrator will surely appreciate.


Figure 13: Database performance graphics with time range and zoom functionalities (retouched image for the purpose of this article)

ClusterControl provides some interesting information regarding database growth for both data and indexes. Figure 14 presents a chart showing the database growth over the last 26 days.


Figure 14: Database growth over the last 26 days


ClusterControl is also able to send e-mail notifications when alerts are raised, and even supports custom alert expressions. The database administrator can also set up warning and critical thresholds for CPU, RAM, disk space, and MySQL memory usage. The following figure represents the resource usage for a given node.


Figure 15: Resources usage for a Master node

Power users can set up custom KPIs, and get alerts in case of threshold breaches.


 Figure 16: Custom KPIs definition


The Health Report consists of a number of performance advisors that automatically examine the configuration and performance of the database servers and alert in case of deviations from best-practice rules.


Figure 17: Health report with performance advisors


4. Scalability functionalities


Sooner or later it will be necessary to add a data node or an HAProxy node to the cluster, or remove one, for scalability or maintenance reasons. With ClusterControl, adding a new node is as easy as selecting the new host and assigning it the role we want in the cluster. ClusterControl will automatically install the packages needed on this new node and make the appropriate configuration changes in order to integrate it into the cluster. Of course, removing a node is just as easy.


 Figure 18: New node addition and "add master" screens




With ClusterControl, Severalnines did a great job! Anybody who has ever tried to build and administer a highly available MySQL architecture using disparate clustering components such as Heartbeat, DRBD (Distributed Replicated Block Device), MySQL replication, or any other high-availability component has surely wished for a solution that provides a complete package. Deploying multiple clustering technologies can become a nightmare. Of course there are solutions such as MMM (Multi-Master Replication Manager for MySQL), but no other solution covers the whole cluster lifecycle while offering such an amazing set of features via a nice web interface.


In addition to the great set of functionalities provided by ClusterControl, there is the Severalnines support: their support team is amazingly efficient and responsive. The response time listed on the Severalnines website is one day, but I never waited more than one hour before getting a first answer.


As stated in the introduction, there are two editions: the community edition, with a limited set of functionalities, is free, whereas the enterprise edition is available under a commercial license and support subscription agreement. This subscription includes the ClusterControl software, upgrades, and 12 support incidents per year. It is also interesting to note that Severalnines and Percona became partners this year.


The summary of my ClusterControl experience is presented in the table below:



Advantages (+) / Drawbacks and limitations (-)

+ Covers the whole cluster lifecycle: installation and upgrades as well as the management and monitoring phases

+ Much easier to use than many other tools that do not even provide half of the ClusterControl functionalities

+ Each operation is registered as a new job: all operations are therefore logged

+ Amazingly reactive support!

- Does not provide backup restore functionalities

- It is not possible to acknowledge alerts or blackout targets



Additional information can be found on the Severalnines website. Since dbi services is a Severalnines partner and has installed this solution at several customer sites, feel free to contact us if you have any additional questions regarding ClusterControl.


PlanetMySQL Voting: Vote UP / Vote DOWN

MySQL Workbench on Fedora

The early release of Fedora 20 disallowed installation of MySQL Workbench, but the current version allows it. Almost like Tom Cruise’s Edge of Tomorrow without the drama. All you need to do is follow my earlier instructions for installing MySQL on Fedora 20. I’d check your kernel to verify it’s supported. You can check that with this command:

shell> uname -r

My Fedora is at the following version:


Then, you can install MySQL Workbench with yum, like this:

shell> sudo yum install mysql-workbench

It generates the following log file, and if you have Oracle 11g XE installed you can ignore the mime-type error:

Loaded plugins: langpacks, refresh-packagekit
Resolving Dependencies
--> Running transaction check
---> Package mysql-workbench-community.x86_64 0:6.1.7-1.fc20 will be installed
--> Processing Dependency: for package: mysql-workbench-community-6.1.7-1.fc20.x86_64
--> Processing Dependency: for package: mysql-workbench-community-6.1.7-1.fc20.x86_64
--> Processing Dependency: for package: mysql-workbench-community-6.1.7-1.fc20.x86_64
--> Processing Dependency: for package: mysql-workbench-community-6.1.7-1.fc20.x86_64
--> Processing Dependency: for package: mysql-workbench-community-6.1.7-1.fc20.x86_64
--> Processing Dependency: for package: mysql-workbench-community-6.1.7-1.fc20.x86_64
--> Processing Dependency: for package: mysql-workbench-community-6.1.7-1.fc20.x86_64
--> Running transaction check
---> Package compat-lua-libs.x86_64 0:5.1.5-1.fc20 will be installed
---> Package ctemplate.x86_64 0:2.2-5.fc20 will be installed
---> Package gtkmm24.x86_64 0:2.24.4-2.fc20 will be installed
---> Package libzip.x86_64 0:0.11.2-1.fc20 will be installed
---> Package tinyxml.x86_64 0:2.6.2-4.fc20 will be installed
---> Package vsqlite++.x86_64 0:0.3.13-3.fc20 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                    Arch    Version        Repository             Size
================================================================================
Installing:
 mysql-workbench-community  x86_64  6.1.7-1.fc20   mysql-tools-community  24 M
Installing for dependencies:
 compat-lua-libs            x86_64  5.1.5-1.fc20   updates               158 k
 ctemplate                  x86_64  2.2-5.fc20     fedora                174 k
 gtkmm24                    x86_64  2.24.4-2.fc20  fedora                748 k
 libzip                     x86_64  0.11.2-1.fc20  updates                59 k
 tinyxml                    x86_64  2.6.2-4.fc20   updates                49 k
 vsqlite++                  x86_64  0.3.13-3.fc20  updates                58 k

Transaction Summary
================================================================================
Install  1 Package (+6 Dependent packages)

Total download size: 26 M
Installed size: 119 M
Is this ok [y/d/N]: y
Downloading packages:
(1/7): compat-lua-libs-5.1.5-1.fc20.x86_64.rpm           | 158 kB  00:01
(2/7): ctemplate-2.2-5.fc20.x86_64.rpm                   | 174 kB  00:01
(3/7): tinyxml-2.6.2-4.fc20.x86_64.rpm                   |  49 kB  00:00
(4/7): gtkmm24-2.24.4-2.fc20.x86_64.rpm                  | 748 kB  00:01
(5/7): vsqlite++-0.3.13-3.fc20.x86_64.rpm                |  58 kB  00:00
(6/7): libzip-0.11.2-1.fc20.x86_64.rpm                   |  59 kB  00:02
(7/7): mysql-workbench-community-6.1.7-1.fc20.x86_64.rpm |  24 MB  00:08
--------------------------------------------------------------------------------
Total                                          2.9 MB/s |  26 MB  00:08
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : gtkmm24-2.24.4-2.fc20.x86_64                               1/7
  Installing : libzip-0.11.2-1.fc20.x86_64                                2/7
  Installing : vsqlite++-0.3.13-3.fc20.x86_64                             3/7
  Installing : ctemplate-2.2-5.fc20.x86_64                                4/7
  Installing : compat-lua-libs-5.1.5-1.fc20.x86_64                        5/7
  Installing : tinyxml-2.6.2-4.fc20.x86_64                                6/7
  Installing : mysql-workbench-community-6.1.7-1.fc20.x86_64              7/7
Error in file "/usr/share/applications/oraclexe-startdb.desktop": "Application/database" is an invalid MIME type ("Application" is an unregistered media type)
  Verifying  : tinyxml-2.6.2-4.fc20.x86_64                                1/7
  Verifying  : compat-lua-libs-5.1.5-1.fc20.x86_64                        2/7
  Verifying  : ctemplate-2.2-5.fc20.x86_64                                3/7
  Verifying  : vsqlite++-0.3.13-3.fc20.x86_64                             4/7
  Verifying  : mysql-workbench-community-6.1.7-1.fc20.x86_64              5/7
  Verifying  : libzip-0.11.2-1.fc20.x86_64                                6/7
  Verifying  : gtkmm24-2.24.4-2.fc20.x86_64                               7/7

Installed:
  mysql-workbench-community.x86_64 0:6.1.7-1.fc20

Dependency Installed:
  compat-lua-libs.x86_64 0:5.1.5-1.fc20  ctemplate.x86_64 0:2.2-5.fc20
  gtkmm24.x86_64 0:2.24.4-2.fc20         libzip.x86_64 0:0.11.2-1.fc20
  tinyxml.x86_64 0:2.6.2-4.fc20          vsqlite++.x86_64 0:0.3.13-3.fc20

Complete!

After successfully installing MySQL Workbench, you can launch it with the following command:

shell> mysql-workbench

It should launch the following MySQL Workbench home page (click on it to see the full size image):

PlanetMySQL Voting: Vote UP / Vote DOWN

The Road to MySQL 5.6: Default Options

When you're testing out a new version of MySQL in a non-production environment there is a temptation to go wild and turn on all kinds of new features.  Especially if you're reading the changelogs or the manual and scanning through options.  You want to start with the most reasonable set of defaults, right?  Maybe you're even doing benchmarks to optimize performance using all the new bells and whistles.

Resist the temptation!  If your goal is to upgrade your production environment then what you really want is to isolate changes.  You want to perform the upgrade with as little impact as possible.  Then you can start turning on features or making changes one-by-one.

Why?  Anytime you're doing a major upgrade to something as fundamental as your core RDBMS, there are many ways things can go wrong.  Performance regressions, incompatible changes, and client/server incompatibilities abound.  If you try to change too many variables at once, you'll often waste a lot of time trying to figure out whether you broke something because of a new variable you introduced -- which can be easily resolved by undoing your change -- or whether you are encountering a fundamentally incompatible issue which needs to be solved.

But wait! Even if you make no changes to your my.cnf, you're still not in the clear. Why?  Because you may run into a default value for an option which is different or incompatible!  Ooops.

Here are some important examples:

  • innodb_file_per_table - This one caused some fun for us.  Normally we want it on, but we have some unit test servers which include a local instance of mysql.  They drop and create a lot of tables for their setup and tear-down so it's a huge performance boost to keep everything in the global tablespace.  In 5.5 you accomplish that by simply omitting the option.  But the default in 5.6 is now enabled!
  • binlog_checksum - This one is subtle.  Binlog checksums change how binlogs are written in a way that is not backwards compatible! And as of 5.6.6 checksums are enabled by default.  This has the potential to cause all kinds of headaches when upgrading, especially if you're running master-master replication.  It's a dynamic variable, so turn it off when upgrading, then enable it later.
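As a concrete starting point, the two defaults above can be pinned back to their 5.5 behavior in my.cnf for the duration of the upgrade window. This is only a minimal sketch covering the options discussed here; the rest of your configuration stays as it was:

```
[mysqld]
# Keep the 5.5 behavior while upgrading; relax these one-by-one afterwards.
innodb_file_per_table = 0      # 5.6 default is ON
binlog_checksum       = NONE   # 5.6.6+ default is CRC32
```

Since binlog_checksum is a dynamic variable, it can also be disabled at runtime with SET GLOBAL binlog_checksum = 'NONE';.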

I turned to pt-config-diff to see if I could generate a list of all configuration differences.  Given two instances with stock my.cnf files you should be able to just diff the variables to figure out what defaults have changed.   There's one catch though -- pt-config-diff only shows variables which actually exist in both versions.  Which means you're going to miss any new options introduced (such is the case with binlog_checksum!)
I had to make a quick manual modification to the script and now I can dump a comprehensive list.  There are quite a few changed values and a lot of new variables!
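Since stock pt-config-diff only reports variables that exist on both servers, the comparison itself is easy to sketch by hand. Below is a small, hypothetical Python sketch that diffs two captured variable sets (e.g. the output of SHOW GLOBAL VARIABLES on each version) and also reports options present on only one side -- the binlog_checksum case; the sample values mirror a few of the changed defaults:

```python
def diff_variables(old, new):
    """Return {name: (old_value, new_value)} for every variable that
    changed value, appeared, or disappeared between two versions.
    A value of None means the variable does not exist in that version."""
    changes = {}
    for name in sorted(set(old) | set(new)):
        old_val = old.get(name)
        new_val = new.get(name)
        if old_val != new_val:
            changes[name] = (old_val, new_val)
    return changes

# Illustrative values only, taken from a 5.5 vs 5.6 comparison.
v55 = {"innodb_file_per_table": "OFF", "back_log": "50"}
v56 = {"innodb_file_per_table": "ON", "back_log": "80",
       "binlog_checksum": "CRC32"}  # new option in 5.6

for name, (old_val, new_val) in diff_variables(v55, v56).items():
    print(name, old_val, "->", new_val)
```

Because absent variables are reported as None, new 5.6-only options show up in the diff instead of being silently skipped.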

Here is a full list:
~160 config differences

Variable 5.5.39 5.6.20
======================================== ==================== ==================
back_log 50 80
bind_address *
binlog_checksum CRC32
binlog_max_flush_queue_time 0
binlog_order_commits ON
binlog_row_image FULL
binlog_rows_query_log_events OFF
binlogging_impossible_mode IGNORE_ERROR
block_encryption_mode aes-128-ecb
core_file OFF
default_tmp_storage_engine InnoDB
disconnect_on_expired_password ON
end_markers_in_json OFF
enforce_gtid_consistency OFF
eq_range_index_dive_limit 10
explicit_defaults_for_timestamp OFF
gtid_mode OFF
have_backup_locks YES
have_snapshot_cloning YES
have_statement_timeout YES
host_cache_size 279
innodb_adaptive_flushing_lwm 10
innodb_adaptive_max_sleep_delay 150000
innodb_api_bk_commit_interval 5
innodb_api_disable_rowlock OFF
innodb_api_enable_binlog OFF
innodb_api_enable_mdl OFF
innodb_api_trx_level 0
innodb_autoextend_increment 8 64
innodb_buffer_pool_dump_at_shutdown OFF
innodb_buffer_pool_dump_now OFF
innodb_buffer_pool_filename ib_buffer_pool
innodb_buffer_pool_instances 1 8
innodb_buffer_pool_load_abort OFF
innodb_buffer_pool_load_at_startup OFF
innodb_buffer_pool_load_now OFF
innodb_change_buffer_max_size 25
innodb_checksum_algorithm innodb
innodb_cleaner_lsn_age_factor high_checkpoint
innodb_cmp_per_index_enabled OFF
innodb_compression_failure_threshold_pct 5
innodb_compression_level 6
innodb_compression_pad_pct_max 50
innodb_concurrency_tickets 500 5000
innodb_data_file_path ibdata1:10M:autoe... ibdata1:12M:aut...
innodb_disable_sort_file_cache OFF
innodb_empty_free_list_algorithm backoff
innodb_file_per_table OFF ON
innodb_flush_log_at_timeout 1
innodb_flush_neighbors 1
innodb_flushing_avg_loops 30
innodb_foreground_preflush exponential_bac...
innodb_ft_cache_size 8000000
innodb_ft_enable_diag_print OFF
innodb_ft_enable_stopword ON
innodb_ft_max_token_size 84
innodb_ft_min_token_size 3
innodb_ft_num_word_optimize 2000
innodb_ft_result_cache_limit 2000000000
innodb_ft_sort_pll_degree 2
innodb_ft_total_cache_size 640000000
innodb_io_capacity_max 2000
innodb_log_arch_dir ./
innodb_log_arch_expire_sec 0
innodb_log_archive OFF
innodb_log_checksum_algorithm innodb
innodb_log_compressed_pages ON
innodb_log_file_size 5242880 50331648
innodb_lru_scan_depth 1024
innodb_max_dirty_pages_pct_lwm 0
innodb_max_purge_lag_delay 0
innodb_old_blocks_time 0 1000
innodb_online_alter_log_max_size 134217728
innodb_open_files 300 2000
innodb_optimize_fulltext_only OFF
innodb_purge_batch_size 20 300
innodb_read_only OFF
innodb_sched_priority_cleaner 19
innodb_sort_buffer_size 1048576
innodb_stats_auto_recalc ON
innodb_stats_on_metadata ON OFF
innodb_stats_persistent ON
innodb_stats_persistent_sample_pages 20
innodb_stats_transient_sample_pages 8
innodb_status_output OFF
innodb_status_output_locks OFF
innodb_sync_array_size 1
innodb_undo_directory .
innodb_undo_logs 128
innodb_undo_tablespaces 0
innodb_version 5.5.39-36.0 5.6.20-68.0
join_buffer_size 131072 262144
log_bin_use_v1_row_events OFF
log_throttle_queries_not_using_indexes 0
long_query_time 10.000000 0.000000
master_info_repository FILE
master_verify_checksum OFF
max_allowed_packet 1048576 4194304
max_connect_errors 10 100
max_statement_time 0
metadata_locks_hash_instances 8
open_files_limit 1024 5000
optimizer_switch index_merge=on,in... index_merge=on,...
optimizer_trace enabled=off,one...
optimizer_trace_features greedy_search=o...
optimizer_trace_limit 1
optimizer_trace_max_mem_size 16384
optimizer_trace_offset -1
performance_schema OFF ON
performance_schema_accounts_size 100
performance_schema_digests_size 10000
performance_schema_events_stages_hist... 10000
performance_schema_events_stages_hist... 10
performance_schema_events_statements_... 10000
performance_schema_events_statements_... 10
performance_schema_hosts_size 100
performance_schema_max_cond_instances 1000 3504
performance_schema_max_file_instances 10000 7693
performance_schema_max_mutex_instances 1000000 16208
performance_schema_max_rwlock_classes 30 40
performance_schema_max_rwlock_instances 1000000 9102
performance_schema_max_socket_classes 10
performance_schema_max_socket_instances 322
performance_schema_max_stage_classes 150
performance_schema_max_statement_classes 179
performance_schema_max_table_handles 100000 4000
performance_schema_max_table_instances 50000 12500
performance_schema_max_thread_instances 1000 402
performance_schema_session_connect_at... 512
performance_schema_setup_actors_size 100
performance_schema_setup_objects_size 100
performance_schema_users_size 100
query_cache_size 0 1048576
query_cache_type ON OFF
relay_log_info_repository FILE
rpl_stop_slave_timeout 31536000
secure_auth OFF ON
server_id_bits 32
server_uuid 0351e47e-3533-1...
sha256_password_private_key_path private_key.pem
sha256_password_public_key_path public_key.pem
slave_allow_batching OFF
slave_checkpoint_group 512
slave_checkpoint_period 300
slave_parallel_workers 0
slave_pending_jobs_size_max 16777216
slave_rows_search_algorithms TABLE_SCAN,INDE...
slave_sql_verify_checksum ON
sort_buffer_size 2097152 262144
sql_mode NO_ENGINE_SUBST...
sync_master_info 0 10000
sync_relay_log 0 10000
sync_relay_log_info 0 10000
table_definition_cache 400 1400
table_open_cache 400 2000
table_open_cache_instances 1
thread_cache_size 0 9
tx_read_only OFF
version 5.5.39-36.0 5.6.20-68.0

This is part 2 in an ongoing series of posts.  Read Part 1
PlanetMySQL Voting: Vote UP / Vote DOWN

Simulating Add Column If Not Exists in MySQL with common_schema

Some MySQL DDL commands such as CREATE TABLE and DROP TABLE support an IF [NOT] EXISTS option which allows you to downgrade the error to a warning if you try to create something that already exists or drop something that doesn't exist.

For example this gives an error:

mysql> drop table sakila.fake_table;
ERROR 1051 (42S02): Unknown table 'sakila.fake_table'

And this gives a warning:

```
mysql> drop table if exists sakila.fake_table;
Query OK, 0 rows affected, 1 warning (0.00 sec)

Note (Code 1051): Unknown table 'sakila.fake_table'
```

You may also want to use IF [NOT] EXISTS for column-level changes such as ADD COLUMN and DROP COLUMN, but MySQL does not support that.

Read on for some examples of how to simulate IF [NOT] EXISTS using the QueryScript language from common_schema.


This will add a new column named "foo" to the table only if it doesn't already exist:

```
call common_schema.run("
  if ( select count(*) = 0
       from information_schema.columns
       where table_schema = 'sakila'
         and table_name = 'film'
         and column_name = 'foo' )
  {
    alter table sakila.film add column foo tinyint unsigned not null default 0;
  }
");
```


This will drop the "foo" column from the table if it exists:

```
call common_schema.run("
  if ( select count(*)
       from information_schema.columns
       where table_schema = 'sakila'
         and table_name = 'film'
         and column_name = 'foo' )
  {
    alter table sakila.film drop column foo;
  }
");
```
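If common_schema is not installed, the same check can be driven from application code. The sketch below is hypothetical: it assumes a DB-API style cursor (e.g. from a MySQL connector), and only the cursor plumbing is invented -- the information_schema test is the same one used in the QueryScript examples above.

```python
# SQL used in the examples above; the ALTER runs only when the check says
# the column is missing.
ADD_FOO = ("alter table sakila.film "
           "add column foo tinyint unsigned not null default 0")

COLUMN_EXISTS = (
    "select count(*) from information_schema.columns "
    "where table_schema = %s and table_name = %s and column_name = %s"
)

def add_column_if_not_exists(cursor, schema, table, column, alter_sql):
    """Run alter_sql only when (schema, table, column) does not exist.
    Returns True if the column was added, False if it was already there."""
    cursor.execute(COLUMN_EXISTS, (schema, table, column))
    (count,) = cursor.fetchone()
    if count == 0:
        cursor.execute(alter_sql)
        return True
    return False
```

Note that the check and the ALTER are two separate statements, so unlike a native IF NOT EXISTS this is not atomic; a concurrent DDL statement could still slip in between them.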

PlanetMySQL Voting: Vote UP / Vote DOWN

OpenStack Live 2015: Call for speakers open through November 9

OpenStack Live 2015: Call for speakers open through Nov. 9

I am proud to announce OpenStack Live, a new annual conference that will run in parallel with the Percona Live MySQL Conference & Expo at the Santa Clara Convention Center in Silicon Valley. The inaugural event, OpenStack Live 2015, is April 13-14, 2015. We are lining up a strong Conference Committee and are now accepting tutorial and breakout session speaking proposals through November 9.

OpenStack Live will focus on the essential elements of making OpenStack work better, with particular emphasis on the critical role of MySQL and the value of Trove. You’ll hear about the hottest current topics, learn about operating a high-performing OpenStack deployment, and listen to top industry leaders describe the future of the OpenStack ecosystem. We are seeking speaking proposals on the following topics:

  • Performance Optimization of OpenStack
  • OpenStack Operations
  • OpenStack Trove
  • Replication and Backup for OpenStack
  • High Availability for OpenStack
  • OpenStack User Stories
  • Monitoring and Tools for OpenStack

The conference features a full day of keynotes, breakout sessions, and Birds of a Feather sessions on April 14, preceded by an optional day of tutorials on April 13. A Monday reception will be held on the exhibit floor, and joint lunches offer you the opportunity to network with both the OpenStack and MySQL communities. The OpenStack Live conference is a great event for users of any level.

As a bonus, OpenStack Live attendees may attend any Percona Live MySQL Conference session during the days of the OpenStack event. Conference only passes are available for April 14 and conference and tutorial passes are available for both April 13 and 14.

If you are using OpenStack and have a story to share – or a skill to teach – then now is the time to put pen to paper (or fingers to keyboard) and write your speaking proposal for either breakout or tutorial sessions (or both). Submissions will be reviewed by the OpenStack Live Conference Committee, which includes:

  • Mark Atwood: Director – Open Source Evangelism for HP Cloud Services
  • Rich Bowen: OpenStack Community Liaison at Red Hat
  • Jason Rouault: Senior Director OpenStack Cloud at Time Warner Cable
  • Peter Boros: Principal Architect at Percona

Presenting at OpenStack Live 2015 is your chance to put your ideas, case studies, best practices and technical knowledge in front of an intelligent, engaged audience of OpenStack users. If selected as a speaker by our Conference Committee, you will receive a complimentary full conference pass.

Public speaking not your thing, or do you just want to learn about the latest and greatest OpenStack technologies, deployments and projects? Then register now and save big with our early bird discount. OpenStack Live 2015 is an ideal opportunity for organizations to connect with the community of OpenStack enthusiasts from Silicon Valley and around the world. The Percona Live MySQL Conference this past April had over 1,100 registered attendees from 40 countries, and the OpenStack Open Source Appreciation Day on the Monday before the conference was fully booked, so don’t delay: register today to save your seat!

We are currently accepting sponsors. You can learn more about sponsorship opportunities here.

I hope to see you at OpenStack Live 2015 next April! And speakers, remember the deadline to submit your proposals is November 9. In the meantime you can learn more by visiting the official OpenStack Live 2015 website.

The post OpenStack Live 2015: Call for speakers open through November 9 appeared first on MySQL Performance Blog.

PlanetMySQL Voting: Vote UP / Vote DOWN

Announcing Planet MySQL: Meta

A couple of months back, I wrote that we were looking to improve the quality of Planet MySQL. Today, I am very excited to share the results of this with the announcement of Planet MySQL: Meta.

What is Meta?

Meta is a new category of posts that appear on Planet MySQL, which is intended for posts that are more social and less technical in nature. That is to say that instead of all posts appearing in one central feed, blog authors will now be able to target their posts for either Planet MySQL or Planet MySQL: Meta.

Readers will also have a choice as to which categories of posts they would like to subscribe to:

The +more above denotes that, as well as reducing noise around the technical posts, we also recognize that there is a social aspect to being part of the MySQL Community. We are encouraging new content on Meta that authors may not have posted before out of fear it was too spammy.

Note for authors: Existing blogs will stay defaulted to Planet MySQL, but for more details on what content belongs where, please see our FAQ.

How can I subscribe to meta?

If you are reading via Planet MySQL, ensure that the tab "Planet MySQL: Meta" is selected. For RSS feed subscriptions, we offer both Planet MySQL and Planet MySQL: Meta.

How can I unsubscribe from meta?

If you are reading via Planet MySQL, ensure that the tab "Planet MySQL" is selected. For RSS feed subscriptions, we offer both Planet MySQL and Planet MySQL: Meta.

How can I continue as things were?

We will continue to offer the option to subscribe to both categories, in a very similar way to the way that Planet MySQL previously operated.

  • Morgan

PlanetMySQL Voting: Vote UP / Vote DOWN

Percona XtraBackup 2.2.4 is now available

Percona is glad to announce the release of Percona XtraBackup 2.2.4 on September 12th 2014. Downloads are available from our download site here and Percona Software Repositories.

Percona XtraBackup enables backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backup.

New Features:

  • Percona XtraBackup has implemented support for Galera GTID auto-recovery. After backing up a server with backup locks support, Percona XtraBackup retrieves the GTID information from the InnoDB trx header on recovery and creates the xtrabackup_galera_info file during that stage.

Bugs Fixed:

  • Percona XtraBackup is now built with system zlib library instead of the older bundled one. Bug fixed #1108016.
  • apt-get source was downloading older version of Percona XtraBackup. Bug fixed #1363259.
  • innobackupex would ignore the innobackupex --databases option when the innobackupex --stream option was not used, and back up all the databases. Bug fixed #569387.
  • rsync package wasn’t a dependency although it is required for the innobackupex --rsync option. Bug fixed #1259436.
  • innobackupex --galera-info was checking only for non-capitalized wsrep_* status variables which was incompatible with MariaDB Galera Cluster 10.0. Bug fixed #1306875.
  • Percona XtraBackup would crash trying to remove absent table from InnoDB data dictionary while preparing a partial backup. Bug fixed #1340717.
  • Percona XtraBackup now supports MariaDB GTID. Bugs fixed #1329539 and #1326967 (Nirbhay Choubey).
  • MariaDB 10.1 is now added to the list of supported servers. Bug fixed #1364398.
  • Percona XtraBackup would fail to restore (copy-back) tables that have partitions with their own tablespace location. Bug fixed #1322658.

Other bugs fixed: #1333570, #1326224, and #1181171.

Release notes with all the bugfixes for Percona XtraBackup 2.2.4 are available in our online documentation. Bugs can be reported on the Launchpad bug tracker.

The post Percona XtraBackup 2.2.4 is now available appeared first on MySQL Performance Blog.


Speaking at Oracle OpenWorld 2014

For those of you lucky enough to come and listen to all the great talks within MySQL Central @ Oracle OpenWorld this year, I’ll also be giving a talk about the MySQL sys schema.

This builds upon the talks that I’ve given in the past around Performance Schema, as well as some of the great looking talks on Performance Schema that are on the schedule this year by other MySQL @ Oracle engineers (see below).

The schedule builder is now live, here are some of the other interesting ones on my list that I’d like to attend:

And of course, if you’re in to monitoring as much as me (and why wouldn’t you be?), be sure to check these out:

Hope to see you there! Come find me, and let's have a beverage of your choice together!


MySQL at the PyCon Japan

The MySQL team is attending the PyCon Japan conference on September 13-15, 2014. Our local colleagues will be available for your questions either at our MySQL booth or around the conference. We are also giving a MySQL Lightning Talk; please watch the conference schedule for any updates.


MySQL Central @ OpenWorld, Focus on the Hands-On Labs

Earlier this week I blogged about the MySQL Central @ OpenWorld tutorials. Let's focus today on the Hands-On Labs (HOLs), giving you the opportunity to get hands-on experience across a number of MySQL areas.

All HOLs will be held at the Nikko hotel. Here is the schedule:

Tuesday, September 30

DevOps Made Easy with MySQL Utilities

Charles Bell, Software Development Manager, Oracle - 10:15 AM - 11:15 AM, Hotel Nikko - Bay View

Practical Introduction to MySQL Monitoring for Oracle Enterprise Manager Users

Carlos Proal Aguilar, Senior Software Developer, Oracle - 3:45 PM - 4:45 PM, Hotel Nikko - Bay View

Deploying MySQL with Oracle ZFS Storage Appliance

John Baton, Associate Quality Assurance Engineer, Oracle; Robert Cummins, Systems Engineer ZFSSA, Oracle; Paul Johnson, Principal Software Engineer, Oracle; John Zabriskie Jr., Software Engineer, Oracle - 6:45 PM - 7:45 PM, Hotel Nikko - Mendocino I/II

Wednesday, October 01

MySQL EXPLAIN in Practice

Øystein Grøvlen, Senior Principal Software Engineer, Oracle - 10:15 AM - 11:15 AM, Hotel Nikko - Peninsula

Protecting Your MySQL Database

Georgi Kodinov, Senior Software Development Manager, Oracle - 11:45 AM - 12:45 PM, Hotel Nikko - Bay View

Using MySQL Workbench Performance Tools

Mike Frank, Senior Product Manager, Oracle; Alfredo Kojima, Senior Software Development Manager, Oracle - 1:15 PM - 3:15 PM, Hotel Nikko - Bay View

Thursday, October 02

MySQL Troubleshooting with the Performance Schema

Lig Isler-Turmelle, Principal Technical Support Engineer, Oracle; Sveta Smirnova, Senior Principal Technical Support Engineer, Oracle - 8:30 AM - 9:30 AM, Hotel Nikko - Monterey

MySQL Enterprise Edition Features in Practice

Matt Lord, MySQL Product Manager, Oracle - 10:00 AM - 12:00 PM, Hotel Nikko - Monterey

How to Run Efficient, High-Performance, Secure MySQL Backups

Mike Frank, Senior Product Manager, Oracle - 2:30 PM - 3:30 PM, Hotel Nikko - Bay View

All HOLs have now been added to the "Focus On" MySQL document, helping you choose what session to attend when.

MySQL Central @ OpenWorld is only 17 days away. Not registered yet? Sign up now to save $200!


Summer Progress &amp; Collaboration

Fri, 2014-09-12 07:15 patriksallner

Over the summer, SkySQL has taken big strides in our collaboration with strategic partners. Our relationship with IBM has deepened and broadened as the porting and optimization of MariaDB on Power8 servers has progressed. The performance results look very promising and this collaboration is already opening some very exciting discussions with lead customers.

We also continue to work closely with Red Hat as the pace of upgrades to RHEL7 with MariaDB included is accelerating. Both Rackspace and Pivotal have recently integrated MariaDB into their cloud architectures and we are working with Tesora to integrate MariaDB into the OpenStack Trove project.

These partnerships are further growing the adoption of MariaDB among both large enterprises and the broader open source community. This has led to increased interest in MariaDB Enterprise and MariaDB Enterprise Cluster, which are tightly integrated subscription offerings for demanding enterprise environments.

Finally, we are sad to see the departure of Ivan Zoratti from the company. He has been with us since the early days and played a crucial role in building deep relationships with many key customers. Ivan built a vision for the suite of tools and solutions around the database server. After the merger with MariaDB, this work evolved into what is today MariaDB Enterprise. In addition, Ivan has been instrumental in the creation of the MaxScale proxy system, which is now in beta and will be production-ready shortly.

We wish Ivan all the best in his new challenges. We are now actively looking for candidates for the CTO position. Please spread the word and encourage any candidates matching the criteria to apply.

Tags: Business About the Author Patrik Sallner

Patrik joined as CEO in 2012 after running the Content Cloud business at F-Secure, where he also led the Professional Services function. Earlier, Patrik built up and led the Hosting Line of Business at Nokia Siemens Networks. Patrik has a long background with Nokia, where he held various management positions in mobile phone product development, strategy and innovation, including running a cross-functional Corporate Strategy unit called Insight & Foresight. He also worked for several years in management consulting with McKinsey & Company in their Paris and Helsinki offices.

Patrik has an MSc in Technology Management from Helsinki University of Technology, as well as an MBA from the Wharton School and an MA in International Studies from the University of Pennsylvania.


OpenStack: A MySQL DBA Perspective – Sept. 17 webinar

OpenStack: A MySQL DBA Perspective

I’ll have the pleasure of presenting a webinar titled “OpenStack: A MySQL DBA Perspective” next Wednesday, September 17, at 10 a.m. PDT (1 p.m. EDT). Everyone is invited.

The webinar will be divided into two parts. The first part will cover how MySQL can be used by the OpenStack infrastructure including the expected load, high-availability solutions and geo-DR.

The second part will focus on the use of MySQL within an OpenStack cloud. We’ll look into the various options that are available, both the traditional ones and Trove. We’ll also discuss the block device options with regard to MySQL performance and, finally, we’ll review the high-availability implications of running MySQL in an OpenStack cloud.

Register here. I look forward to your questions, and if you have any questions related to OpenStack that I can help with in advance of the webinar, please feel free to post them in the comments section below. I’ll write a follow-up post after the webinar to recap all related questions and answers. I’ll also provide the slides.

See you next Wednesday!

The post OpenStack: A MySQL DBA Perspective – Sept. 17 webinar appeared first on MySQL Performance Blog.
