Planet MySQL

Monitoring MySQL from Oracle Enterprise Manager

I wanted to quickly remind everyone that we recently announced the GA release of Oracle Enterprise Manager for MySQL.

We know that many existing Oracle Database customers are also using MySQL, so we're very excited that you can now manage your entire hardware and software stack, including MySQL, all from within a single tool.

I also wanted to take the opportunity to answer a few common questions that I've seen come up over the last few weeks:

Q: Can I use the new Plugin with Oracle Enterprise Manager 11g (Grid Control)?
A: No. The Plugin requires Oracle Enterprise Manager 12c release 4 or later.

Q: Where can I download the Plugin?
A: The Plugin is available via Self Update within Oracle Enterprise Manager 12c release 4 and later. It's also available via My Oracle Support and the Oracle Software Delivery Cloud. You can find more information about the installation process here.

Q: Can I monitor MySQL Community Edition (GPL licensed) Targets?
A: Yes. There are no technical restrictions or behavioral differences in the Plugin depending on whether the MySQL Target is Community or Enterprise Edition. The licensed use of the Plugin to monitor that Target, however, is only covered by the 30-day trial license or a MySQL Enterprise license. If you have a valid license that covers the MySQL Target being monitored, it does not matter which Edition of MySQL the Target is running.

Q: Where can I find documentation about the Plugin?
A: You can find it here and here.

We look forward to hearing your feedback on this new addition to MySQL Enterprise! Please let us know if you have any questions, encounter any problems, or have any feature requests. You can also engage in a discussion on our forums. Lastly, feel free to reach out to me directly as well, either here in the comments or via email. Thank you for using MySQL!

PlanetMySQL Voting: Vote UP / Vote DOWN

Benchmarking Joomla

This post recently caught my attention on Planet MySQL. If you haven’t read it yet, I suggest that you go and do so, and also read the comments. I think Matthew’s request for the queries so that others can run comparative benchmarks is very interesting, and while I don’t have access to the queries used to produce […] Related posts:
  1. Updated mysql-proxy benchmarking script (for proxy 0.7) My previous post contained a lua script for MySQL proxy...
  2. Using MySQL Proxy to benchmark query performance By transparently sitting between client and server on each request,...
  3. iptables trick to limit concurrent tcp connections This is sort of a self-documenting post, and a self-support...

Why JSON is bad for applications

Today I read an article about how company X has improved things by, amongst other changes, ditching JSON after two years of using it. Before I start on this subject I should say that JSON does have its place. If you have a web application where a browser is talking to a web server, and in particular one that uses JavaScript, then JSON is a good fit.

I've discussed this issue several times before with Brian Aker, who works with me at HP's Advanced Technology Group, and in the past I have been hit with the issues I'm going to talk about here.

JSON is human readable and easy to parse, that cannot be denied, and for prototyping it is good in a pinch. The first problem comes when you need to validate data. I've been stung many times by one end reading/writing the JSON in a slightly different format from the other end, and the end result is never pretty. This is one advantage that XML and SOAP have over JSON, since validation is easier. I'm personally not a fan of XML, but there are many who are.
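As a tiny illustration of that failure mode (the key names here are invented), nothing in the JSON layer itself catches a naming mismatch between the two ends; the data parses fine and the bug only surfaces later:

```python
import json

# One end serializes with snake_case keys...
producer_msg = json.dumps({"user_id": 42, "name": "alice"})

# ...while the other end expects camelCase. json.loads() succeeds,
# so the mismatch is only discovered when a key lookup quietly fails.
decoded = json.loads(producer_msg)
user = decoded.get("userId")   # silently returns None
print(user is None)            # True: no parse error ever flagged the mismatch
```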

There are additional problems when you start using mobile platforms. Mobile networks are unreliable: you may have a good 3G signal but only get dial-up speeds through it due to all the other users. JSON is verbose, and XML more so, which requires more data transfer. While this can be mitigated with protocol compression, that requires additional decoding on the client side. In addition, data conversion will be needed in many cases for numeric fields.

The biggest problem with JSON is versioning. As you add more features to your application, there will likely come a time when you need to change the data structure of your messages. Often you can't guarantee that your client is using the same version of the software as your server, so backwards and forwards compatibility problems arise. Resolving these often makes the JSON messages very complex to create and decode. This is not as much of a problem for web applications, because the browser usually grabs an updated version of the JavaScript on execution, so changing the data format at any time is easy as long as both ends agree on the format.

The solution

For many applications the data you are sending is directly from a database or at least data that has been modified since being read from a database. So you will likely want the data model for your messages to match this as much as possible. This is where Google's Protocol Buffers fit nicely.

Protocol Buffers let you specify a schema for the data in a human-readable format that actually looks a little like a database schema. They automatically validate the data for you and have versioning built in, which makes it easy to keep your code backwards and forwards compatible.
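As an illustration, a Protocol Buffers schema might look like the following sketch (the message and field names are invented, not from the article); the numbered fields are what give it built-in versioning:

```proto
// A hypothetical message definition. Each field has a stable number,
// so new optional fields can be added later without breaking old readers.
message User {
  required int64  id    = 1;
  optional string name  = 2;
  optional string email = 3;  // added in "v2"; old clients simply ignore it
}
```

Old clients skip unknown field numbers and new clients treat missing optional fields as absent, which is exactly the backwards/forwards compatibility story described above.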

There is a positive and negative side to the data transfer of Protocol Buffers. It is a binary protocol. This means it takes up minimal bandwidth on the wire but also means that it is very hard to interpret the data without the schema. The same could be said if you were given InnoDB table data without the schemas. It also means it may be possible to compress the data further with something like LZO or DEFLATE.

I recommend application developers consider Protocol Buffers instead of JSON when they are next developing a server/client application.


Log Buffer #395, A Carnival of the Vanities for DBAs

This Log Buffer Edition ventures out in the fields of Oracle, SQL Server and MySQL and picks some of the coolest blog posts for this week.


In Oracle, if we add a column to a table which is NOT NULL, we are allowed to do it directly, in a single statement, as long as we supply a DEFAULT value to populate any pre-existing rows.

There have been rumblings from the HPC community indicating a general suspicion of and disdain for Big Data technology which would lead one to believe that whatever Google, Facebook and Twitter do with their supercomputers is not important enough to warrant seriousness—that social supercomputing is simply not worthy.

Work-around Instance Migration Limits of BPM Suite 11g.

Oracle Event Processing 12c: java errors when deploying a new OEP project.

Creating a WebLogic 12c Data Source Connection to Pivotal GemFireXD 1.3.

SQL Server:

It sounds simple enough. Either your column will always have a value or it may not. Yet somehow such a seemingly simple decision can become a never-ending debate where database schema begins to resemble superstition and designing effective tables seems more contentious than you expected it to be.

There’s a joke doing the rounds at SQL conferences and seminars: three DBAs walk into a NoSQL bar and leave when they can’t find a table.

Stairway to AlwaysOn Level 3: Infrastructure 101.

How to compare two databases and email the results to every one who needs to know.

Free eBook: Troubleshooting SQL Server: A Guide for the Accidental DBA.


Set up an SSL-encrypted connection between Sphinx and MySQL.

Sometimes users ask for something that doesn’t really make sense. On the first glance. But then you start asking and realize that the user was right, you were wrong, and it is, actually, a perfectly logical and valid use case.

PECL/mysqlnd_ms needs updates for MySQL Group Replication.

Why should you migrate from MySQL to MariaDB?

The Perfect Server – CentOS 7 (Apache2, Dovecot, ISPConfig 3).


Replace Oracle RAC with MariaDB Galera Cluster?

Fri, 2014-10-31 11:34, by erkanyanar

If you want to avoid downtime in your business, High Availability (HA) is a strong requirement which, by definition, makes it possible to access your data all the time without losing (any) data. In this blog we compare two alternatives: Oracle RAC and MariaDB Galera Cluster.

There are several options to implement High Availability. Oracle RAC is a popular and proven HA solution. HA can also be enabled for your data and systems with load balancers that make it possible to always access your data. MariaDB Galera Cluster provides similar functionality using synchronous multi-master Galera replication. It is also easier to build and proves to be more cost-effective: being open source, you may have to pay for support, but not for running the system.

Next, the designs of Oracle RAC and MariaDB Galera Cluster are compared, so you can make up your own mind.

Oracle RAC

With RAC, Oracle instances run on separate nodes, while the data is located on shared storage. All instances access the same files.

To prevent conflicts, the instances must agree on which instance is actually working on a block of data. If a node wants to change a row, it must get exclusive access to that block and store it in its cache. It therefore asks the other nodes whether they have the block. If no other node does, it gets the block from storage.

Even in the case of read access, all the nodes need to communicate in this way to get the data, just as for writing. When the block is modified, the requesting nodes get a consistent-read version of the block (which they are not allowed to modify). This adds latency due to internode communication: there will be round trips for read and write access every time a node does not have the block.

The need for communication between the nodes for every access to a table adds overhead. On the other hand, because all blocks are acquired in advance, local locks on a node, e.g. for SELECT FOR UPDATE, become cluster-wide locks.

The advantage of RAC is that losing an Oracle node does not harm the service at all. The other nodes will keep providing data (HA of the access). Therefore, you can shut down a node to perform maintenance tasks such as upgrading hardware or software, while reducing unexpected downtime. However, the shared storage - responsible for the data - is a potential single point of failure.

On Oracle RAC, distributing read or write access is not optimal because additional internode round trips add latency. The best results occur when the application only accesses a fixed part of the data per node, so that no blocks have to be moved around, but this makes the setup more complicated.

MariaDB Galera Cluster

In contrast to Oracle RAC, MariaDB Galera Cluster is a high availability setup with shared-nothing architecture. Instead of having one shared storage (SAN or NAS), every cluster member has its own copy of all the data, thus eliminating the single point of failure.

MariaDB Galera Cluster takes care of syncing data, even for new nodes. This makes managing the cluster easy: adding an empty node to the cluster is sufficient, and MariaDB Galera Cluster will provide all the data for the new node.

Unlike Oracle RAC, accessing a node to read data does not result in internode communication. Instead, communication (and so latency) happens at the time transactions are committed. This is faster than the Oracle RAC approach of acquiring all blocks in advance, but it also means conflicts are only found at the time a transaction is committed.

Conflicts are detected by the internode communication at commit time. That's why the same data should not be accessed (at least not at the same time) on different nodes, as this increases the chance of conflicts. Accessing the data on different nodes one after another causes no such conflicts; in the same situation Oracle RAC would have to copy the blocks around.

This also means that a SELECT FOR UPDATE statement can fail on commit, as it locks the data locally but not cluster-wide, so conflicts with transactions on other nodes are only discovered at the time of the commit. This is slightly different from Oracle RAC, where accessing data on another node at any later time does move the blocks.
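A sketch of how such a commit-time conflict surfaces, with an invented table and two concurrent sessions on different Galera nodes (the exact error text may vary between versions):

```sql
-- Node 1:
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;  -- locks only locally
UPDATE accounts SET balance = balance - 10 WHERE id = 1;

-- Node 2 (concurrently):
BEGIN;
UPDATE accounts SET balance = balance + 10 WHERE id = 1;
COMMIT;  -- succeeds: wins the cluster-wide certification

-- Node 1:
COMMIT;  -- fails, e.g. ERROR 1213 (40001): Deadlock found when trying to get lock
```

The application on node 1 has to be prepared to retry the transaction, which is the price paid for skipping RAC-style block acquisition up front.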

While Oracle RAC pays a lot of latency moving data blocks into the cache of every node, MariaDB Galera Cluster has an increased likelihood of failing commits.

Like Oracle RAC, single nodes in a MariaDB Galera Cluster can be taken down for maintenance without stopping the cluster. When a node rejoins the cluster, it automatically gets the missing transactions via Incremental State Transfer (IST), or it may sync all data using State Snapshot Transfer (SST). If the missing transactions are still in a (configurable) local cache on another node, IST is used; if not, SST is used.

One drawback of the current Galera version is that Data Definition Language (DDL) commands (CREATE, ALTER, DROP) are run synchronously on the cluster, so the entire cluster stalls until a DDL command finishes. That's why Magento installations running the default configuration do not scale at all on MariaDB Galera Cluster. In general, tools like pt-online-schema-change bypass this limitation. Eliminating it is on the development roadmap.

In comparison

Oracle RAC and MariaDB Galera Cluster provide similar functionality using different designs. Each one eliminates maintenance downtime for many tasks and thus gives you more freedom to run your applications.

In general, Oracle RAC has much more latency because of internode communication (including moving all requested data blocks) for read and write access. In MariaDB Galera Cluster, only the changed datasets are sent around, and only at commit time.

Despite the obvious similarities, the two databases have quite different architectures: Oracle RAC uses shared storage, while MariaDB Galera Cluster uses a less expensive shared-nothing architecture. Oracle RAC's shared storage is quite expensive (the author has typically seen EMC or NetApp used here): as it is the single point of failure, something reliable is needed.

Data on MariaDB Galera Cluster is replicated to all the nodes, which makes it easy to run the cluster spread over different regions. Consequently, your data will be safe even if your datacenter burns down. To get this level of redundancy with Oracle RAC, you need shared storage to match, e.g. a NetApp MetroCluster. Besides adding more cost, NetApp MetroCluster requires a network with a round-trip latency of less than 10ms, while MariaDB Galera Cluster even runs in cloud environments across different regions.

With Oracle RAC there are two inherent sources of latency: accessing the shared storage, and internode communication for read and write access. In MariaDB Galera Cluster, there is latency on every COMMIT, needed by the internode communication to certify and send the data to be committed.

Of course, MariaDB Galera Cluster is no one-to-one replacement for Oracle RAC. But if your application runs on either Oracle or MySQL/MariaDB, MariaDB Galera Cluster is more than an alternative.

Tags: Clustering, Galera, MariaDB Enterprise

About the Author: Erkan Yanar

Erkan Yanar is an independent consultant with a strong focus on MySQL, Docker/LXC and OpenStack. He loves to give presentations and also writes for trade magazines.


The Perfect Server - Ubuntu 14.10 with Apache, PHP, MySQL, PureFTPD, BIND, Postfix, Dovecot and ISPConfig


This tutorial shows how to install an Ubuntu 14.10 (Utopic Unicorn) server (with Apache2, BIND, Dovecot) for the installation of ISPConfig 3, and how to install ISPConfig 3. ISPConfig 3 is a webhosting control panel that allows you to configure the following services through a web browser: Apache or nginx web server, Postfix mail server, Courier or Dovecot IMAP/POP3 server, MySQL, BIND or MyDNS nameserver, PureFTPd, SpamAssassin, ClamAV, and many more. This setup covers the installation of Apache (instead of nginx), BIND (instead of MyDNS), and Dovecot (instead of Courier).


Benchmark of Load Balancers for MySQL/MariaDB Galera Cluster

October 31, 2014 By Severalnines

When running a MariaDB Cluster or Percona XtraDB Cluster, it is common to use a load balancer to distribute client requests across multiple database nodes. Load balancing SQL requests aims to optimize the usage of the database nodes, maximize throughput, minimize response times and avoid overload of the Galera nodes. 

In this blog post, we’ll take a look at four different open source load balancers, and do a quick benchmark to compare performance:

  • HAproxy by HAproxy Technologies
  • IPVS by Linux Virtual Server Project
  • Galera Load Balancer by Codership
  • mysqlproxy by Oracle (alpha)

Note that there are other options out there, e.g. MaxScale from the MariaDB team, that we plan to cover in a future post.


When to Load Balance Galera Requests


Although Galera Cluster does multi-master synchronous replication, you can safely read and write on all database nodes, provided that you comply with the following:

  • Table you are writing to is not a hotspot table
  • All tables must have an explicit primary key defined
  • All tables must run under InnoDB storage engine
  • Huge write sets must be run in batches, for example 100 batches of 1,000-row inserts rather than a single insert of 100,000 rows
  • Your application can tolerate non-sequential auto-increment values.

If the above requirements are met, you can have a pretty safe multi-node write cluster without the need to split writes across multiple masters (sharding), as you would in a MySQL Replication setup because of slave-lag problems. Furthermore, having load balancers between the application and database layers is very convenient: the load balancer can assume that all nodes are equal, and no extra configuration such as read/write splitting or promoting a slave node to master is required.

Note that if you run into deadlocks with Galera Cluster, you can send all writes to a single node and avoid concurrency issues across nodes. Read requests can still be load balanced across all nodes. 


Load Balancers




HAProxy stands for High Availability Proxy; it is an open source TCP/HTTP load balancer and proxying solution. It is commonly used to improve the performance and availability of a service by distributing the workload across multiple servers. Over the years it has become the de facto open source load balancer and is now shipped with most mainstream Linux distributions.
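As a rough sketch of how HAProxy would sit in front of a Galera cluster, a minimal haproxy.cfg section balancing MySQL traffic across three nodes might look like this (server names and addresses are placeholders, not from the benchmark):

```
# TCP-mode load balancing of MySQL client connections across three Galera nodes
listen mysql-cluster
    bind *:3306
    mode tcp
    balance leastconn
    option tcpka
    server db1 10.0.0.1:3306 check
    server db2 10.0.0.2:3306 check
    server db3 10.0.0.3:3306 check
```

`mode tcp` is important: HAProxy balances at the connection level and does not inspect the MySQL protocol, so each client connection sticks to one backend node for its lifetime.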

read more


Get a handle on your HA at Percona Live London 2014

From left: Liz van Dijk, Frédéric Descamps and Kenny Gryp

If you’re following this blog, it’s quite likely you’re already aware of the Percona Live London 2014 conference coming up in just a few days. Just in case, though (you know, if you’re still looking for an excuse to sign up), I wanted to put a spotlight on the tutorial to be delivered by my esteemed colleagues Frédéric Descamps (@lefred) and Kenny Gryp (@gryp), and myself.

The past two years at Percona we’ve been spending a substantial amount of time working with customers taking their first steps into creating Highly Available MySQL environments built on Galera. Percona XtraDB Cluster allows you to get it up and running very fast, but as any weathered “HA” DBA will tell you, building the cluster is only the beginning. (Percona XtraDB Cluster is an open source (free) high-availability and high-scalability solution for MySQL clustering.)

Any cluster technology is likely to introduce a great amount of complexity to your environment, and in our tutorial we want to show you not only how to get started, but also how to avoid many of the operational pitfalls we’ve encountered. Our tutorial, Percona XtraDB Cluster in a nutshell, will be taking place on Monday 3 November and is a full-day (6 hours) session, with an intense hands-on approach.

We’ll be covering a great deal of practical topics, such as:

  • Things to keep in mind when migrating an existing environment over to PXC
  • How to manage and maintain the cluster, keeping it in good shape
  • Load balancing requests across the cluster
  • Considerations for deploying PXC in the cloud

Planning on attending? Be sure to come prepared! Given the hands-on approach of the tutorial, make sure you bring your laptop with enough disk space (~20GB) and processing power to run at least 4 small VirtualBox VM’s.

We look forward to seeing you there!

The post Get a handle on your HA at Percona Live London 2014 appeared first on MySQL Performance Blog.


MariaDB 10.1.1: Galera support

MariaDB 10.1 server is now “Galera ready” with the latest 10.1.1 release. It includes the wsrep (write set replication) patch, which enables the server to load the wsrep provider (Galera) library and interact with it to provide multi-master synchronous replication. The patch implements hooks inside the server and storage engines to populate and apply the write sets on sender and receiver nodes in a cluster, respectively. It also adds a number of system and status variables (prefixed with wsrep) that can be used to configure and monitor the server acting as a node in a Galera cluster.

Unlike older MariaDB versions, the wsrep patch is now part of the regular (vanilla) MariaDB 10.1 server; that is, as of 10.1.1 there are no separate vanilla and Galera server versions. As a result, the same 10.1 packages (binary tarballs, deb, rpm, etc.) can be used to run MariaDB as a standalone server or as a node in a MariaDB Galera cluster.

As a standalone server, it requires no additional settings, just your usual favorite options. However, if you want to start it as a MariaDB Galera node, the following settings are mandatory:

  • wsrep_on=ON
  • wsrep_provider
  • wsrep_cluster_address
  • binlog_format=ROW
  • default_storage_engine=InnoDB
  • innodb_autoinc_lock_mode=2
  • innodb_doublewrite=1
  • query_cache_size=0

Without these additional settings, Galera replication is essentially disabled and the server functions as a standalone server with no Galera-related overhead.
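Put together, a minimal my.cnf fragment with the settings listed above might look like the following sketch (the provider path and cluster addresses are placeholders that depend on your distribution and network):

```ini
[mysqld]
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address    = gcomm://192.168.0.1,192.168.0.2,192.168.0.3
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
innodb_doublewrite       = 1
query_cache_size         = 0
```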

If you are building MariaDB 10.1 from source, the WITH_WSREP cmake switch (ON by default) controls the inclusion of the wsrep patch in the build.

MariaDB [test]> select @@version;
+----------------------------+
| @@version                  |
+----------------------------+
| 10.1.1-MariaDB-wsrep-debug |
+----------------------------+
1 row in set (0.00 sec)

MariaDB [test]> show status like 'wsrep_ready';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_ready   | ON    |
+---------------+-------+
1 row in set (0.00 sec)

MariaDB [test]> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+
1 row in set (0.00 sec)

More about MariaDB Galera cluster:


Set up an SSL-encrypted connection between Sphinx and MySQL

A Wolf, a Dolphin and a Sphinx walk into a bar… nevermind. We’ll skip the jokes. This post is about using SSL to set up a secure connection between MySQL and Sphinx. Serious stuff! The Idea: it’s simple. We’re going to use Sphinx to index some data from MySQL across a secure connection. So, we’ll [...]

GeoJSON Functions

In recent years, GeoJSON has become a popular data format for exchanging GIS data due to several factors, the primary ones being that it is easy to read, simple, and lightweight. In 5.7.5, we added support for parsing and generating GeoJSON documents via two new functions: ST_GeomFromGeoJson() and ST_AsGeoJson(). These functions make it easy to connect MySQL with other GeoJSON-enabled software and services, such as the Google Maps Javascript API.

Since GeoJSON is a JSON format, we needed a library to parse and write JSON documents. After evaluating several candidates, we ended up with rapidjson due to its features, speed, and compatible license.

The new functions support all of the geometry types specified in the GeoJSON specification, as well as collections. The parsing function also extracts the geometry information from feature objects, as shown below:

mysql> SELECT ST_AsText(ST_GeomFromGeoJson(
    -> '{
    '>    "type": "Feature",
    '>    "properties": {"Location": "Oracle HQ"},
    '>    "geometry":
    '>    {
    '>      "type": "Point",
    '>      "coordinates": [-122.262289, 37.530518]
    '>    }
    '> }'
    '> ));
+--------------------------------+
| SELECT ST_AsText(ST_GeomFr ... |
+--------------------------------+
| POINT(-122.262289 37.530518)   |
+--------------------------------+

When creating GeoJSON documents from spatial data, you can specify a maximum number of decimal digits in the output. If GeoJSON output length is a concern for you, this is a simple and efficient way to reduce it when your geometry contains a lot of coordinates with many decimals.

You can even add a URN in the OGC namespace to the GeoJSON output. The function uses the SRID from the geometry input, and outputs a short or long format OGC URN depending on the input parameters (note that the output below is formatted for readability):

mysql> SELECT ST_AsGeoJson(ST_GeomFromText("POINT(-0.127676 51.507344)", 4326), 5, 4);
+-------------------------------------------------------------------------+
| ST_AsGeoJson(ST_GeomFromText("POINT(-0.127676 51.507344)", 4326), 5, 4) |
+-------------------------------------------------------------------------+
| {                                                                       |
|   "type": "Point",                                                      |
|   "coordinates": [-0.12768, 51.50734],                                  |
|   "crs":                                                                |
|   {                                                                     |
|     "type": "name",                                                     |
|     "properties":                                                       |
|     {                                                                   |
|       "name": "urn:ogc:def:crs:EPSG::4326"                              |
|     }                                                                   |
|   }                                                                     |
| }                                                                       |
+-------------------------------------------------------------------------+

URNs in the OGC namespace are also recognized by the parsing function, and are stored in the geometry output from the function.

What about Importing GeoJSON with 3D Geometries?

You might think that your lovely collection of GeoJSON documents with 3D geometries is useless with these functions and that they can't be imported, but this is exactly what the options argument of ST_GeomFromGeoJson is for. By default, the parsing function refuses to parse documents with 3D geometries. We do, however, realize that this would reject a lot of existing documents out there, so setting the options argument to 2, 3, or 4 allows you to import them:

mysql> SELECT ST_AsText(ST_GeomFromGeoJson(
    -> '{
    '>    "type": "LineString",
    '>    "coordinates":
    '>    [
    '>      [30.1904297,69.7181067,1.11],
    '>      [27.8173828,70.2446036,1.12],
    '>      [21.9726563,69.6876184,1.12],
    '>      [18.1054688,68.7204406,1.11],
    '>      [15.9082031,67.5924750,1.12],
    '>      [14.2382813,66.2314575,1.11],
    '>      [13.1835938,65.1091482,1.10],
    '>      [11.9091797,63.6267446,1.10],
    '>      [8.65722660,61.2702328,1.11],
    '>      [7.07519530,58.0080978,1.11]
    '>    ]
    '> }', 3
    '> ));
+------------------------------------------------------------------------------+
| SELECT ST_AsText(ST_GeomFr ...                                               |
+------------------------------------------------------------------------------+
| LINESTRING(30.1904297 69.7181067,27.8173828 70.2446036,21.9726563 69.6876184,|
| 18.1054688 68.7204406,15.9082031 67.592475,14.2382813 66.2314575,13.1835938 6|
| 5.1091482,11.9091797 63.6267446,8.6572266 61.2702328,7.0751953 58.0080978    |
+------------------------------------------------------------------------------+

Option values 2, 3, and 4 all have the same effect for now, but in the future, when 3D geometries are supported, they will produce different results. We recommend using option 3, since it means documents with 3D geometries will be rejected once 3D support is introduced in MySQL. This ensures that the change of behavior won't happen silently and that you can take appropriate action in your applications.

We look forward to your feedback on the new Geohash and GeoJSON features! Please let us know if you have any comments or if you run into any bugs. Thank you for using MySQL!


MariaDB 10.1.1: triggers for RBR

Sometimes users ask for something that doesn’t really make sense. On the first glance. But then you start asking and realize that the user was right, you were wrong, and it is, actually, a perfectly logical and valid use case.

I had one of these moments when I heard about a request to make triggers work on the slave in row-based replication. Like, really? In RBR, all changes made by triggers are replicated from the master to slaves as row events. If triggers were fired on the slave, they would make their changes twice. And anyway, assuming that one only has triggers on the slave (why?), in statement-based replication the triggers would run on the slave normally, wouldn't they?

Well, yes, they would, but one cannot always use statement-based replication. If one could, RBR would never have been implemented. There are many cases that statement-based replication cannot handle correctly, and Galera requires RBR too. And as it turned out, that user indeed only had triggers on the slave: the master gets all the updates, and the slaves maintain summary tables that the triggers keep up to date.
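A slave-side summary-table trigger of the kind described might look like the following sketch (the table and column names are invented for illustration):

```sql
-- Maintains a per-day order count on the slave only; the master has no
-- trigger, so each replicated row event should bump the counter exactly once.
CREATE TRIGGER orders_summary_ai
AFTER INSERT ON orders
FOR EACH ROW
  INSERT INTO orders_per_day (day, cnt)
  VALUES (DATE(NEW.created_at), 1)
  ON DUPLICATE KEY UPDATE cnt = cnt + 1;
```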

That’s why MariaDB 10.1.1 can now optionally invoke triggers for row-based events. This is controlled by the slave_run_triggers_for_rbr system variable. This variable is defined as

MariaDB [test]> SELECT * FROM INFORMATION_SCHEMA.SYSTEM_VARIABLES WHERE
    -> VARIABLE_NAME='slave_run_triggers_for_rbr'\G
*************************** 1. row ***************************
        VARIABLE_NAME: SLAVE_RUN_TRIGGERS_FOR_RBR
        SESSION_VALUE: NULL
         GLOBAL_VALUE: NO
  GLOBAL_VALUE_ORIGIN: COMPILE-TIME
        DEFAULT_VALUE: NO
       VARIABLE_SCOPE: GLOBAL
        VARIABLE_TYPE: ENUM
     VARIABLE_COMMENT: Modes for how triggers in row-base replication on slave side will be executed. Legal values are NO (default), YES and LOGGING. NO means that trigger for RBR will not be running on slave. YES and LOGGING means that triggers will be running on slave, if there was not triggers running on the master for the statement. LOGGING also means results of that the executed triggers work will be written to the binlog.
    NUMERIC_MIN_VALUE: NULL
    NUMERIC_MAX_VALUE: NULL
   NUMERIC_BLOCK_SIZE: NULL
      ENUM_VALUE_LIST: NO,YES,LOGGING
            READ_ONLY: NO
COMMAND_LINE_ARGUMENT: REQUIRED
1 row in set (0.01 sec)

That is, it's a global variable that can be set to YES, NO, or LOGGING; the default is NO. The value YES will, naturally, make triggers run for RBR events. The value LOGGING makes them run and additionally writes the changes made by the triggers into the binary log. The mode can be changed at runtime, from the command line, or in a config file.
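For example, switching the mode at runtime might look like this short sketch:

```sql
-- Run slave-side triggers for replicated row events:
SET GLOBAL slave_run_triggers_for_rbr = YES;

-- Or additionally write the triggers' own changes to the binlog:
SET GLOBAL slave_run_triggers_for_rbr = LOGGING;
```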

Either way, when slave_run_triggers_for_rbr is not NO MariaDB slaves will invoke specific triggers for certain row events:

  • Update_row_event will invoke an UPDATE trigger
  • Delete_row_event will invoke a DELETE trigger
  • Write_row_event is a bit tricky. It is applied (yes, in MySQL too) as follows:
    • The slave tries to insert a row.
    • If the table has UNIQUE KEY constraints (or a PRIMARY KEY) and there is a conflicting row, it will be updated to have all values as in the row that should have been inserted.
    • But if the table also has FOREIGN KEY constraints or there are other UNIQUE keys in the table, the old row will be deleted and the new row inserted. Two operations instead of one, so it's slower, but this guarantees there will be no references to the old row after it disappears.

    That is, Write_row_event can invoke an INSERT trigger, a DELETE trigger, or an UPDATE trigger, depending on whether a conflicting row exists, which UNIQUE constraint was violated, and whether the table is referenced in a foreign key constraint. If you think it’s too complex, you’re right; we’ll simplify it in 10.1.2.
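As a sketch of what this means in practice, a slave-side trigger like the following (the table names t1 and audit_log are hypothetical) would fire when the slave applies an Update_row_event against t1, even though the original statement ran on the master:

```sql
-- Hypothetical slave-side audit trigger: with slave_run_triggers_for_rbr
-- enabled, a replicated Update_row_event for t1 fires this trigger on the slave
CREATE TRIGGER t1_after_update AFTER UPDATE ON t1
FOR EACH ROW
  INSERT INTO audit_log (tbl, action, changed_at)
  VALUES ('t1', 'update', NOW());
```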

In setups like this, particularly if replication continues further, with these slaves acting as masters to other slaves, there is a risk that triggers will run multiple times for the same row. To prevent this kind of mistake, MariaDB marks row events that have invoked triggers, and when these events are replicated they do not cause further trigger invocations on downstream slaves. This also means the master must not have any triggers on the tables in question; otherwise the slave-side triggers will not be invoked.

PlanetMySQL Voting: Vote UP / Vote DOWN

PECL/mysqlnd_ms needs updates for MySQL Group Replication

‘Synchronous’, multi-master, auto-everything – that’s the new MySQL Group Replication (IPC14 talk/slides) in simple words. After torturing PHP developers for decades with MySQL Replication, there is now a replication option that does not require read-write splitting: a system that knows neither slave lag nor stale reads. In theory, MySQL Group Replication is just about the perfect approach for running a standard PHP application (WordPress, Drupal, …) on a small cluster (3-7 nodes) in LAN settings. In theory, MySQL Group Replication improves both availability and performance.

MySQL Group Replication talk given today at the International PHP Conference 2014 (Munich)

Distribution Transparency

When designing replication systems there are some desirable goals that contradict each other. In a perfect world, from a developer’s perspective, a database cluster would behave exactly the same way as a single database. The user should never have to worry where and how data is stored in the cluster. Transactions executed on the cluster would provide the same properties as transactions run on a standalone database. The cluster would never return stale data (synchronous).

Synchronous replication is desirable, but it requires coordination among cluster nodes. In LAN settings coordination can be reasonably fast. MySQL Group Replication is ‘synchronous’ replication (see slides for details). Deploy it on a LAN only. On the internet, in WAN settings, when trying to replicate from Europe to Asia, things will be slow. If WAN, then either forget about distribution transparency or about performance. If WAN, go for asynchronous MySQL Replication.

The extra work different clusters cause for the developer

Synchronous and asynchronous clusters always cause some extra work for the developer. Either approach requires load balancing and failover logic. An asynchronous approach adds dealing with delays and stale reads. MySQL Replication is not only asynchronous but also has only one master (primary), which additionally requires read-write splitting.

PECL/mysqlnd_ms tries to help with all these tasks and take them over in a semi-transparent way.

PECL/mysqlnd_ms support for synchronous clusters

PECL/mysqlnd_ms is a plugin for mysqlnd. PDO_MySQL and mysqli use mysqlnd as their default library to talk to MySQL. Either API works with PECL/mysqlnd_ms, our load balancing and replication plugin.

The plugin monitors many API calls and aims to make using any kind of MySQL clusters easier. No matter what cluster: MySQL Replication, MySQL Cluster, MySQL Group Replication, 3rd party solutions. Example configurations are given in the PHP manual.

MySQL Group Replication usage task 1: load balancing

When moving an application from a single database server to a synchronous cluster there are two additional tasks: load balancing and failover. With PECL/mysqlnd_ms, load balancing does not require any code changes. The plugin intercepts your connect calls and tests whether the host you connect to matches the name of a config entry. If so, the plugin loads the config, learns from it which nodes there are, and starts load balancing connections. Should you be too lazy to change the host names in your connects to match a PECL/mysqlnd_ms config entry, then just name the config entries after your current host names.

$link = new mysqli("myapp", ...);

MySQL Group Replication usage task 2: failover

The second task is to handle the failure of a cluster node and connect to the next available one. PECL/mysqlnd_ms does that for you if you want. It picks an alternative node from the config and connects you to it.

There’s a small feature gap here. MySQL Group Replication tries to be an auto-everything solution. It automatically detects failed nodes. It also fully automates adding new nodes to the cluster. That’s cool but it means that over time the set of nodes can change and your config needs to be updated.

The PECL/mysqlnd_ms feature gap

There are two options. First, you could redeploy the config whenever the cluster changes. Second, after a failover or periodically, we could make PECL/mysqlnd_ms fetch the list of nodes from the cluster and reconfigure itself (see also here). That is finally possible because MySQL Group Replication exposes the list of nodes in a Performance Schema table.
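For example, the member list can be read with a plain query; a sketch, assuming the Performance Schema table name used by current Group Replication builds (it may differ between preview releases):

```sql
-- Who is currently in the group? Run against any group member.
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
  FROM performance_schema.replication_group_members;
```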

Once that is done, and MySQL Group Replication has reached GA, the auto-everything cluster for MySQL becomes real. Everything on the server side is already automatic. PECL/mysqlnd_ms is already GA and already handles all the additional client-side tasks – without code changes. A tiny addition is missing, and you could even get an auto-deployed PECL/mysqlnd_ms…

Happy hacking!


The post PECL/mysqlnd_ms needs updates for MySQL Group Replication appeared first on Ulf Wendel.

PlanetMySQL Voting: Vote UP / Vote DOWN

Presenting Sphinx Search at Percona Live London

It has been a while since I posted on this blog. The more in-depth articles are all posted on the Spil Games Engineering blog, and I have been so overloaded with work in the past months that I hardly have any time left.

One of the reasons for having too much on my plate was my own doing: I volunteered to be a conference committee member for Percona Live London and we, as a committee, worked our way through all proposals while you all were enjoying your holidays in the sun. Well, I must admit I did review a couple of them sitting in the sun, next to the pool enjoying a cold drink. ;)
I must say there were so many great proposals that it was really hard to choose which ones would be left out.

I also proposed a talk for Percona Live London this year and my fellow committee members liked it enough to make it to the final schedule: Serve out any page with an HA Sphinx environment.
In essence it is a MySQL case study in which I will show how we at Spil Games use Sphinx Search in three different use cases: our main search, friend search, and serving our content on any page. I’ll also describe how we handle high availability and index creation, and show benchmark results comparing MySQL and Sphinx in various use cases.

In case you are interested in Sphinx or the benchmark results: the session will be on the 4th of November at 3:10pm – 4:00pm in Cromwell 1 & 2.
Also don’t hesitate to ask me things when I’m wandering around in the corridors and rooms. Or maybe we’ll meet at the MySQL Community Dinner?
See you next week!

Tagged: conference committee, percona live london, sphinx search
PlanetMySQL Voting: Vote UP / Vote DOWN

An Ending and a Beginning: VMware Has Acquired Continuent

As of today, Continuent is part of VMware. We are absolutely over the moon about it.
You can read more about the news on the VMware vCloud blog by Ajay Patel, our new boss. There’s also an official post on our Continuent company blog. In a nutshell the Continuent team is joining the VMware Cloud Services Division. We will continue to improve, sell, and support our Tungsten products and work on innovative integration into VMware’s product line.
So why do I feel exhilarated about joining VMware? There are three reasons. 
1.     Continuent is joining a world-class company that is the leader in virtualization and cloud infrastructure solutions. Even better, VMware understands the value of data to businesses. They share our vision of managing an integrated fabric of standard DBMS platforms, both in public clouds as well as in local data centers. It is a great home to advance our work for many years to come.

2.     We can continue to support our existing users and make Tungsten even better. I know many of you have made big decisions to adopt Continuent technology that would affect your careers if they turned out badly. We now have more resources and a mandate to grow our product line. We will be able to uphold our commitments to you and your businesses.

3.     It’s a great outcome for our team, which has worked for many years to make Continuent Tungsten technology successful. This includes our investors at Aura in Helsinki, who have been dogged in their support throughout our journey.
Speaking of the Continuent team…I am so proud of what all of you have achieved. Today we are starting a new chapter in our work together. See you at VMware!
PlanetMySQL Voting: Vote UP / Vote DOWN

Why should you migrate from MySQL to MariaDB?

Wed, 2014-10-29 09:52 anatoliydimitrov

Anatoliy Dimitrov gives his take on technical reasons to migrate from MySQL to MariaDB: MariaDB offers several advantages in terms of performance and features.

First and foremost, MariaDB offers more and better storage engines. NoSQL support, provided by the Cassandra storage engine, allows you to run SQL and NoSQL in a single database system. MariaDB also supports TokuDB, which can handle big data for large organizations and corporate users.

MySQL's usual (and slow) database engines, MyISAM and InnoDB, are replaced in MariaDB by Aria and XtraDB, respectively. Aria offers better caching, which makes a difference when it comes to disk-intensive operations. Temporary tables also use Aria, which speeds up complex queries, such as those involving GROUP BY and DISTINCT. Percona's XtraDB gets rid of InnoDB's problems with slow performance and stability, especially in high-load environments.

Additional features, unmatched in MySQL, provide better monitoring through microsecond precision and extended user statistics. MariaDB also enhances the KILL command, allowing you to kill all queries for a user (KILL USER username) or to kill a single query by its ID (KILL QUERY ID query_id). MariaDB has also switched to Perl-compatible regular expressions (PCRE), which offer more powerful and precise pattern matching than standard MySQL regex support.
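A quick sketch of the enhanced KILL syntax (the user name and query ID below are, of course, placeholders):

```sql
-- Kill every connection and query belonging to one user
KILL USER appuser;

-- Kill a single query by its query ID (not thread ID); MariaDB 10.0.5+
KILL QUERY ID 1234;
```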

In addition to more features, MariaDB has also applied a number of query optimizations for queries involving disk access, join operations, subqueries, derived tables and views, execution control, and even EXPLAIN statements. To see what these mean for database performance, visit the MariaDB optimizer benchmark page.

In addition, in the latest Red Hat Enterprise Linux 7 release, MariaDB replaces MySQL in the default software repositories. This means automatic migration to MariaDB for most people who use the default distribution packages. Corporate users' migration to MariaDB will be facilitated by additional support from Red Hat.

Read each MariaDB version's release notes to learn about all the new features and improvements. Naturally, the latest version (currently 10.0) has the most new features, but if you are focused on stability, you may want to deploy an older, time-tested version such as 5.5.

For all of the reasons cited here, a large number of former MySQL users, including Google and Wikipedia, have migrated to MariaDB, and in the process have gained a modern, stable, powerful, and fast database system.

Tags: Community, Developer

About the Author

Anatoliy Dimitrov is an open source enthusiast with substantial professional experience in databases and web/middleware technologies. He is as interested in technical writing and documentation as in practical work on complex IT projects. His favourite databases are MariaDB (sometimes MySQL) and PostgreSQL. He is currently completing his master's degree in IT and aims for a PhD in Bioinformatics at his hometown University of Sofia.

PlanetMySQL Voting: Vote UP / Vote DOWN

Free Webinar! Developing MySQL Applications with Go

Google’s Go language is increasingly popular for systems programming. We use Go extensively at VividCortex and have come to appreciate its simplicity. It’s efficient and fun to program in Go, and it produces high-performance programs. Go is well suited to working with SQL databases and has excellent MySQL drivers. Join us on Tuesday, November 11 at 2 PM EST (6 PM GMT) as Baron Schwartz, our CEO and Founder, discusses the following:

  • Using Go’s database/sql package with MySQL

  • Idiomatic database/sql code and available drivers for MySQL

  • Tips and tricks to save time and frustration

You will come away with a stronger grasp of how Go and MySQL work together. Please register here, and we will send you an invite and a link to the event.

If you cannot attend, we will send you a link to the recording.

About the speaker

Baron is one of the world’s leading experts on MySQL, and has helped build and scale some of the largest web, social, gaming, and mobile properties. His award-winning tools are used by tens of thousands of large MySQL deployments, including Facebook and Twitter. His book High Performance MySQL is widely acknowledged as the definitive reference for MySQL. As an author, consultant, speaker, and conference organizer, Baron has deep connections in the MySQL and DevOps communities and beyond. Before founding VividCortex, Baron led sales, marketing, consulting, support, and software engineering at Percona, the leading MySQL consulting and support company.

PlanetMySQL Voting: Vote UP / Vote DOWN

MySQL for Excel 1.3.3 has been released

The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.3.3. This is a maintenance release for 1.3.x. It can be used for production environments.

MySQL for Excel is installed using the MySQL Installer for Windows which comes in 2 versions:

  • Full (150 MB) which includes a complete set of MySQL products with their binaries included in the download
  • Web (1.5 MB - a network install) which will just pull MySQL for Excel over the web and install it when run.

You can download MySQL Installer from our official Downloads page. The MySQL for Excel product can also be downloaded using the standalone product installer found at this link.

Several new features were added in the 1.3.x branch; for more information see What Is New In MySQL for Excel 1.3.

PlanetMySQL Voting: Vote UP / Vote DOWN

The Perfect Server – CentOS 7 (Apache2, Dovecot, ISPConfig 3)


This tutorial shows how to prepare a CentOS 7 x86_64 server for the installation of ISPConfig 3 and how to install ISPConfig 3. ISPConfig 3 is a webhosting control panel that allows you to configure the following services through a web browser: Apache web server, Postfix mail server, MySQL, BIND nameserver, PureFTPd, SpamAssassin, ClamAV, Mailman, and many more.

PlanetMySQL Voting: Vote UP / Vote DOWN