Planet MySQL

Developing modern applications using MySQL seminar

I recently completed a four-country tour of Baltic Sea countries presenting this seminar series.

This seminar of four presentations provided a detailed review of the essential lifecycle components for developing a successful software application and offered a checklist for your company to review the design, development, deployment and support practices of your MySQL business applications. Presentations included:

NOTE: More detailed posts, including links, will be released soon.

Effective MySQL Architecture and Design Practices
Download PDF Presentation

Correctly designing a MySQL architecture has a huge impact on how your system can grow to meet future needs. This presentation covered key business and technology questions that should be asked and discussed before building a MySQL-based system.

Effective Software Development with MySQL
Download PDF Presentation

While the MySQL RDBMS is essential for storing your information, how your application interacts with the applicable business logic is what makes each company unique. Ensuring your developers' practices can deliver reliable and maintainable systems cost-effectively is important for success.

Upcoming MySQL features for modern applications
Download PDF Presentation

Are you running a legacy version of MySQL that is no longer supported?  Oracle has provided three versions since purchasing Sun Microsystems in 2010.  This presentation discussed the current and upcoming features of the various MySQL versions and provided insights into why these new features are important for developing modern applications.

Effective Web Site Operations
Download PDF Presentation

It is very important for companies to be able to support and manage their MySQL installation and related technology stack. This presentation covered being prepared for software, OS or dependency upgrades; ensuring the right backup and recovery procedures, with correct testing and verification to avoid common pitfalls; and monitoring, alerting and instrumentation options.

MySQL Overview
Download PDF Presentation

This overview of MySQL provided a background of the history, ecosystem, current community and commercial products available.

Acknowledgements

A special thank you to all the organizers and sponsors that made each of these events possible. It takes a lot of work for organizers and event locations to enable attendees to get the best event possible.

  • Heli Helskyaho, EMEA Oracle User Group President
  • Petri Koistinen from the Finland MySQL Users Group
  • Solinor, sponsor and host for the Finland event
  • Olle Nilsson and Thomas Johansson from the Sweden MySQL Users Group, and Ted Wennmark from Oracle Sweden
  • B3IT, sponsor and host for the Sweden event
  • Andrejs Vorobjovs from the Oracle Users Group Latvia
  • BDA, sponsor and host for the Latvia event, and BCACC, our beer sponsor for the after-event
  • Ilmar Kerm from the Oracle Users Group Estonia
  • TransferWise, sponsor and host for the Estonia event
  • Vikka Lira, Oracle ACE Program


Putting MySQL Cluster in a container

To get more familiar with Docker, and to create a test setup for MySQL Cluster, I created Docker images for the various components of MySQL Cluster (a.k.a. NDB Cluster).

At first I created a Fedora 20 container and ran all components in one container. That worked and is quite easy to set up, but it's not how one is supposed to use Docker.

So I created Dockerfiles for all components, plus one base image.

The base image:
  • Contains the MySQL Cluster software
  • Has libaio installed
  • Has a mysql user and group
  • Serves as a base for the other images
The management node (ndb_mgmd) image:
  • Has ndb_mgmd as entrypoint
  • Has a config.ini for the cluster config
  • Should be started with "--name=mymgm01"
The data node (ndbmtd) image:
  • Has ndbmtd as entrypoint
  • Uses the connect string: "host=${MGM01_PORT_1186_TCP_ADDR}:1186"
  • Should be started with "--link mymgm01:mgm01" to allow it to connect to the management node.
  • You should create 2 containers of this type to create a nodegroup of 2 nodes.
The API node (mysqld) image:
  • Has a my.cnf
  • Runs mysqld_safe
  • Should be started with "--link mymgm01:mgm01" to allow it to connect to the management node.
  • The ndb-connectstring is passed as a parameter to mysqld_safe because it comes from an environment variable; it's not possible to use environment variables from within my.cnf. Docker is supposed to also update /etc/hosts, but that didn't work for me.
  • You should expose port 3306 for your application
The management client (ndb_mgm) image:
  • Runs ndb_mgm as entrypoint
  • Should be started with "--link mymgm01:mgm01" to allow it to connect to the management node.
  • Running the ndb_mgm in a container removes the need to publish port 1186 on the management server. More info here.
  • You can override the entrypoint to run other NDB utilities like ndb_desc or ndb_select_all
The images can be found on https://registry.hub.docker.com/u/dveeden/mysqlcluster72/
The Dockerfiles can be found on https://github.com/dveeden/dve-docker  
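As an illustration, starting the whole cluster from these images might look like the transcript below. The image tag names after the colon are my assumption; check the repository above for the actual tags. These commands need a running Docker daemon.

```shell
# Management node first, named so the other containers can --link to it:
docker run -d --name=mymgm01 dveeden/mysqlcluster72:mgmd

# Two data nodes, forming one nodegroup of two:
docker run -d --name=myndb01 --link mymgm01:mgm01 dveeden/mysqlcluster72:ndbmtd
docker run -d --name=myndb02 --link mymgm01:mgm01 dveeden/mysqlcluster72:ndbmtd

# API node (mysqld), with port 3306 published for applications; the --link
# provides the MGM01_PORT_1186_TCP_ADDR variable used for the connect string:
docker run -d --name=mysql01 --link mymgm01:mgm01 -p 3306:3306 dveeden/mysqlcluster72:mysqld

# Interactive management client; override the entrypoint to run other
# NDB utilities such as ndb_desc or ndb_select_all:
docker run -it --link mymgm01:mgm01 dveeden/mysqlcluster72:mgm
docker run -it --link mymgm01:mgm01 --entrypoint=ndb_desc dveeden/mysqlcluster72:mgm
```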

Possible improvements
  • Use hostnames in the config.ini instead of IPv4 addresses. This makes the setup more dynamic, but it means updating /etc/hosts or fiddling with DNS.
  • Use VOLUME in the Dockerfiles to make working with data easier.


Galera data on Percona Cloud Tools (and other MySQL monitoring tools)

I was talking with a Percona Support customer earlier this week who was looking for Galera data on Percona Cloud Tools. (Percona Cloud Tools, now in free beta, is a hosted service providing access to query performance insights for all MySQL uses.)

The customer mentioned they were already keeping track of some Galera stats in Cacti and, given they were inclined to use Percona Cloud Tools more and more, they wanted to know if it already supported Percona XtraDB Cluster. My answer was: “No, not yet: you can install agents on each node (the regular way on the first node, then manually on the other nodes… and when prompted, say ‘No’ to creating the MySQL user and provide the one you’re using already) and monitor them as autonomous MySQL servers – but the concept of cluster, and especially the ‘Galera bits,’ has yet to be implemented there.”

Except I was wrong.

By “concept of cluster” I mean supporting the notion of group instances, which should allow a single cluster-wide view for metrics and query reports, such as the slow queries (which are recorded locally on the node where the query was run and thus observed in a detached way). This still needs to be implemented indeed, but it’s on the roadmap.

The “Galera bits” I mentioned above are the various “wsrep_” status variables. In fact, those refer to the Write Set REPlication patches (based on the wsrep API), a set of hooks applied to the InnoDB/XtraDB storage engine and other components of MySQL that modify the way replication works (to put it in a very simplified way), which in turn are used by the Galera library to provide a “generic Synchronous Multi-Master replication plugin for transactional applications.” A patched version of Percona Server together with the Galera library composes the base of Percona XtraDB Cluster.

As I found out only now, Percona Cloud Tools does collect data from the various wsrep_ variables and it is available for use – it’s just not shown by default. A user only needs to add a dashboard/chart manually on PCT to see these metrics:


Now, I need to call that customer …

Monitoring the cluster

Since I’m touching this topic I thought it would be useful to include some additional information on monitoring a Galera (Percona XtraDB Cluster in particular) cluster, even though most of what I mention below has already been published in different posts here on the MySQL Performance Blog. There’s a succinct documentation page bearing the same title as this section that indicates the main wsrep_ variables you should monitor to check the health status of the cluster and how well it’s coping with load over time (performance). Remember you can get a grasp of the full set of variables at any time by issuing the following command from one (or each one) of the nodes:

mysql> SHOW GLOBAL STATUS LIKE "wsrep_%";

And for a broader and real time view of the wsrep_ status variables you can use Jay Janssen’s myq_gadgets toolkit, which he modified a couple of years ago to account for Galera.

There’s also a specific Galera template available in our Percona Monitoring Plugins (PMP) package that you can use on your Cacti server. That would cover “how well the cluster copes with load over time,” providing historical graphing. And while there isn’t a Galera-specific plugin for Nagios in PMP, Jay explains in another post how you can customize pmp-check-mysql-status to “check any status variable you like,” describing his approach to keeping a cluster’s “health status” in check by setting alerts on specific stats on a per-node basis.
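A minimal sketch of that per-node alerting idea, assuming a desired cluster size of 3 (the thresholds and the helper name are hypothetical; a real check would query a live server as the comment shows):

```shell
#!/bin/sh
# Per-node Galera health check sketch, in the spirit of customizing
# pmp-check-mysql-status. With a running node you would feed it live output:
#   check_galera "$(mysql -BNe 'SHOW GLOBAL STATUS LIKE "wsrep_%"')"
check_galera() {
    status="$1"
    # Pull the two stats we alert on out of the name/value listing:
    state=$(printf '%s\n' "$status" | awk '$1 == "wsrep_local_state_comment" { print $2 }')
    size=$(printf '%s\n' "$status" | awk '$1 == "wsrep_cluster_size" { print $2 }')
    if [ "$state" = "Synced" ] && [ "${size:-0}" -ge 3 ]; then
        echo "OK cluster_size=$size"
    else
        echo "CRITICAL state=$state cluster_size=$size"
    fi
}

# Example with a captured status sample:
sample="wsrep_cluster_size 3
wsrep_local_state_comment Synced"
check_galera "$sample"   # → OK cluster_size=3
```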

VividCortex recently added a set of templates for Galera in their product and you can also rely on Severalnines’ ClusterControl monitoring features to get that “global view” of your cluster that Percona Cloud Tools doesn’t yet provide. Even though ClusterControl is a complete solution for cluster deployment and management, focusing on the automation of the whole process, the monitoring part alone is particularly interesting as it encompasses cluster-wide information in a customized way, including the “Galera bits”. You may want to give it a try as the monitoring features are available in the Community version of the product (and if you’re a Percona customer with a Support contract covering Percona XtraDB Cluster, then you’re entitled to get support for it from us).

One thing that differentiates the monitoring solutions above: while you can install Cacti, Nagios and ClusterControl as servers in your own infrastructure, Percona Cloud Tools and VividCortex are hosted, cloud-based services. Having said that, neither uploads sensitive data to the cloud, and both provide options for query obfuscation.

Summary

Contrary to what I believed, Percona Cloud Tools does provide support for the “Galera bits” (the wsrep_ status variables), even though it has yet to implement support for the notion of group instances, which will allow a cluster-wide view for metrics and query reports. You can also rely on the Galera template for Cacti provided by Percona Monitoring Plugins for historical graphing, and some clever use of Nagios’ pmp-check-mysql-status for customized cluster alerts. VividCortex and ClusterControl also provide monitoring for Galera.

Percona Cloud Tools, now in free beta, is a hosted service providing access to query performance insights for all MySQL uses. After a brief setup, unlock new information about your database and how to improve your applications. Sign up to request access to the beta today.  

The post Galera data on Percona Cloud Tools (and other MySQL monitoring tools) appeared first on MySQL Performance Blog.



Does MySQL need a mentoring program?

Does MySQL need a mentoring program? I get calls, emails, and other requests for trained MySQL DBAs and developers. Human resources staff, managers, team leads, and entrepreneurs have the need but cannot find the bodies. It is easy enough to download MySQL, get it running, and use it. But the next steps seem problematic for many. There are programs like MySQL Marinate and Girl Develop It to provide some hands-on help for beginners. Autodidacts can find tons of MySQL books and online information. But how do we take the beginners and get them to intermediate or beyond?

How do we support these newcomers, give them a hand if needed, a shoulder to cry on, or just someone who has been there before to bounce ideas around with when needed? How do we pull them into social networks to warn them of pitfalls, pass on information about new technologies, or just be there as a friendly voice when the air movement device is being impacted by non-optimal material? How do we pass on best practices, professional guidance, and the norms of our community? There is only so much forums, IRC, and Stack Overflow can handle. Local user groups are good, if you have a local user group.

A good place to start is to see what other Open Source projects are doing. PHP Mentoring is a formal, personal, long-term, peer-to-peer mentorship organization focused on creating networks of skilled developers from all walks of life. Read their info and let me know if you think the MySQL Community needs something similar.

Being a mentor has benefits too. There is an old saying that you really do not know a subject until you can pass your knowledge on to someone else. It also helps bring along someone who could replace you if you decide to climb the corporate ladder. Plus, you never know what your fledgling might teach you.

So do we need a MySQL mentoring program?




Percona Server 5.6.20-68.0 is now available

Percona is glad to announce the release of Percona Server 5.6.20-68.0 on August 29, 2014. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.20, including all the bug fixes in it, Percona Server 5.6.20-68.0 is the current GA release in the Percona Server 5.6 series. All of Percona’s software is open-source and free. Complete details of this release can be found in the 5.6.20-68.0 milestone on Launchpad.

New Features:

  • Percona Server has implemented the MySQL 5.7 SHOW SLAVE STATUS NONBLOCKING syntax for the Lock-Free SHOW SLAVE STATUS feature. The existing SHOW SLAVE STATUS NOLOCK is kept as a deprecated alias and will be removed in Percona Server 5.7. There were no functional changes to the feature.
  • The Percona Server Audit Log Plugin now supports JSON and CSV formats. The format choice is controlled by the audit_log_format variable.
  • The Percona Server Audit Log Plugin now supports streaming the audit log to syslog.
  • The TokuDB storage engine package has been updated to version 7.1.8.
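As a sketch, the audit log features above might be enabled with a my.cnf fragment like this (audit_log_handler is the option Percona's plugin uses to select syslog output; verify the option names and accepted values against the documentation for your version):

```
[mysqld]
# Log audit records as JSON (CSV is also supported, alongside the OLD/NEW XML formats)
audit_log_format  = JSON
# Stream audit records to syslog instead of a log file
audit_log_handler = SYSLOG
```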

Bugs Fixed:

  • Querying INNODB_CHANGED_PAGES table with a range condition START_LSN > x AND END_LSN < y would lead to a server crash if the range was empty with x greater than y. Bug fixed #1202252 (Jan Lindström and Sergei Petrunia).
  • SQL statements of other connections were missing from the output of SHOW ENGINE INNODB STATUS, in the LATEST DETECTED DEADLOCK and TRANSACTIONS sections. This bug was introduced by the Statement Timeout patch in Percona Server 5.6.13-61.0. Bug fixed #1328824.
  • Some of the TokuDB distribution files were missing from the TokuDB binary tarball. Bug fixed #1338945.
  • With the XtraDB changed page tracking feature enabled, queries from the INNODB_CHANGED_PAGES table could read bitmap data whose write was still in progress. This would cause the query to fail with an ER_CANT_FIND_SYSTEM_REC error and a warning printed to the server error log. The workaround has been to add an appropriate END_LSN-limiting condition to the query. Bug fixed #1193332.
  • mysqld-debug was missing from Debian packages. This regression was introduced in Percona Server 5.6.16-64.0. Bug fixed #1290087.
  • Fixed a memory leak in Slow Query Log Rotation and Expiration. Bug fixed #1314138.
  • The audit log plugin would write log with XML syntax errors when OLD and NEW formats were used. Bug fixed #1320879.
  • Combination of Log Archiving for XtraDB, XtraDB changed page tracking, and small InnoDB logs could hang the server on the bootstrap shutdown. Bug fixed #1326379.
  • --tc-heuristic-recover option values were broken. Bug fixed #1334330 (upstream #70860).
  • If the bitmap directory had a bitmap file sequence with a start LSN one less than the start LSN of the previous file, a debug build would assert when queries were run on the INNODB_CHANGED_PAGES table. Bug fixed #1342494.

Other bugs fixed: #1337247, #1350386, #1208371, #1261341, #1151723, #1182050, #1182068, #1182072, #1184287, #1280875, #1338937, #1334743, #1349394, #1182046, #1182049, and #1328482 (upstream #73418).
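To illustrate the END_LSN workaround mentioned for the changed page tracking bug, a bounded query might look like this (the LSN values are placeholders; run against a live server):

```
-- Hypothetical example: always bound END_LSN so the bitmap reader cannot
-- scan data whose write is still in progress.
SELECT space_id, page_id, start_lsn, end_lsn
  FROM INFORMATION_SCHEMA.INNODB_CHANGED_PAGES
 WHERE start_lsn > 9000000
   AND end_lsn   < 9500000;
```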

Release notes for Percona Server 5.6.20-68.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

The post Percona Server 5.6.20-68.0 is now available appeared first on MySQL Performance Blog.



Percona Server 5.5.39-36.0 is now available

Percona is glad to announce the release of Percona Server 5.5.39-36.0 on August 29, 2014 (downloads are available here and from the Percona Software Repositories). Based on MySQL 5.5.39, including all the bug fixes in it, Percona Server 5.5.39-36.0 is now the current stable release in the 5.5 series. All of Percona’s software is open-source and free. Complete details of the release can be found in the 5.5.39-36.0 milestone at Launchpad.

New Features:

Bugs Fixed:

  • Querying INNODB_CHANGED_PAGES with a range condition START_LSN > x AND END_LSN < y would lead to a server crash if the range was empty with x greater than y. Bug fixed #1202252 (Jan Lindström and Sergei Petrunia).
  • With the XtraDB changed page tracking feature enabled, queries from the INNODB_CHANGED_PAGES table could read bitmap data whose write was still in progress. This would cause the query to fail with an ER_CANT_FIND_SYSTEM_REC error and a warning printed to the server error log. The workaround has been to add an appropriate END_LSN-limiting condition to the query. Bug fixed #1346122.
  • mysqld-debug was missing from Debian packages. This regression was introduced in Percona Server 5.5.36-34.0. Bug fixed #1290087.
  • Fixed a memory leak in Slow Query Log Rotation and Expiration. Bug fixed #1314138.
  • The audit log plugin would write log with XML syntax errors when OLD and NEW formats were used. Bug fixed #1320879.
  • A server built with system OpenSSL support, such as the distributed Percona Server binaries, had SSL-related memory leaks. Bug fixed #1334743 (upstream #73126).
  • If the bitmap directory had a bitmap file sequence with a start LSN one less than the start LSN of the previous file, a debug build would assert when queries were run on the INNODB_CHANGED_PAGES table. Bug fixed #1342494.

Other bugs fixed: #1337324, #1151723, #1182050, #1182072, #1280875, #1182046, #1328482 (upstream #73418), and #1334317 (upstream #73111).

Release notes for Percona Server 5.5.39-36.0 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.39-36.0 is now available appeared first on MySQL Performance Blog.



The InnoDB Mutex, part 3

I repeated tests from part 1 and part 2 using a system that has an older/smaller CPU but more recent versions of glibc & Linux. In this case the CPU has dual sockets with 6 cores per socket and 24 vCPUs with HT enabled. The host uses Fedora, glibc 2.17 and Linux kernel 3.11.9-200. Tests were run up to 512 threads. There was an odd segfault at 1024 threads with the TTASFutexMutex that I chose not to debug. And tests were run for 1, 4 and 12 mutexes rather than 1, 4 and 16 because of the reduced vCPU count. The lock hold duration was ~6000 nsecs rather than ~4000 because of the different CPU.

My conclusions from this platform were less strong than from the previous tests.
  • The InnoDB syncarray mutex still has lousy behavior at high concurrency, but it is less obvious here because I did not run a test for 1024 threads.
  • pthread default is usually better than pthread adaptive, but in at least one case pthread adaptive was much better.
  • The new custom InnoDB mutex, TTASFutexMutex, was occasionally much worse than the alternatives. From looking at the code in 5.7, the choice to use it is a compile-time decision. If only to figure out the performance problems, this choice should be a my.cnf option, and it isn't clear to me that TTASFutexMutex is ready for prime time.
Graphs for 0 nsec lock hold

Graphs for 1000 nsec lock hold


Graphs for 6000 nsec lock hold




ConFoo is looking for speakers

ConFoo is currently looking for web professionals with deep understanding of PHP, Java, Ruby, Python, DotNet, HTML5, Databases, Cloud Computing, Security and Mobile development to share their skills and experience at the next ConFoo. Submit your proposals between August 25th and September 22nd.

ConFoo is a conference for developers that has built a reputation as a prime destination for exploring new technologies, diving deeper into familiar topics, and experiencing the best of community and culture.

  • ConFoo 2015 will be hosted on February 18-20 in Montreal, at the Hilton Bonaventure Hotel.
  • We take good care of our speakers by covering most expenses including travel, accommodation, lunch, full conference ticket, etc.
  • Presentations are 35min + 10min for questions, and may be delivered in English or French.
  • ConFoo is an open environment where everyone is welcome to submit. We are simply looking for quality proposals by skilled and friendly people.

If you would simply prefer to attend the conference, we have a $290 discount until October 13th.



HowTo Video: Using MySQL ASP.NET MVC Project Wizard

MySQL for Visual Studio 1.2.x recently reached GA. One of the main features in this version is the new MySQL ASP.NET MVC Wizard. This wizard creates a new ASP.NET MVC application from an existing MySQL database, along with a data entity model using Entity Framework version 5 or 6. This video shows the feature in action and demonstrates how to generate a complete MVC application, including authentication, registration and browsable pages for each table in your MySQL database, without writing any code.

The InnoDB Mutex, part 2

I updated the innotsim client to include a new custom mutex from InnoDB in MySQL 5.7, called TTASFutexMutex in the source. I also fixed a bug in innotsim: the benchmark timer was started too soon, so these results are more accurate than those in the original post.

The original post has much more detail on the tests. In the graphs below, TTASFutexMutex has the label inno futex and the old-style custom mutex that uses a sync array has the label inno syncarray. My updated conclusions are:
  • TTASFutexMutex is worse than pthread default when the lock hold duration is short, but gets better as the lock hold duration is increased. I wonder whether this conclusion will hold across Linux releases.
  • The InnoDB mutex that uses a sync array continues to show its dark side at high concurrency, but is pretty good at lower concurrency.
  • I am not sure that pthread adaptive should be used at all. It is frequently worse, occasionally much worse, and in the best case not much better than pthread default. But I am using glibc 2.12, which was released in 2010, and I don't know yet whether my conclusions hold for a recent glibc release. Alas, this mutex type appears to be widely used by both InnoDB and non-InnoDB code in mysqld (grep for MY_MUTEX_INIT_FAST). I think more benchmark results are required to figure this out. It might help to have a my.cnf option to prevent it from being used.
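As context for the comparison: "pthread adaptive" is not a separate lock implementation in the application; it is selected through a glibc-specific mutex type that spins briefly in user space before sleeping in the kernel. A minimal sketch of requesting it explicitly, roughly what MY_MUTEX_INIT_FAST ends up selecting in mysqld (the function name here is mine, and PTHREAD_MUTEX_ADAPTIVE_NP is a non-portable glibc extension):

```c
#define _GNU_SOURCE
#include <pthread.h>

/* Initialize a glibc adaptive mutex. Returns 0 on success, else an
 * errno-style code. Omit the attr (or use NULL) for pthread default. */
int init_adaptive_mutex(pthread_mutex_t *m) {
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;
    rc = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```

Because the type is baked in at mutex initialization, switching mysqld between default and adaptive today means recompiling, which is why a runtime option would make benchmarking this much easier.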

Graphs for 0 nsec lock hold

Graphs for 1000 nsec lock hold 

Graphs for 4000 nsec lock hold 





Second MySQL User Group Korea meeting

We are pleased to announce the second MySQL User Group meeting in Seoul, South Korea.

This time Daesang Information & Technology is sponsoring the venue and snacks. Please find more information below.

Date & Time: 29th Aug, 2014, 7 pm to 9 pm

Venue: TOZ Kangnam tower, 2nd floor

Agenda:

1. MySQL Fabric intro

2. MySQL Cluster intro & demo

3. Select MUG Staff

See more details

We are looking forward to talking to you! 



OpenStack Trove Day 2014 Recap: MySQL and DBaaS

OpenStack Trove Day

I just returned from a week in Cambridge, Massachusetts where I was attending the OpenStack Trove Day and the Trove mid-cycle meetup, both sponsored by the great folks at Tesora.

I am relatively new to the OpenStack and Trove arenas so this was a fantastic opportunity for me to learn more about the communities, the various components within OpenStack, and what part Trove plays. I found the entire event very worthwhile – I met a lot of key people in the community, learned more about Trove and its potential, and in general felt a great energy and excitement surrounding Trove and OpenStack as a whole.

There were more than 120 attendees at Trove Day. That is almost four times the initial estimate! I would call that a success. There were seven very high quality talks, covering material ranging from new and upcoming features in Trove, to deep inspections of how it is currently used at several big-name companies, to an investor's perspective on the OpenStack market. There were also two panel-style discussions that covered a lot of ground, with all participants being 'guys on the ground' actively working with OpenStack deployments, including one of my fellow Perconians, Mr. Tim Sharp.

One of the main takeaways for me from the entire day was the forward looking adoption estimates for Trove. This came up over and over through the various talks and panels. There seems to be a tremendous amount of interest in Trove deployments for late 2014/2015 but very few actual live users today. There also seems to be a bit of a messaging issue and confusion amongst potential users as to what Trove really is and is not. Simply reading the Trove Mission Statement should quickly clarify:

The OpenStack Open Source Database as a Service Mission: To provide scalable and reliable Cloud Database as a Service provisioning functionality for both relational and non-relational database engines, and to continue to improve its fully-featured and extensible open source framework.

So allow me to expand on that a bit based on some specific comments or questions that I overheard:
- Trove is NOT a database abstraction layer nor any sort of database unification tool; all applications still communicate with their respective datastores directly through their native APIs.
- Trove is NOT a database monitoring, management or analysis tool; all of your favorite debugging and monitoring tools like Percona Toolkit will still work exactly as advertised, and yes, you do need a monitoring tool.
- Although Trove does have some useful backup scheduling options, Trove is NOT a complete backup and recovery tool that can accommodate every backup strategy; you may still use 3rd party options such as scripting your own around Percona XtraBackup or make your life a lot easier and sign up for the Percona Backup Service.
- Trove IS a very nice way to add resource provisioning for many disparate datastores and has some ‘smarts’ built in for each. This ensures a common user experience when provisioning and managing datastore instances.

To that final point, our friends at Tesora introduced their new Database Certification Program at Trove Day. This new program will ensure a high level of compatibility between the various participating database vendors and the Trove project. Of course, Percona Server has already been certified.

I see the future of Trove as being very bright with a huge potential for expansion into other areas, once it is stabilized. I am very excited to begin contributing to this project and watch it grow.

Until next time…

The post OpenStack Trove Day 2014 Recap: MySQL and DBaaS appeared first on MySQL Performance Blog.



Resources for Database Clusters: Performance Tuning for HAProxy, Support for MariaDB 10, Technical Blogs & More

August 28, 2014 By Severalnines

Check Out Our Latest Resources for MySQL, MariaDB & MongoDB Clusters

Here is a summary of resources & tools that we’ve made available to you in the past weeks. If you have any questions on these, feel free to contact us!

 

New Technical Webinars

Performance Tuning of HAProxy for Database Load Balancing

09 September 2014 - with Baptiste Assmann of HAProxy Technologies

Do you know what HAProxy can tell you about your application and database instances? Do you know the difference between short-lived connections and persistent connections, and how they affect the performance and availability of your applications? As a follow-up to our previous webinar on MySQL Load Balancing and HAProxy, we are glad to organize a deep dive into Performance Tuning of HAProxy.

Register here!

Automation & Management of MariaDB Galera Clusters: European Webinars with SkySQL - The MariaDB Company

30 Sept. (English) / 07 Oct. (French) / 08 Oct. (German) - with the MariaDB Team

MariaDB Galera Cluster involves more effort and resource to administer than standalone MariaDB systems. In this webinar, we will give you practical advice on how to introduce clusters into your MariaDB environment, automate deployment and make it easier for operational staff to manage and monitor the cluster using ClusterControl.

Register here!

Technical Webinar - Replay

Webinar Replay & Slides: How To Set Up SQL Load Balancing with HAProxy

In this webinar, we covered the concepts around the popular open-source HAProxy load balancer, and demonstrated how to use it with your SQL-based database clusters. We also discussed HA strategies for HAProxy with Keepalived and Virtual IP.

View the replay & read the slides!



Baron Schwartz to Speak on the Power of Regressions

Baron Schwartz, our CEO and founder, is speaking at the Performance and Capacity Conference by CMG on November 6th. CMG, the Computer Measurement Group, is a not-for-profit organization that focuses on the efficiency and scalability of IT service delivery through measurement, quantitative analysis and forecasting. It has been a reputable, worldwide resource for best practices in the IT profession since 1974.

Baron will be contributing to these efforts when he presents on “Using Regressions to Estimate Query Resource Consumption.” VividCortex uses regression to generate metrics for many things that cannot be measured directly, allowing us to provide robust sets of data while reducing the measurement intrusion effect on systems. In particular, our customers are able to see the estimated CPU usage of a group of queries and better plan for capacity.

If you will not be attending the conference but are interested in the method and application, read the whitepaper. It describes the process in detail and even provides sample data and code for reproducible results.

Abstract

Relational database administrators are often interested in measuring the resources a statement or query consumes. An example is CPU: we would like to know how much CPU time a query uses. Likewise, if a server’s CPU utilization is high, we would like to know which queries use the most CPU time. Query resource consumption is not possible to measure directly in many cases, and not always desirable even if possible. This paper demonstrates how a variation of linear regression, which we call weighted linear regression, can reveal the relationship between metrics such as query execution time and almost arbitrary metrics, both inside and outside of the RDBMS.
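The weighted regression at the heart of the method can be illustrated with a minimal single-variable sketch. This is my own simplification (regression through the origin with per-observation weights), not the code from the whitepaper, which handles the full multi-metric case:

```c
#include <stddef.h>

/* Weighted least squares through the origin for one predictor:
 * minimizes sum(w[i] * (y[i] - beta*x[i])^2), giving
 * beta = sum(w*x*y) / sum(w*x*x).
 * Here x might be, e.g., a query's execution time and y the server's
 * CPU time, with weights de-emphasizing noisy observations. */
double weighted_slope(const double *x, const double *y,
                      const double *w, size_t n) {
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < n; i++) {
        num += w[i] * x[i] * y[i];
        den += w[i] * x[i] * x[i];
    }
    return den != 0.0 ? num / den : 0.0;
}
```

The estimated beta is the resource cost per unit of the observed metric, which is how a per-query CPU estimate can be produced without measuring CPU inside the server.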



Trawling the binlog with FlexCDC and new FlexCDC plugins for MySQL

Swanhart-Tools includes FlexCDC, a change data capture tool for MySQL. FlexCDC follows a server’s binary log and usually writes “changelogs” that track the changes to tables in the database. I say usually because the latest version of Swanhart-Tools (only in github for now) supports FlexCDC plugins, which allow you to send the updates to a remote data source, or to any other place of your liking.  You can find out more about FlexCDC basics in a previous blog post.

Please note that FlexCDC still needs source and destination instances defined in the configuration, even when using plugins. This is because the FlexCDC state (how far into which binary log FlexCDC has progressed, and which tables are being captured) is stored in the “dest”. Normally when using a plugin, the source and destination instances will be the same. FlexCDC will create a ‘flexviews’ database with a number of state tables in the destination instance. This also means you still have to use the create_mvlog.php or add_table.php scripts, or Flexviews’ create_mvlog(…) procedure, to mark which tables to capture! See the previous blog post about FlexCDC.

When you create the mvlog, a changelog table is still created in the dest, just like when not using a plugin. This is because the INFORMATION_SCHEMA is used to get column datatypes and additional information (such as whether an int is signed or unsigned), and this lookup is done against the table in the dest. The lookup is needed because mysqlbinlog, the utility used to scrape the binlog, produces ambiguous output for large signed integers (it provides both the signed and unsigned versions), so FlexCDC must figure out which one to choose from the actual DDL of the changelog table. FlexCDC can’t look at the DDL of the source table, though, because the consumer may be behind, and the current structure may not match the structure of the rows in the log.

The new plugin system allows you to do a lot of nifty things like:

  • Replicate to external databases
  • Publish changes to a message queue (this is like Facebook’s Wormhole)
  • Keep a remote cache server in sync
  • and more…

The latest version of Swanhart-Tools includes an Example plugin (in flexviews/consumer/include/example_plugin.php) that simply prints the events that come through it, not logging them into the changelog table at all. There is an example of the output at the end of the post.

The example plugin looks like this:

<?php

class FlexCDC_Plugin {
    static function begin_trx($uow_id, $gsn) {
        echo "START TRANSACTION: trx_id: $uow_id, Prev GSN: $gsn";
    }
    static function commit_trx($uow_id, $gsn) {
        echo "COMMIT: trx_id: $uow_id, Last GSN: $gsn";
    }
    static function rollback_trx($uow_id) {
        echo "ROLLBACK: trx_id: $uow_id";
    }
    static function insert($row, $db, $table, $trx_id, $gsn) {
        echo "TRX_ID: $trx_id, Schema:$db, Table: $table, DML: INSERT, AT: $gsn";
        print_r($row);
    }
    static function delete($row, $db, $table, $trx_id, $gsn) {
        echo "TRX_ID: $trx_id, Schema:$db, Table: $table, DML: DELETE, AT: $gsn";
        print_r($row);
    }
    static function update_before($row, $db, $table, $trx_id, $gsn) {
        echo "TRX_ID: $trx_id, Schema:$db, Table: $table, DML: UPDATE (OLD), AT: $gsn";
        print_r($row);
    }
    static function update_after($row, $db, $table, $trx_id, $gsn) {
        echo "TRX_ID: $trx_id, Schema:$db, Table: $table, DML: UPDATE (NEW), AT: $gsn";
        print_r($row);
    }
}

Important Note: You must define all seven of these functions in your plugin, even if you do not plan to have actions for each of the callbacks; just leave the function body empty to take no action (the call is simply a no-op in that case). Note that the plugin functions must be declared STATIC. This is due to the way that FlexCDC calls the functions.

Transaction state callbacks
There are three callback functions which notify the plugin of changes in transaction state. Before I go into what they do, I want to note the $trx_id and $gsn parameters which are present in every callback. Each transaction is assigned a monotonically increasing transaction identifier. This sequence uniquely identifies each transaction that FlexCDC processes. In addition, each row change is assigned a unique sequence number which FlexCDC calls the Generic Sequence Number (or GSN).

As you can see, the begin_trx(…) callback (called when a transaction starts) is passed both the new transaction number and the highest GSN used in the previous transaction. This is called the GSN high water mark (GSNHWM). At transaction commit, the commit_trx(…) callback is called, and the transaction id and the last GSN assigned in the transaction are passed into the callback. This same value will appear as the GSNHWM in the next begin_trx(…) callback. Finally, at rollback any sequence numbers assigned in that transaction will be re-used, so no GSN is passed to the rollback callback, but a transaction id is, which lets you determine exactly which transaction is rolling back.

Row change callbacks

Each of the four row change callback functions captures a particular change to the data set. Each function takes five parameters. The first ($row) is an array containing the row being acted upon. The second ($db) is the schema that contains the row. The third ($table) is the table that contains the row. Each callback also receives the transaction identifier and, of course, the unique GSN assigned to the row change.

For example:
An update will fire both update_before(…) and update_after(…) callbacks with the row images before and after the change, respectively. There is an example of this at the end of the post.

Configuring FlexCDC to use a plugin
FlexCDC uses a configuration file called consumer.ini by default. Add the following to the [flexcdc] section:
plugin=plugin_file.php

The plugin must be in the FlexCDC include/ directory.  You will find example_plugin.php in this directory, to serve as an example.

How it works
Flexviews uses mysqlbinlog to decode the binary log from the source server, with the --base64-output=decode-rows option to decode RBR events into a format that can be parsed by an external utility. FlexCDC collects information about each transaction and the row changes that happen in the database (which means row-based binary logging must be used). When a plugin is defined, the normal actions used by FlexCDC are overridden with the callback functions.

Here is the output from the example plugin, for an update that affected 3 rows (update test.t3 set c1 = c1 - 1):

START TRANSACTION: trx_id: 44, Prev GSN: 107
TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (OLD), AT: 108
Array ( [c1] => -3 [c2] => 1 )
TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (NEW), AT: 109
Array ( [c1] => -4 [c2] => 1 )
TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (OLD), AT: 110
Array ( [c1] => -5 [c2] => 2 )
TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (NEW), AT: 111
Array ( [c1] => -6 [c2] => 2 )
TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (OLD), AT: 112
Array ( [c1] => -5 [c2] => 2 )
TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (NEW), AT: 113
Array ( [c1] => -6 [c2] => 2 )
COMMIT: trx_id: 44, Last GSN: 113

One thing you should notice is that FlexCDC provides column names for the data coming from the binary log. This is because the log table exists in the dest instance, and FlexCDC can get the list of column names from there. When you use other CDC tools, like the C binlog API, you don’t get column names.

The post Trawling the binlog with FlexCDC and new FlexCDC plugins for MySQL appeared first on MySQL Performance Blog.


