Planet MySQL

Percona XtraDB Cluster 5.6.37-26.21-3 is Now Available

Percona announces the release of Percona XtraDB Cluster 5.6.37-26.21-3 on October 27, 2017. Binaries are available from the downloads section or our software repositories.

Percona XtraDB Cluster 5.6.37-26.21-3 is now the current release, based on the following:

All Percona software is open-source and free.

Fixed Bugs
  • Added access checks for DDL commands to make sure they do not get replicated if they failed without proper permissions. Previously, when a user tried to perform certain DDL actions that failed locally due to lack of privileges, the command could still be replicated to other nodes, because access checks were performed after replication. This vulnerability is identified as CVE-2017-15365.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Announcing Galera Cluster for MySQL 5.5.58, 5.6.38, 5.7.20 with Galera 3.22.

Codership is pleased to announce a new release of Galera Cluster for MySQL consisting of MySQL-wsrep 5.5.58, 5.6.38, 5.7.20 and new Galera 3.22 library, wsrep API version 25.

 

NEW FEATURES AND NOTABLE FIXES IN THE GALERA REPLICATION LIBRARY SINCE LAST BINARY RELEASE BY CODERSHIP (3.21):

 

New features and notable fixes in Galera replication since last binary release

* Reporting last committed write set fixed to respect commit ordering (MW-341)

* GComm socket level error handling improved to avoid backend thread exit
in case of unexpected input from ASIO IO service (GAL-518)

* Race condition fixed in GComm message sending codepath (GAL-520)

* Fix for EVS protocol stall due to exhausted send window setting. This
bug could stall cluster messaging until the next keepalive was sent by
some node, causing intermittent pauses in write set replication. (GAL-521)

* Code changes to avoid name collisions with FreeBSD system headers (GAL-523)

Read the full release notes (how to install, repository structure) 

 

 

NOTABLE BUG FIXES IN MYSQL-WSREP:

 

Version MySQL-wsrep 5.7.20 and Galera 3.22, wsrep API version 25.

* Preserve --wsrep-recover log for future reference when starting the server.
The preserved log is stored in a file under MySQL data directory,
either in wsrep_recovery.ok or wsrep_recovery.fail depending on recovery
success. (MW-318)

* Avoid returning outdated values for wsrep_ready status variable (MW-384)

* A bug which caused stored procedure with an error handler to commit
a statement even in case of certification error was fixed. (MW-388)

* Crash during LOAD DATA for partition engine was fixed (MW-394)

* Fixed a crash caused by a dangling reference to wsrep status variables
array. (MW-399)

* Fixes to processing of foreign key cascades. (MW-402)

* ACL checks are now enforced before replication for all DDL operations
(MW-416)

* ALTER EVENT statement failure on slave was fixed (MW-417)

Read the full release notes  (known issues, how to install, repository structure) 

 

 

Version MySQL-wsrep 5.6.38 and Galera 3.22, wsrep API version 25

* Preserve --wsrep-recover log for future reference when starting the server.
The preserved log is stored in a file under MySQL data directory,
either in wsrep_recovery.ok or wsrep_recovery.fail depending on recovery
success. (MW-318)

* Avoid returning outdated values for wsrep_ready status variable (MW-384)

* A bug which caused stored procedure with an error handler to commit
a statement even in case of certification error was fixed. (MW-388)

* Crash during LOAD DATA for partition engine was fixed (MW-394)

* Fixed a crash caused by a dangling reference to wsrep status variables
array. (MW-399)

* Fixes to processing of foreign key cascades. (MW-402)

* ACL checks are now enforced before replication for all DDL operations
(MW-416)

Read the full release notes (known issues, how to install, repository structure)

 

 

Version MySQL-wsrep 5.5.58 and Galera 3.22, wsrep API version 25

Notable bug fixes in MySQL-wsrep:

* Avoid returning outdated values for wsrep_ready status variable (MW-384)

* Crash during LOAD DATA for partition engine was fixed (MW-394)

* Fixes to processing of foreign key cascades. (MW-402)

* ACL checks are now enforced before replication for all DDL operations
(MW-416)

Read the full release notes (known issues, how to install, repository structure)

 

This Week in Data with Colin Charles 12: Open Source Summit Europe and Open Source Entrepreneur Network

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

This week was exciting from a Percona exposure standpoint. We were at Open Source Summit Europe. I gave two talks and participated in a panel, as the co-located event for the Open Source Entrepreneur Network happened on the last day as well. We had a booth, and it was great to hang out and talk with my colleagues Dorothée Wuest and Dimitri Vanoverbeke as well as all the attendees that popped by.

Releases

Link List

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Lesson 08: Doing More with MySQL

Notes/errata/updates for Chapter 8:
See the official book errata at http://tahaghoghi.com/LearningMySQL/errata.php – Chapter 8 includes pages 277 – 296.

Note: if trying to load the “academics.csv” file on Linux, you may want to set the owner and group to mysql first:
sudo chown mysql:mysql /tmp/academics.csv

In the section on “Creating Tables with Queries”, p. 286, it says “The LIKE syntax allows you to create a new table with exactly the same structure as another, including keys.” However, it only preserves UNIQUE, PRIMARY and regular indexes. It does not preserve FOREIGN keys.
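A quick way to see this for yourself (the table names here are purely illustrative):

CREATE TABLE parent (id INT PRIMARY KEY);
CREATE TABLE child (
  id INT PRIMARY KEY,
  parent_id INT,
  KEY idx_parent (parent_id),
  FOREIGN KEY (parent_id) REFERENCES parent (id)
) ENGINE=InnoDB;

-- the copy gets the PRIMARY KEY and idx_parent, but not the FOREIGN KEY constraint
CREATE TABLE child_copy LIKE child;
SHOW CREATE TABLE child_copy\G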

In the section on “Replacing Data”, p 292 – 293, it mentions the REPLACE command, but not the INSERT…ON DUPLICATE KEY UPDATE syntax. You can read about that at the MySQL manual page.
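To illustrate the difference with a hypothetical counters table that has a unique key on name: REPLACE deletes the conflicting row and inserts a fresh one, while INSERT…ON DUPLICATE KEY UPDATE keeps the existing row and only updates the listed columns:

-- REPLACE: the old row is removed, then a new row is inserted
REPLACE INTO counters (name, hits) VALUES ('home', 1);

-- INSERT ... ON DUPLICATE KEY UPDATE: the existing row is updated in place
INSERT INTO counters (name, hits) VALUES ('home', 1)
  ON DUPLICATE KEY UPDATE hits = hits + 1;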

Finally, for more information on EXPLAIN, check out this video presentation on how to use and interpret EXPLAIN. There are also PDF slides.

Topics covered:
Importing and Exporting Data
Replacing Data
CREATE TABLE…LIKE
EXPLAIN

Reference/Quick Links for MySQL Marinate

Efficient JSON Replication in MySQL 8.0

MySQL is not only a relational database, but can also be used as a schemaless/NOSQL document store, or a mix of both. This is realized by the JSON datatype, and is useful for any data that is hard to fit in the ”tabular” format of a traditional table.…

MySQL Enterprise Monitor 4.0.1 has been released

We are pleased to announce that MySQL Enterprise Monitor 4.0.1 is now available for download on the My Oracle Support (MOS) web site. It will also be available for download via the Oracle Software Delivery Cloud in a few days. MySQL Enterprise Monitor is the best-in-class tool for monitoring and management of your MySQL assets and is included with your MySQL Enterprise Edition and MySQL Enterprise Carrier Grade subscriptions.

You can find more information on the contents of this release in the change log.

Highlights of MySQL Enterprise Monitor 4.0 include:

  • Modern look and feel: a redesigned user interface delivers a vastly improved overall user experience. The visible changes--the layout, the icons, and the overall aesthetics--provide a more natural and intuitive experience. Views dynamically change and adjust to your current context and the assets you've selected, everything from individual MySQL instances or hosts to your custom Groups, to your complex replication and clustered topologies. Additional enhancements include a much more responsive UI and a greater capacity to scale, allowing you to more effectively manage thousands of MySQL related assets.
  • MySQL Cluster monitoring: we now auto-discover your MySQL Cluster installations and give you visibility into the performance, availability, and health of each MySQL instance and NDB process, as well as the health of the MySQL Cluster instance as a single logical system. The Overview dashboard displays detailed instrumentation available for each MySQL Cluster and the Topology dashboard displays the current configuration of your MySQL Cluster enabling you to quickly see the status of the MySQL Cluster instance as a whole and each individual process. The Topology dashboard allows you to easily see how your MySQL Cluster installations are currently functioning.
  • A User Statistics report, which provides an easy way to monitor MySQL resource utilization broken down by user.

You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet.

You will also find the binaries on the Oracle Software Delivery Cloud soon.  Type "MySQL Enterprise Monitor 4.0.0" in the search box, or enter a license name to find Enterprise Monitor along with other MySQL products: "MySQL Enterprise Edition" or "MySQL Cluster Carrier Edition".  Then select your platform.

Please open a bug or a ticket on My Oracle Support to report problems, request features, or give us general feedback about how this release meets your needs.

If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact.

Thanks and Happy Monitoring!

- The MySQL Enterprise Tools Development Team

Useful URLs

The Perfect Server - Ubuntu 17.10 (Artful Aardvark) with Apache, PHP, MySQL, PureFTPD, BIND, Postfix, Dovecot and ISPConfig 3.1

This tutorial shows how to install an Ubuntu 17.10 (Artful Aardvark) server (with Apache2, BIND, Dovecot) for the installation of ISPConfig 3.1, and how to install ISPConfig. ISPConfig 3 is a web hosting control panel that allows you to configure the following services through a web browser: Apache or nginx web server, Postfix mail server, Courier or Dovecot IMAP/POP3 server, MySQL, BIND or MyDNS nameserver, PureFTPd, SpamAssassin, ClamAV, and many more. This setup covers the installation of Apache (instead of nginx), BIND (instead of MyDNS), and Dovecot (instead of Courier).

Joint Webinar with Severalnines: How to manage Galera Cluster using ClusterControl

Since its creation, Galera Cluster has established itself as the most popular high availability solution for MySQL and MariaDB users worldwide.

ClusterControl is the go-to automation and management system for Galera Cluster users.

And together, we’re going to walk you through all the different aspects that make Galera Cluster such a popular high availability solution for MySQL and MariaDB and how to best manage it with ClusterControl.

We’ll hear about the latest features of Galera Cluster directly from Codership, the creators of Galera Cluster. And we’ll look at how to automate everything from deployment, monitoring (how about ASCII-art graphs?), backups, failover, recovery, rolling upgrades and scaling using the ClusterControl CLI (for a change, we also have a GUI of course).

AGENDA

Introduction
About Codership, the makers of Galera Cluster
About Severalnines, the makers of ClusterControl
What’s new with Galera Cluster
Core feature set overview
What’s coming up
ClusterControl for Galera Cluster
Deployment
Monitoring
Management
Scaling
Live Demo
Q&A

 

Join EMEA timezone webinar Tue November 14, 10 AM CET

 

Join USA timezone webinar Tue November 14, 9 AM PST

 

Presenting:

Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.

Seppo Jaakola, Codership CEO

Migrating to MySQL 8.0 for WordPress – episode 3: query optimization

Now that MySQL 8.0.3 RC1 is installed and we have seen how to verify the workload, it’s time to see if we can optimize some of the queries. As explained before, rewriting queries when using a product like WordPress is complicated, but maybe we can do something with the indexes?

So, do you remember how to check which query was consuming the most resources? Let’s find out again:

mysql> select t1.*, QUERY_SAMPLE_TEXT from statement_analysis as t1 join performance_schema.events_statements_summary_by_digest as t2 on t2.digest=t1.digest and t2.SCHEMA_NAME = t1.db where db = 'wp_lefred' limit 1\G
*************************** 1. row ***************************
            query: SELECT `option_name` , `option ... options` WHERE `autoload` = ?
               db: wp_lefred
        full_scan: *
       exec_count: 103
        err_count: 0
       warn_count: 0
    total_latency: 2.97 s
      max_latency: 600.21 ms
      avg_latency: 28.82 ms
     lock_latency: 39.46 ms
        rows_sent: 28825
    rows_sent_avg: 280
    rows_examined: 208849
rows_examined_avg: 2028
    rows_affected: 0
rows_affected_avg: 0
       tmp_tables: 0
  tmp_disk_tables: 0
      rows_sorted: 0
sort_merge_passes: 0
           digest: 52292f0ae858595a6dfe100f30207c9f
       first_seen: 2017-10-24 14:59:50.492922
        last_seen: 2017-10-24 23:45:26.192839
QUERY_SAMPLE_TEXT: SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'

Nothing special… let’s check the Query Execution Plan (QEP):

mysql> explain SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: wp_options
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 1551
     filtered: 10.00
        Extra: Using where
1 row in set, 1 warning (0.02 sec)

Hummm… so here we can definitely see that this is a full table scan (type: ALL) and it scans 1551 rows.

Let’s now verify the table’s structure:

mysql> show create table wp_options\G
*************************** 1. row ***************************
       Table: wp_options
Create Table: CREATE TABLE `wp_options` (
  `option_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `option_name` varchar(191) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `option_value` longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  `autoload` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'yes',
  PRIMARY KEY (`option_id`),
  UNIQUE KEY `option_name` (`option_name`)
) ENGINE=InnoDB AUTO_INCREMENT=980655 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
1 row in set (0.08 sec)

OK so it’s clear that autoload is not indexed… and that the type doesn’t seem super adequate…

Let’s verify:

mysql> select distinct autoload FROM wp_options;
+----------+
| autoload |
+----------+
| yes      |
| no       |
+----------+
2 rows in set (0.00 sec)

mysql> select autoload, count(*) FROM wp_options group by autoload;
+----------+----------+
| autoload | count(*) |
+----------+----------+
| yes      |      280 |
| no       |     1309 |
+----------+----------+
2 rows in set (0.00 sec)

Now let’s run the query and check whether MySQL really performs a full table scan:

mysql> flush status;
Query OK, 0 rows affected (0.07 sec)

mysql> SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'\G
...

mysql> show status like 'ha%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Handler_commit             | 1     |
| Handler_delete             | 0     |
| Handler_discover           | 0     |
| Handler_external_lock      | 2     |
| Handler_mrr_init           | 0     |
| Handler_prepare            | 0     |
| Handler_read_first         | 1     |
| Handler_read_key           | 1     |
| Handler_read_last          | 0     |
| Handler_read_next          | 0     |
| Handler_read_prev          | 0     |
| Handler_read_rnd           | 0     |
| Handler_read_rnd_next      | 1590  |
| Handler_rollback           | 0     |
| Handler_savepoint          | 0     |
| Handler_savepoint_rollback | 0     |
| Handler_update             | 0     |
| Handler_write              | 0     |
+----------------------------+-------+
18 rows in set (0.06 sec)

Handler_read_rnd_next is incremented when the server performs a full table scan and this is a
counter you don’t really want to see with a high value. So indeed in our case we perform a full table scan.

The QEP can also be more detailed when using the JSON format:

mysql> explain format=JSON SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "202.85"
    },
    "table": {
      "table_name": "wp_options",
      "access_type": "ALL",
      "rows_examined_per_scan": 1546,
      "rows_produced_per_join": 154,
      "filtered": "10.00",
      "cost_info": {
        "read_cost": "187.39",
        "eval_cost": "15.46",
        "prefix_cost": "202.85",
        "data_read_per_join": "131K"
      },
      "used_columns": [
        "option_name",
        "option_value",
        "autoload"
      ],
      "attached_condition": "(`wp_lefred`.`wp_options`.`autoload` = 'yes')"
    }
  }
}
1 row in set, 1 warning (0.03 sec)

This is already enough information for this query, but we could get even more details by enabling the OPTIMIZER TRACE:

mysql> SET OPTIMIZER_TRACE = "enabled=on"; Query OK, 0 rows affected (0.01 sec) mysql> explain SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'\G *************************** 1. row *************************** id: 1 select_type: SIMPLE table: wp_options partitions: NULL type: ALL possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: 1546 filtered: 10.00 Extra: Using where 1 row in set, 1 warning (0.00 sec) mysql> select * from INFORMATION_SCHEMA.OPTIMIZER_TRACE\G *************************** 1. row *************************** QUERY: explain SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes' TRACE: { "steps": [ { "join_preparation": { "select#": 1, "steps": [ { "expanded_query": "/* select#1 */ select `wp_options`.`option_name` AS `option_name`, `wp_options`.`option_value` AS `option_value` from `wp_options` where (`wp_options`.`autoload` = 'yes')" } ] } }, { "join_optimization": { "select#": 1, "steps": [ { "condition_processing": { "condition": "WHERE", "original_condition": "(`wp_options`.`autoload` = 'yes')", "steps": [ { "transformation": "equality_propagation", "resulting_condition": "(`wp_options`.`autoload` = 'yes')" }, { "transformation": "constant_propagation", "resulting_condition": "(`wp_options`.`autoload` = 'yes')" }, { "transformation": "trivial_condition_removal", "resulting_condition": "(`wp_options`.`autoload` = 'yes')" } ] } }, { "substitute_generated_columns": { } }, { "table_dependencies": [ { "table": "`wp_options`", "row_may_be_null": false, "map_bit": 0, "depends_on_map_bits": [ ] } ] }, { "ref_optimizer_key_uses": [ ] }, { "rows_estimation": [ { "table": "`wp_options`", "table_scan": { "rows": 1546, "cost": 48.25 } } ] }, { "considered_execution_plans": [ { "plan_prefix": [ ], "table": "`wp_options`", "best_access_path": { "considered_access_paths": [ { "rows_to_scan": 1546, "filtering_effect": [ ], "final_filtering_effect": 0.1, "access_type": "scan", "resulting_rows": 154.6, "cost": 202.85, "chosen": true } ] }, "condition_filtering_pct": 100, "rows_for_plan": 154.6, "cost_for_plan": 202.85, "chosen": true } ] }, { "attaching_conditions_to_tables": { "original_condition": "(`wp_options`.`autoload` = 'yes')", "attached_conditions_computation": [ ], "attached_conditions_summary": [ { "table": "`wp_options`", "attached": "(`wp_options`.`autoload` = 'yes')" } ] } }, { "refine_plan": [ { "table": "`wp_options`" } ] } ] } }, { "join_explain": { "select#": 1, "steps": [ ] } } ] } MISSING_BYTES_BEYOND_MAX_MEM_SIZE: 0 INSUFFICIENT_PRIVILEGES: 0 1 row in set (0.01 sec)

For this particular easy query this is not important, but it can be useful, particularly when you know that you can influence the cost model.

Optimizing the query

It’s then clear that we could benefit from an index here (and maybe reduce the size of the field, but I won’t modify table structures for now).

Let’s create an index on autoload:

mysql> alter table wp_options add index autoload_idx(autoload);

And we can verify the QEP with the new index:

mysql> explain SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: wp_options
   partitions: NULL
         type: ref
possible_keys: autoload_idx
          key: autoload_idx
      key_len: 82
          ref: const
         rows: 280
     filtered: 100.00
        Extra: NULL
1 row in set, 1 warning (0.00 sec)

Don’t forget that with MySQL 8.0 it’s also possible to set some indexes invisible. Good candidates are the indexes returned by these queries:

mysql> select * from schema_unused_indexes where object_schema='wp_lefred';
+---------------+-------------+--------------+
| object_schema | object_name | index_name   |
+---------------+-------------+--------------+
| wp_lefred     | wp_links    | link_visible |
| wp_lefred     | wp_termmeta | term_id      |
| wp_lefred     | wp_termmeta | meta_key     |
| wp_lefred     | wp_users    | user_email   |
+---------------+-------------+--------------+

mysql> select * from schema_redundant_indexes where table_schema='wp_lefred';
Empty set (0.20 sec)
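For instance, to try hiding one of those candidates without dropping it (index and table names taken from the output above; adjust to your own schema):

-- the optimizer stops using the index, but it is still maintained
ALTER TABLE wp_links ALTER INDEX link_visible INVISIBLE;

-- bring it back if it turns out to be needed after all
ALTER TABLE wp_links ALTER INDEX link_visible VISIBLE;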

This post now concludes the migration to MySQL 8.0 for WordPress series, and I hope this article will help you find which queries need to be optimized!

 

20 Tips: Prepare Your Database for Black Friday & Cyber Monday

jmclaurin

Wed, 10/25/2017 - 16:51

The biggest online-shopping days of the year are just around the corner. Is your database ready? By tuning 20 key MariaDB system variables, you'll bolster your database's performance, scalability and availability, ensuring every potential customer has a smooth user experience. The following system variables come up repeatedly in configuring an optimal MariaDB server environment. Implement our recommendations for the most tuned values, and make this year's Black Friday–Cyber Monday period your best ever.

A couple of important notes:

  • Do not accept these suggestions blindly. Each MariaDB environment is unique and requires additional thought before making any changes.  You will most likely need to adjust these settings for your specific use case and environment.

  • MariaDB configuration file is located in /etc/my.cnf. Every time you modify this file you will need to restart the MySQL service so the new changes can take effect.

20 Black Friday and Cyber Monday Tuning Recommendations

1. InnoDB Buffer Pool Size

The InnoDB buffer pool size is the #1 setting to look at for any installation using InnoDB. The buffer pool is where data and indexes are cached; having it as large as possible will ensure you use memory and not disks for most read operations.
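As a minimal sketch (the value below is only an example; a common starting point on a dedicated database server is roughly 70–80% of RAM):

[mysqld]
# example only -- size it to your server's available memory
innodb_buffer_pool_size = 12G

On recent versions (MySQL 5.7+ / MariaDB 10.2+), innodb_buffer_pool_size can also be resized dynamically with SET GLOBAL, avoiding a restart.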

2. InnoDB Log File Size

innodb_log_file_size is the size of the redo logs, which are used to make sure writes are fast and durable. There are two general suggestions for InnoDB log file sizing:

  • Set combined total size of InnoDB log files greater than 25–50% of the InnoDB buffer pool size

or

  • Set the combined InnoDB log file size equal to one hour’s worth of log entries during peak load

Larger log files can lead to slower recovery in the event of a server crash. However, they also reduce the number of checkpoints needed and reduce disk I/O.

Evaluate how much redo log is written during one hour of operational load, then decide whether to increase the size of the InnoDB log files.

Getting the InnoDB log file sizes right is important to achieve good system performance. MariaDB’s InnoDB storage engine uses a fixed-size (circular) redo log space. The size is controlled by innodb_log_file_size and innodb_log_files_in_group (default 2). Multiply those values to get the redo log space that is available for use. While technically it shouldn’t matter whether you use the innodb_log_file_size or innodb_log_files_in_group variable to control the redo space size, most people just work with innodb_log_file_size and leave innodb_log_files_in_group alone.

InnoDB’s redo space size is one of the most important configuration options for write-intensive workloads. However, it comes with trade-offs. The more redo space configured, the better InnoDB can optimize write I/O. However, increasing the redo space also means longer recovery times when the system loses power or crashes for other reasons. 
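One rough way to estimate an hour’s worth of redo writes is to sample the Innodb_os_log_written status counter an hour apart and compare the delta with the configured redo space (the counter and variables are standard; the sampling approach itself is just a rule of thumb):

-- bytes written to the redo log since server start; run again an hour later and subtract
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';

-- total redo space currently configured
SELECT @@innodb_log_file_size * @@innodb_log_files_in_group AS redo_space_bytes;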

3. InnoDB Log Buffer Size

A larger InnoDB log buffer size means less disk I/O for larger transactions. It is suggested to set this to 64M on all servers.

4. InnoDB Log Flush Interval

The innodb_flush_log_at_trx_commit variable controls when flushing of the log buffer to disk occurs. innodb_flush_log_at_trx_commit = 1 (default) flushes the log buffer to disk at each transaction commit. This is the safest but also the least performant option.  

innodb_flush_log_at_trx_commit = 0 flushes the log buffer to disk every second, but nothing on transaction commit. Up to one second (possibly more due to process scheduling) could be lost. If there’s any crash, MySQL or the server can lose data. This is the fastest but least safe option.

innodb_flush_log_at_trx_commit = 2 writes the log buffer out to file on each commit but flushes to disk every second. If the disk cache has a battery backup (for instance a battery backed cache raid controller) this is generally the best balance of performance and safety. A crash of MySQL should not lose data. A server crash or power outage could lose up to a second (possibly more due to process scheduling). A battery-backed cache reduces this possibility.

We suggest using the first option for safety.

5. InnoDB IO Capacity

innodb_io_capacity should be set to approximately the maximum number of IOPS the underlying storage can handle.

By default this is set to 1000. We recommend benchmarking the storage to determine whether you can increase this value further.

6. Thread Cache Size

Monitor the value of Threads_created. If it continues increasing at more than a few threads per minute, increase the value of thread_cache_size.

The thread cache size is set to 200 in the current default configuration.
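A quick way to check how the counter evolves against the current setting (a sketch; sample Threads_created a few minutes apart to see the growth rate):

SHOW GLOBAL STATUS LIKE 'Threads_created';
SHOW GLOBAL VARIABLES LIKE 'thread_cache_size';

-- thread_cache_size is dynamic, so it can be raised without a restart, e.g.:
SET GLOBAL thread_cache_size = 200;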

7. Table Cache and Table Definition Cache

The table_open_cache and table_definition_cache variables control the number of tables and definitions to keep open for all threads.

Monitor Open_tables, Open_table_definitions, Opened_tables, and Opened_table_definitions to determine the best value. The general suggestion is to set table_open_cache (and subsequently table_definition_cache) only high enough to reduce the rate of increase of the Opened_tables (and Opened_table_definitions) status value.

Both table_open_cache and table_definition_cache are set to 2048 in the default configuration.

8. Query Cache

The query cache is a well-known bottleneck that can be seen even when concurrency is moderate. The best option is to disable it from day one by setting query_cache_size = 0 (the default in MariaDB 10) and to use other ways to speed up read queries: having good indexing, adding replicas to spread the read load or using an external cache (memcache or redis, for instance). If you have already built your MariaDB application with the query cache enabled and have never noticed any problem, the query cache may be beneficial for you. In that case, be cautious if you decide to disable it.

9. Temporary Tables, tmp_table_size, and max_heap_table_size

MySQL uses the lower of max_heap_table_size and tmp_table_size to limit the size of temporary tables in memory. These are per client variables. While having this value large can help reduce the number of temporary tables created on disk, it also raises the risk of reaching the server's memory capacity since this is per client. Generally 32M to 64M is the suggested value to begin with for both variables and tune as needed.

Temporary tables are often used for GROUP BY, ORDER BY, DISTINCT, UNION, sub queries, etc. Ideally, MySQL should create these in memory, with as few on disk as possible.

It is important to note that queries not using joins appropriately and creating large temporary tables can be one reason for a higher number of temporary tables on disk. Another reason is the memory storage engine uses fixed length columns and assumes the worst-case scenario. If columns are not sized correctly (for example, a VARCHAR(255) for a short string), this influences the size of the table in memory and can cause it to go to disk earlier than it should. Also, temporary tables with blob and text columns will immediately go to disk, as the memory storage engine does not support them.

Both are currently set to 64M by default.

10. Warning Log Level

We recommend setting the warning log level to log_warnings = 2. Doing so logs information about aborted connections and access-denied errors.

11. Max Connections

If you are often facing the "Too many connections" error, max_connections is too low. Frequently, because the application does not close connections to the database correctly, you need much more than the default 151 connections. The main drawback of high values for max_connections (say, 1,000 or more) is that the server will become unresponsive if it must run that many active transactions. Using a connection pool at the application level or a thread pool at the MariaDB level can help here.
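To see how close you are to the limit, compare the high-water mark with the configured maximum (a sketch):

SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

-- max_connections is dynamic and can be raised without a restart if needed:
SET GLOBAL max_connections = 500;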

12. Transaction Isolation

Investigate the available transaction isolation levels, and determine the best transaction isolation for your server’s use case.

13. Binary Log Format

We recommend using the ROW binary log format for master-master replication.

14. Auto-Increment Offsets

To help reduce the chances of collision between two masters being written to simultaneously, the auto_increment_increment and auto_increment_offset values need to be adjusted accordingly.

15. Sync Binlog

By default, the OS handles flushing the binlog to disk. In the event of a server crash, it is possible to lose transactions from the binary log, leading to replication being out of sync. Setting sync_binlog = 1 causes the binlog file to be flushed on every commit.

This is slower, but the safest option.

16. Crash Safe(r) Slaves

To help avoid replication errors after a slave crash, enable relay log recovery and syncing of the relay log and relay log info files to disk.

17. Log Slave Updates

To have chained replication (master -> slave -> slave), log_slave_updates needs to be enabled. This tells a slave to write replicated transactions to its own binary log so that they can then be replicated to slaves off of it.

18. Read-Only Slaves

Slaves should be read-only to avoid data accidentally being written to them.

Note: Users with the SUPER privilege can still write even when the server is read-only.

19. Slave Net Timeout

The slave_net_timeout variable is the number of seconds the slave will wait for a packet from the master before trying to reconnect. The default is 3600 (1 hour). This means if the link goes down and isn’t detected, it could be up to an hour before the slave reconnects. This could lead to the slave suddenly being up to an hour behind the master.

We recommend setting slave_net_timeout to a more reasonable value, such as 30 or 60.
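Since slave_net_timeout is dynamic, it can be adjusted on a running slave and then persisted in the configuration file (a sketch):

SET GLOBAL slave_net_timeout = 60;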

20. Watch Our Webinar on Preparing for Black Friday & Cyber Monday

Watch our on-demand webinar – Preparing for Black Friday & Cyber Monday – to learn the four vital principles of database preparedness: security measures to protect your database from malicious attacks, performance tuning to ensure you deliver a smooth user experience, high availability strategies to ensure you don’t miss a single sale and scalability to prepare for both anticipated growth and unexpected spikes.

With all the excitement leading up to the holidays, it's easy to overlook how to prep your database environment for the biggest online shopping days of the year! Each year, more and more shoppers opt for online holiday deals instead of the more traditional mall experience, which means that retailers must prepare for multiple days of high online traffic to their e-commerce sites. This year you’ll be prepared, as I’m passing along 20 tips to tune your database environment for some of the biggest online holiday shopping days – Black Friday and Cyber Monday!

Simon Mudd

Thu, 10/26/2017 - 01:34

Changes to /etc/my.cnf and restarting MySQL/MariaDB

A blanket statement like "MariaDB configuration file is located in /etc/my.cnf. Every time you modify this file you will need to restart the MySQL service so the new changes can take effect." is very misleading. Restarting MySQL/MariaDB often means taking downtime, especially for a server running in a "24x7 web environment", so suggesting changes and suggesting that the server is restarted is really not considering the real world, and is suggesting the user do something that may not be necessary.

I think you should point out that:
* the /etc/my.cnf settings have an "on-server global variable equivalent value", at least in almost every case.
* of these global settings within MySQL/MariaDB most of them are dynamic. It's true that some are not and if you need to change these settings then you will need to restart the server to pick them up.
* for the dynamic settings you can do something as simple as SET GLOBAL sync_binlog = 1 to mention one of the settings referenced in the post. This would change the running value from 0 if that were set before.
* Given your list of settings that need to be looked at above I would make a clear point of indicating which of these settings are not dynamic, as this allows people to modify the dynamic ones as I indicated and then consider restarting the server to change those static settings if that's required.

Given your target audience addressing these points I mention would probably make the suggestions you make more useful to real world users.

In the environment I work in I push out changes to /etc/my.cnf and a process which monitors the change checks the running server's configuration and if it sees that a setting needs to be changed and it is dynamic the change will be made dynamically avoiding any server restart. This has proved to be very convenient and has avoided a large number of unnecessary restarts.

It is true that there are static settings which mean the server does need to be restarted, and I would always encourage the MariaDB developers to look for ways to make these settings dynamic too. That would allow higher uptime, which is in the end what we care about, while at the same time allowing us to reconfigure the running server to best suit its workload.


JSON Output of the pmm-admin list Command

In this blog post, we’ll look at Percona Monitoring and Management’s pmm-admin list command.

The pmm-admin list command shows all monitoring services you have added using the pmm-admin add command. Starting with version 1.4.0, Percona Monitoring and Management (PMM) also lists external monitoring services when you run pmm-admin list, i.e., those services that monitor the backends not supported out of the box (such as PostgreSQL databases).

In the output, the external monitoring services appear at the bottom:

The tabular output of the pmm-admin list command

JSON Output for Automatic Verification

But there is also another feature of pmm-admin list. If you run this command with the --json parameter, it gives you a JSON document as output. This option makes it possible for other programs to inspect the monitoring services, thanks to the strict JSON syntax rules. JSON has become a de facto standard for exchanging data between tools. The JSON output provided by the pmm-admin list command can be used by configuration management tools such as Ansible or Chef.

The output is captured as keys and values. The general information about the computer where this pmm-client is installed is given as top-level elements:

  • Version
  • ServerAddress
  • ServerSecurity
  • ClientName
  • ClientAddress
  • ClientBindAddress
  • Platform

You can quickly determine if there are any errors in built-in monitoring services by inspecting the Err top level element in the JSON output. Similarly, the ExternalErr element reports errors on external services:

The JSON parsing friendly version produced by the pmm-admin list command

Representing Monitoring Services

Two elements contain lists as their values. The Services top-level element contains a list of documents that represent enabled monitoring services. The ExternalServices element contains a list of documents that represent enabled external monitoring services. Each attribute in the Services and ExternalServices elements provides the same information as a column in the tabular output.
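For example, assuming jq is available on the client host, a short shell check could flag errors based on the element names above (a sketch, not an official PMM recipe):

# print a warning unless both error elements are empty
pmm-admin list --json | jq -e '(.Err // "") == "" and (.ExternalErr // "") == ""' > /dev/null || echo "pmm-admin reports monitoring errors"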

Hope this brief post provides some valuable information regarding new Percona Monitoring and Management 1.4.0 functionality. Let me know about any questions in the comments.

The Galera Cluster & Severalnines Teams Present: How to Manage Galera Cluster with ClusterControl

Join us on November 14th 2017 as we combine forces with the Codership Galera Cluster Team to talk about how to manage Galera Cluster using ClusterControl!

Galera Cluster has become one of the most popular high availability solutions for MySQL and MariaDB; and ClusterControl is the de facto automation and management system for Galera Cluster.

We’ll be joined by Seppo Jaakola, CEO of Codership - Galera Cluster, and together, we’ll demonstrate what it is that makes Galera Cluster such a popular high availability solution for MySQL and MariaDB and how to best manage it with ClusterControl.

We’ll discuss the latest features of Galera Cluster with Seppo, one of the creators of Galera Cluster. We’ll also demo how to automate it all from deployment, monitoring, backups, failover, recovery, rolling upgrades and scaling using the new ClusterControl CLI.

Sign up below!

Date, Time & Registration Europe/MEA/APAC

Tuesday, November 14th at 09:00 GMT / 10:00 CET (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, November 14th at 09:00 PT (US) / 12:00 ET (US)

Register Now

Agenda
  • Introduction
    • About Codership, the makers of Galera Cluster
    • About Severalnines, the makers of ClusterControl
  • What’s new with Galera Cluster
    • Core feature set overview
    • The latest features
    • What’s coming up
  • ClusterControl for Galera Cluster
    • Deployment
    • Monitoring
    • Management
    • Scaling
  • Live Demo
  • Q&A
Speakers

Seppo Jaakola, Founder of Codership, has over 20 years of experience in software engineering. He started his professional career at Digisoft and Novo Group Oy, working as a software engineer on various technical projects. He then worked for 10 years at Stonesoft Oy as a Project Manager on projects dealing with DBMS development, data security and firewall clustering. In 2003, Seppo Jaakola joined Continuent Oy, where he worked as team leader for the MySQL clustering product. This position linked together his earlier experience in DBMS research and distributed computing. Now he’s applying his years of experience and administrative skills to steer Codership on the right course. Seppo Jaakola has an MSc degree in Software Engineering from Helsinki University of Technology.

Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.

Tags:  MySQL MariaDB galera cluster clustercontrol

Deploying AdonisJS Apps to DigitalOcean

In my last post I showed how to deploy AdonisJS apps to Heroku. In this tutorial, I'll be showing you how to deploy your AdonisJS application to Digital Ocean. We'll also be deploying the task list app we built earlier on.

Prerequisites

This tutorial assumes the following:

Create Droplet

Login to your Digital Ocean account and create a new droplet (server). We’ll be going with One-click apps. Select NodeJS as shown below:

Next, we’ll choose the $10 plan. Though the task list app will work perfectly on the $5 plan, we won’t be able to install the NPM dependencies, because NPM requires at least 1GB of RAM to install them. There is a way around this by creating swap memory, but that is beyond the scope of this tutorial.

Next, select a datacenter region, we’ll go with the default:

Next, add your SSH key. If you have already added SSH keys to Digital Ocean before, you can choose from those:

If not you will need to click on the New SSH Key button to add a new SSH key. You can get your SSH key by running the command below on your local computer:

cat ~/.ssh/id_rsa.pub

The command above will print your SSH key on the terminal, which you can then copy and paste in the SSH Key Content field. Also, give your SSH key a name.

Finally, choose a hostname for the droplet and click the Create button.

After a couple of seconds, you’ll have your new server up and running with Ubuntu 16.04 and NodeJS version 6.11.2.

Also, note the IP address of the server as we’ll be using it to access the server.

Create Non-root User

Before we start configuring the server for the task app, let’s quickly create a non-root user which we’ll use henceforth for the rest of the tutorial.

Tip: As a security measure, it is recommended to carry out tasks on your server as a non-root user with administrative privileges.

First, we need to login to the server as root. We can do that using the server’s IP address:

ssh root@IP_ADDRESS

Once we are logged in to the server, we can move on to create a new user:

adduser mezie

This will create a new user called mezie, you can name the user whatever you like. You will be asked a few questions, starting with the account password.

Having created the new user, we need to give it administrative privileges. That is, the user will be able to carry out administrative tasks by using sudo command.

usermod -aG sudo mezie

The command above adds the user mezie to sudo group.

Now the user can run commands with superuser privileges.

Setup SSH Key For The New User

You need to copy your public key to your new server. Enter the command below on your local computer:

cat ~/.ssh/id_rsa.pub

This will print your SSH key to the terminal, which you can then copy.

For the new user to login to the server with SSH key, we must add the public key to a special file in the user's home directory.

Still logged in as root on the server, enter the following command:

su - mezie

This will temporarily switch to the new user. Now you’ll be in your new user's home directory.

Next, we need to create a new directory called .ssh and restrict its permission:

mkdir ~/.ssh
chmod 700 ~/.ssh

Next, within the .ssh directory, create a new file called authorized_keys:

touch ~/.ssh/authorized_keys

Next, open the file with vim:

vim ~/.ssh/authorized_keys

Next, paste your public key (copied above) into the file. To save the file, hit esc to stop editing, then :wq and press ENTER.

Next, restrict the permissions of the authorized_keys file with this command:

chmod 600 ~/.ssh/authorized_keys

Type the command below to return to the root user:

exit

Now your public key is installed, and you can use SSH keys to log in as your user.

To make sure you can log in as the new user with SSH, enter the command below in a new terminal on your local computer:

ssh mezie@IP_ADDRESS

If all went well, you’ll be logged in to the server as the new user with SSH.

The rest of the tutorial assumes you are logged in to the server with the new user created (mezie in my case).

Updating to Latest NodeJS Version

Notice the server currently has NodeJS version 6.11.2, but AdonisJS v4.0 requires NodeJS v8.0 or greater. So we need to upgrade to the latest version. We’ll do that using an NPM package called n, which allows us to install multiple NodeJS versions and easily switch between them as the case may be. Enter the commands below on the server:

# install the n package globally
npm install -g n

# install the latest version of NodeJS (v8.6.0 as at this tutorial)
n latest

If we check the NodeJS version we’ll still see v6.11.2. Though we already installed the latest version, for this to take effect enter the command below:

bash

This will simply start another bash instance. It’s more like restarting bash.

Install Nginx

We’ll install Nginx as the webserver, to be used as a reverse proxy. This will allow us to access the app directly with an IP address or domain instead of tacking a port onto the IP address, e.g. 102.123.83.29:5000.

sudo apt-get update
sudo apt-get install nginx

Because we chose One-click apps while creating our droplet, the ufw firewall is set up for us and running. Now, we need to open the firewall for HTTP only, since we are not concerned with SSL in this tutorial:

sudo ufw allow 'Nginx HTTP'

Install MySQL

Next, we need to install MySQL and set up database and user to be used by the task list app.

sudo apt-get install mysql-server

Enter a root password when prompted. Next, we'll finish configuring MySQL:

sudo mysql_secure_installation

Enter the necessary options when prompted.

With MySQL set up, we need to create a database and a user.

First, log in to the MySQL server:

mysql -u root -p

Provide the root password you enter above while installing MySQL. Once you are logged in, create a new database:

CREATE DATABASE adonis_tasks;

Also, create a new user:

CREATE USER 'mezie'@'localhost' IDENTIFIED BY 'password';

Replace mezie and password with your user and password respectively.

Next, we need to grant the new user privileges to the tables on the new database:

GRANT ALL ON adonis_tasks.* TO 'mezie'@'localhost';

For the changes to take effect, run:

FLUSH PRIVILEGES;

With the initial setup out of the way, let’s now focus on the app itself. We are going to clone the app onto the server, directly in the user’s home directory (that is, /home/mezie in my case):

git clone https://github.com/ammezie/adonis-tasks.git

Next, we install the dependencies:

cd adonis-tasks
npm install --production

Next, create .env file:

vim .env

And paste the following into it:

# .env
HOST=127.0.0.1
PORT=3333
NODE_ENV=production
CACHE_VIEWS=true
APP_KEY=WdpmKSRNxJhejBxxwvgtdGbPM0JBlRxm
DB_CONNECTION=mysql
DB_HOST=localhost
DB_DATABASE=adonis_tasks
DB_USER=YOUR_DB_USER
DB_PASSWORD=YOUR_DB_PASSWORD

Save the file and exit vim.

Next, we need to run the database migrations. First, let’s install the Adonis CLI so we can make use of its great features:

npm i -g @adonisjs/cli

Once installed, let’s run the migrations:

adonis migration:run --force

Because we are on production, we have to use the --force flag.

Test The App

Before we proceed, let’s test to make sure the app is working fine. Start the app with the command below:

npm start

Then in a new terminal, enter the command below:

curl http://127.0.0.1:3333

You should get an output as below:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Task List</title>
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" />
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.5.1/css/bulma.min.css" />
</head>
<body>
  <section class="section">
    <div class="container">
      <div class="columns">
        <div class="column is-8 is-offset-2">
          <div class="box">
            <h2 class="title">New Task</h2>
            <form action="/tasks" method="POST">
              <input type="hidden" name="_csrf" value="4f2uYvxk-j_OR-vX1D1_DfRb7KanGrhKwV-s">
              <div class="field has-addons">
                <div class="control is-expanded">
                  <input class="input" type="text" name="title" value="" placeholder="Task title">
                </div>
                <div class="control">
                  <button type="submit" class="button is-primary">
                    Add Task
                  </button>
                </div>
              </div>
            </form>
          </div>
          <div class="box">
            <h1 class="title">Task List</h1>
            <table class="table is-bordered is-striped is-narrow is-fullwidth">
              <thead>
                <tr>
                  <th>SN</th>
                  <th>Title</th>
                  <th>Action</th>
                </tr>
              </thead>
              <tbody>
                <td colspan="3" class="has-text-centered">No task created yet!</td>
              </tbody>
            </table>
          </div>
        </div>
      </div>
    </div>
  </section>
</body>
</html>

This indicates the app is up and running fine. But whenever the app crashes we’ll need to manually start the app again which is not a recommended approach. So, we need a process manager to help us with starting the app and restarting it whenever it crashes. We’ll use PM2 for this.

Install PM2

We’ll install it globally through NPM:

sudo npm install -g pm2

With PM2 installed, we can start the app with it:

pm2 start server.js

Once the app is started you will get an output from PM2 indicating the app has started.

To launch PM2 on system startup or reboot, enter the command below:

pm2 startup systemd

You’ll get the following output:

[PM2] Init System found: systemd
[PM2] To setup the Startup Script, copy/paste the following command:
sudo env PATH=$PATH:/usr/local/bin /usr/local/lib/node_modules/pm2/bin/pm2 startup systemd -u mezie --hp /home/mezie

Copy and run the last command from the output above:

sudo env PATH=$PATH:/usr/local/bin /usr/local/lib/node_modules/pm2/bin/pm2 startup systemd -u mezie --hp /home/mezie

Now PM2 will start at boot up.
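One extra step worth considering (standard PM2 usage, not specific to AdonisJS): save the current process list so PM2 knows which apps to resurrect after a reboot:

pm2 save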

Set Up Nginx as a Reverse Proxy Server

Finally, we set up Nginx as a reverse proxy server. To do this, run:

sudo vim /etc/nginx/sites-available/default

Within the server block you should have an existing location / block. Replace the contents of that block with the following configuration:

# /etc/nginx/sites-available/default
...
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://127.0.0.1:3333;
    proxy_set_header Host $http_host;
    proxy_cache_bypass $http_upgrade;
    proxy_redirect off;
}

Save and exit vim.

Test to make sure there are no syntax errors in the configuration by running:

sudo nginx -t

Then restart Nginx:

sudo systemctl restart nginx

Now you should be able to access the app with your IP_ADDRESS. You should get something similar to the image below:

Conclusion

That’s it. You have seen how to deploy an AdonisJS app to DigitalOcean.

MySQL Connector/NET 6.10.4 GA has been released

Dear MySQL users,

MySQL Connector/Net 6.10.4 GA is the first release which introduces
Online API reference documentation and independent configuration of
Character set and collation in EF Core models.

To download MySQL Connector/Net 6.10.4 GA, see the “Development
Releases” tab at http://dev.mysql.com/downloads/connector/net/

Changes in MySQL Connector/Net 6.10.4 (2017-10-25, General Availability)

Functionality Added or Changed

* Online API reference documentation is now available for MySQL Connector/Net. This new format replaces the files compressed and deployed in a binary format with the extension .CHM (Compiled HTML) that previously shipped with each connector release.

* The deprecated UseProcedureBodies connection-string option was removed. Instead, the CheckParameters option can be used to check stored parameters against the server.

* EF Core: Character set and collation now are independently configurable in EF Core models. The values can be set at both the entity and entity-property levels by using either data annotation (new attributes) or new methods added to the code-first fluent API using the MySql.Data.EntityFrameworkCore.Extensions namespace. Specifically, the new items added for setting character set and collation are:
  + Attributes: [MySqlCharset] and [MySqlCollation]
  + Methods: ForMySQLHasCharset() and ForMySQLHasCollation()

Bugs Fixed

* Assemblies within NuGet packages were not fully signed. (Bug #26739307)

* EF Core: Inserting a date of '0000-00-00' into a column when also setting Convert Zero Datetime=True in the connection string of a class that derives from DbContext produced an exception, instead of performing the expected conversion. (Bug #26552753, Bug #87120)

* EF Core: Foreign key relations were not consistently created within the entity model generated by scaffolding a MySQL database. (Bug #26339430)

* Invoking the MySql.Web.Security.MySqlWebSecurity.CreateUserAndAccount method with valid arguments, including additionalUserAttributes as an object with key/value pairs, returned an out-of-range exception. Thanks to Stein Setvik for contributing to the fix. (Bug #25046364)

* The default character set and encoding were not set properly when making a connection to MySQL 5.6 and 5.7 servers configured to use the utf8 character set. (Bug #23257011)

Nuget packages are available at:

https://www.nuget.org/packages/MySql.Data/6.10.4
https://www.nuget.org/packages/MySql.Web/6.10.4
https://www.nuget.org/packages/MySql.Data.Entity/6.10.4
https://www.nuget.org/packages/MySql.Data.EntityFrameworkCore/6.10.4
https://www.nuget.org/packages/MySql.Data.EntityFrameworkCore.Design/6.10.4

Enjoy and thanks for the support!

On behalf of the MySQL Release Team,
Nawaz Nazeer Ahamed


gh-ost hooks for better visibility

In this post I will show you how to leverage Github’s gh-ost hooks for greatly improving visibility of ongoing schema changes for your organization.

One of the potential uses of gh-ost hooks is to send status updates. I will present you with some examples of how to do that via email and Slack.

What are gh-ost hooks?

gh-ost has built-in support for hooks, i.e. external scripts with predefined names, that will be called by gh-ost at certain points of a schema change operation.

The complete list is available here, but the most useful gh-ost hooks for me are:

  • gh-ost-on-startup-hook
    • called at the start of a schema change operation
  • gh-ost-on-status-hook
    • run every 10 minutes, as the tool reports the status of the operation
  • gh-ost-on-success-hook
    • executed when the operation succeeds
  • gh-ost-on-failure-hook
    • invoked only if the operation fails for some reason
  • gh-ost-on-before-cut-over-hook
    • called when the cut-over is ready – especially useful if you are doing the cut-over manually
  • gh-ost-on-begin-postponed-hook
    • invoked when the cut-over phase is postponed (for example, while a postpone flag file is in place)

Status variables

gh-ost exposes information via status variables that are passed to the hooks for reference.

The full list of status variables is available in the doc files, but I will mention the ones I find most useful:

  • GH_OST_DATABASE_NAME
    • db against which gh-ost is running the change
  • GH_OST_TABLE_NAME
    • self explanatory
  • GH_OST_DDL
    • the alter statement gh-ost is executing
  • GH_OST_ELAPSED_SECONDS
    • total runtime
Using gh-ost hooks

To work with gh-ost hooks, simply create your scripts in any directory (e.g. /user1/ghost_hooks/) using the exact names gh-ost expects.
The scripts require execute permission to be set:

chmod +x /user1/ghost_hooks/*

Finally, include the following parameter to your gh-ost command, specifying the location you created:

gh-ost --hooks-path=/user1/ghost_hooks/ ...

Integrating with email

A simple way to use gh-ost hooks is to have emails sent to people on important events.

I usually go with on-startup, on-failure and on-success hooks. Keep in mind gh-ost-on-status-hook will send an email every 10 minutes (for each gh-ost process you have) so you might want to implement some filtering rules to avoid excessive noise to your inbox if using that one.

Examples

The following scripts require the mailx package installed on the host where you run gh-ost. The host should of course be properly configured to send email (firewall rules, DNS, etc.).

Note the use of status variables as mentioned before.
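Before wiring a hook into a real migration, you can also dry-run it from a shell by exporting a few of these variables yourself (the values below are made up):

export GH_OST_DATABASE_NAME=testdb
export GH_OST_TABLE_NAME=mytable
export GH_OST_DDL="alter table mytable add column c1 int"
export GH_OST_ELAPSED_SECONDS=0
/user1/ghost_hooks/gh-ost-on-startup-hook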

gh-ost-on-startup-hook

#!/bin/bash
# Sample hook file for gh-ost-on-startup
notify_email=ivan@test.com
text="$(date) gh-ost started on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME, change statement: $GH_OST_DDL"
echo "$text" | mailx -v -s "gh-ost started on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME" $notify_email

gh-ost-on-failure-hook

#!/bin/bash
# Sample hook file for gh-ost-on-failure

notify_email=ivan@test.com

text="$(date) gh-ost failed on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME; ghost: $GH_OST_OLD_TABLE_NAME, change statement: $GH_OST_DDL, failed after $GH_OST_ELAPSED_SECONDS"

echo "$text" | mailx -v -s "gh-ost failed on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME" $notify_email

gh-ost-on-success-hook

#!/bin/bash
# Sample hook file for gh-ost-on-success

notify_email=ivan@test.com

text="$(date) gh-ost successfully completed on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME, change statement: $GH_OST_DDL. Copied $GH_OST_COPIED_ROWS rows in $GH_OST_ELAPSED_COPY_SECONDS seconds. Total runtime was $GH_OST_ELAPSED_SECONDS seconds."

echo "$text" | mailx -v -s "gh-ost successful on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME" $notify_email

gh-ost-on-status-hook

#!/bin/bash
# Sample hook file for gh-ost-on-status

notify_email=ivan@test.com

text="$(date) gh-ost running on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME, change statement: $GH_OST_DDL. Time elapsed: ${GH_OST_ELAPSED_SECONDS}. Detailed status: ${GH_OST_STATUS}"

echo "$text" | mailx -v -s "gh-ost running on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME" $notify_email

Integrating with Slack

Rather than sending email, a more elegant solution if your organization uses Slack (or a similar IM platform) is to have gh-ost post messages to a channel using incoming webhooks.

The following instructions are for Slack, but they should serve as a guideline for other IM platforms as well.

  1. Go to https://my.slack.com/services/new/incoming-webhook/
  2. Choose a channel (I recommend having a dedicated channel for gh-ost e.g. #gh-ost if possible)
  3. Click the button to create the webhook

That will lead you to the newly created webhook page, where you can see the URL which was assigned to your webhook. It will look something like this:

https://hooks.slack.com/services/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

From this page you can also customize the user this integration will post as (e.g. ghostbot) and upload a fancy custom icon.

At this point you are ready to post a test message to the webhook to validate that it works, e.g.:

curl -X POST --data-urlencode 'payload={"channel": "#gh-ost", "username": "ghostbot", "text": "Test posting to #gh-ost channel"}' https://hooks.slack.com/services/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

The last step is creating the scripts to post to the channel on gh-ost’s behalf.
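One optional refinement before we get to the examples: since every script posts to the same webhook URL, you may prefer to keep that URL and the posting logic in a single helper that each hook sources. A small sketch (the file name slack_common.sh and its location are assumptions):

#!/bin/bash
# slack_common.sh - hypothetical helper sourced by every gh-ost hook
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/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"

post_to_slack() {
    # $1 = message text
    curl -s -X POST --data-urlencode 'payload={"channel": "#gh-ost", "username": "ghostbot", "text": "'"$1"'"}' "$SLACK_WEBHOOK_URL"
}

The examples below inline the curl call in each script for clarity; sourcing a helper like this is simply one way to avoid repeating the URL in every file.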

Examples

gh-ost-on-startup-hook

#!/bin/bash
# Sample hook file for gh-ost-on-startup

text="$(date) gh-ost started on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME, change statement: $GH_OST_DDL"

curl -X POST --data-urlencode 'payload={"channel": "#gh-ost", "username": "ghostbot", "text": "'"$text"'"}' https://hooks.slack.com/services/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

gh-ost-on-status-hook

#!/bin/bash
# Sample hook file for gh-ost-on-status

text="$(date) gh-ost running on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME, change statement: $GH_OST_DDL. Time elapsed: ${GH_OST_ELAPSED_SECONDS}. Detailed status: ${GH_OST_STATUS}"

curl -X POST --data-urlencode 'payload={"channel": "#gh-ost", "username": "ghostbot", "text": "'"$text"'"}' https://hooks.slack.com/services/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

gh-ost-on-failure-hook

#!/bin/bash
# Sample hook file for gh-ost-on-failure

text="$(date) gh-ost failed on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME; ghost: $GH_OST_OLD_TABLE_NAME, change statement: $GH_OST_DDL, failed after $GH_OST_ELAPSED_SECONDS"

curl -X POST --data-urlencode 'payload={"channel": "#gh-ost", "username": "ghostbot", "text": "'"$text"'"}' https://hooks.slack.com/services/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

gh-ost-on-success-hook

#!/bin/bash
# Sample hook file for gh-ost-on-success

text="$(date) gh-ost successfully completed on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME, change statement: $GH_OST_DDL. Copied $GH_OST_COPIED_ROWS rows in $GH_OST_ELAPSED_COPY_SECONDS seconds. Total runtime was $GH_OST_ELAPSED_SECONDS seconds."

curl -X POST --data-urlencode 'payload={"channel": "#gh-ost", "username": "ghostbot", "text": "'"$text"'"}' https://hooks.slack.com/services/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

gh-ost-on-before-cut-over-hook

#!/bin/bash
# Sample hook file for gh-ost-on-before-cut-over

text="$(date) gh-ost is ready for cutover on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME; change statement: $GH_OST_DDL. Connect to the host and issue: echo 'cut-over'"

curl -X POST --data-urlencode 'payload={"channel": "#gh-ost", "username": "ghostbot", "text": "'"$text"'"}' https://hooks.slack.com/services/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

gh-ost-on-begin-postponed-hook

#!/bin/bash
# Sample hook file for gh-ost-on-begin-postponed

text="$(date) gh-ost cutover postponed on $(hostname) for $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME, change statement: $GH_OST_DDL. Migration cut-over requires user interaction to proceed; will continue keeping new table in sync in the meantime"

curl -X POST --data-urlencode 'payload={"channel": "#gh-ost", "username": "ghostbot", "text": "'"$text"'"}' https://hooks.slack.com/services/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

Final words

gh-ost is a very interesting tool for performing online schema changes, and it is quickly becoming more and more popular. This kind of well-thought-out interface for integrations is definitely one of the reasons. Using gh-ost hooks already? Have some good ones to share? Let me know in the comments section below!

Migrating to MySQL 8.0 for WordPress – episode 1

Yesterday I decided to upgrade the database server of lefred.be running the latest WordPress to MySQL 8.0.

The previous version was 5.7.19.

My dedicated server is still running on CentOS 6.x, so I just had to enable the MySQL Community 8.0 yum repository and upgrade the rpms:

# yum update mysql-community-server --enablerepo=mysql80-community

After the upgrade, I had to modify my.cnf to remove the lines related to the unlamented Query Cache:

#query_cache_type=0
#query_cache_size=0
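Since MySQL 8.0 refuses to start when unknown variables remain in the configuration, a quick check for leftovers before restarting can save you a failed startup (the config path is my setup; adjust it for yours):

# any active query_cache_* line still present will prevent mysqld 8.0 from starting
grep -i 'query_cache' /etc/my.cnf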

After this, I could start mysqld and run mysql_upgrade.
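On a CentOS 6.x box this typically amounts to something like the following (the service name and credentials are assumptions for illustration):

# start the upgraded server, then let mysql_upgrade check and fix the system tables
service mysqld start
mysql_upgrade -u root -p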

I tried to connect with the command line client and it worked perfectly.

Then, I wanted to remove the maintenance mode on lefred.be, but WordPress returned an error saying it couldn’t connect to the database.

I checked the MySQL error log: nothing wrong there (bad credentials would have been logged), and there was no error in Apache’s log either.

I decided to run WordPress in debug mode by setting the following in wp-config.php:

define('WP_DEBUG', true);

Then I could see this error message that sounded familiar:

Warning: mysqli_real_connect(): Server sent charset (255) unknown to the client. Please, report to the developers in /var/…/www/wp-includes/wp-db.php on line 1548
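For context, charset id 255 is utf8mb4_0900_ai_ci, the new default collation in MySQL 8.0, which older client libraries do not recognize. You can quickly confirm what the server announces with something like this (a sketch; adjust the credentials for your setup):

# what collation the server is configured to hand out to clients
mysql -u root -p -e "SHOW VARIABLES LIKE 'collation_server'"

# confirm which collation hides behind id 255
mysql -u root -p -e "SELECT id, collation_name FROM information_schema.collations WHERE id = 255"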

I modified my.cnf again to add the following line under [mysqld]:

collation-server = utf8mb4_general_ci

After a restart of mysqld, everything went back to normal!

The MySQL connector used on my system for PHP is php55w-mysqlnd-5.5.38-1.w6.x86_64:

Name        : php55w-mysqlnd
Arch        : x86_64
Version     : 5.5.38
Release     : 1.w6
Size        : 956 k
Repo        : installed
From repo   : webtatic
Summary     : A module for PHP applications that use MySQL databases
URL         : http://www.php.net/
License     : PHP
Description : The php-mysqlnd package contains a dynamic shared object that will add
            : MySQL database support to PHP. MySQL is an object-relational database
            : management system. PHP is an HTML-embeddable scripting language. If
            : you need MySQL support for PHP applications, you will need to install
            : this package and the php package.
            :
            : This package use the MySQL Native Driver

The next post will cover workload analysis using performance_schema and sys, to see if something can be improved.

MySQL Point in Time Recovery the Right Way

In this blog, I’ll look at how to do MySQL point in time recovery (PITR) correctly.

Sometimes we need to restore from a backup, and then replay the transactions that happened after the backup was taken. This is a common procedure in most disaster recovery plans, when for example you accidentally drop a table/database or run an update/delete without the “where” clause and lose data.

The usual way is to get a copy of your binlogs and use mysqlbinlog to replay those transactions. But this approach has many pitfalls that can make the whole PITR process a nightmare. Some examples:

  • You need to make sure to run a single mysqlbinlog command with all related binlogs and pipe them to mysql at once. Otherwise, if binlog.000001 creates a temporary table and binlog.000002 requires that temporary table, it will not be present, because each execution of the mysql client opens a new connection:

shell> mysqlbinlog binlog.000001 | mysql -u root -p # Creates tmp table X
shell> mysqlbinlog binlog.000002 | mysql -u root -p # Uses tmp table X

  • The whole replay has to be an atomic operation. If it fails halfway through, it is very difficult to know where it stopped and even harder to resume from that point forward. There are many reasons for it to fail: an InnoDB lock wait timeout or a deadlock with some concurrent transaction, the server and client having different max_allowed_packet values (resulting in a "Lost connection to MySQL server during query" error), and so on.

So how can we overcome those limitations and have a reliable way to do Point In Time Recovery?

We can restore the backup on the desired server, build a second server with just the minimal required MySQL data, and move all the binary logs to this “fake” server’s datadir. Then we configure the server where we want the PITR to happen as a slave of the fake server. From this point forward, it’s pure MySQL replication.

To illustrate it, I create a Docker container on the machine. I have Percona Server for MySQL running on the box listening on 3306, and have already restored the backup on it. There is a tarball there with all binlogs required. The saved positions for PITR are as follows:

[root@localhost ~]# cat /var/lib/mysql/xtrabackup_binlog_info
master-bin.000007    1518932

I create a folder to store the Docker MySQL datadir:

mkdir /tmp/pitr
chown -R 1001 /tmp/pitr

I start the Docker container. As we can see from xtrabackup_binlog_info, my binlogs are named master-bin, and I’ll set the same server-id as the original master:

docker run --name ps_pitr -v /tmp/pitr:/var/lib/mysql -p 3307:3306 -e MYSQL_ROOT_PASSWORD=secret -d percona/percona-server:5.7.18 --log_bin=master-bin --server-id=10

In case you want to make usage of GTID, append --gtid-mode=ON --enforce_gtid_consistency=ON to the end of the Docker command.

The command above starts a MySQL instance, invokes mysqld --initialize, sets the root password to secret, and maps its port 3306 back to my local port 3307. Now I’ll stop it, remove the binlogs that it created, uncompress and move all required binlogs to its mapped datadir folder, and start it again:

docker stop ps_pitr
rm /tmp/pitr/master-bin.*
tar -zxf binlogs.tgz -C /tmp/pitr
chown -R 1001 /tmp/pitr/master-bin.*
docker start ps_pitr

If it all worked correctly, at this point we can see the full list of binary logs on the Docker container by connecting on port 3307:

mysql -u root -psecret -P 3307 --protocol=TCP -e "SHOW BINARY LOGS"
mysql: [Warning] Using a password on the command line interface can be insecure.
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| master-bin.000005 |  26216208 |
| master-bin.000006 |  26214614 |
| master-bin.000007 |  26214902 |
. . .
| master-bin.000074 |       154 |
+-------------------+-----------+

Now, all we need to do is connect to our server, which has the backup restored, and configure it as a slave from 3307:

mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.7.18-16 Percona Server (GPL), Release 16, Revision d7301f8

Copyright (c) 2009-2017 Percona LLC and/or its affiliates
Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307, MASTER_USER='root', MASTER_PASSWORD='secret', MASTER_LOG_FILE='master-bin.000007', MASTER_LOG_POS=1518932;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> START SLAVE;
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 127.0.0.1
                  Master_User: root
                  Master_Port: 3307
                Connect_Retry: 60
              Master_Log_File: master-bin.000008
          Read_Master_Log_Pos: 449696
               Relay_Log_File: localhost-relay-bin.000002
                Relay_Log_Pos: 28957
        Relay_Master_Log_File: master-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 15217950
              Relay_Log_Space: 11476311
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 4382
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 10
                  Master_UUID: 80b9fe26-a945-11e7-aa1d-0242ac110002
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Opening tables
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.17 sec)

. . .

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 127.0.0.1
                  Master_User: root
                  Master_Port: 3307
                Connect_Retry: 60
              Master_Log_File: master-bin.000074
          Read_Master_Log_Pos: 154
               Relay_Log_File: localhost-relay-bin.000133
                Relay_Log_Pos: 381
        Relay_Master_Log_File: master-bin.000074
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 154
              Relay_Log_Space: 819
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 10
                  Master_UUID: 80b9fe26-a945-11e7-aa1d-0242ac110002
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.01 sec)

If you want to apply logs only up to a particular point in time, you can use mysqlbinlog to find the last position/GTID that should be applied, and then use START SLAVE UNTIL MASTER_LOG_FILE = 'log_name', MASTER_LOG_POS = log_pos or START SLAVE SQL_THREAD UNTIL SQL_AFTER_GTIDS = 3E11FA47-71CA-11E1-9E33-C80AA9429562:11-56.
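As a sketch, suppose the offending statement sits somewhere after position 449696 of master-bin.000008 (values borrowed from the replication output above purely for illustration); the last step would then look roughly like this:

# inspect the binlog on the "fake" server to confirm the stop position
mysqlbinlog --base64-output=decode-rows --verbose /tmp/pitr/master-bin.000008 | less

# on the restored server, replay only up to that point
mysql -u root -p -e "START SLAVE UNTIL MASTER_LOG_FILE = 'master-bin.000008', MASTER_LOG_POS = 449696"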

Special thanks to Marcos Albe, who originally showed me this MySQL point in time recovery approach.
