Planet MySQL

Reading the log positions of a snapshot consistently

MySQL 8.0.11 introduced a new performance schema table named log_status, which provides consistent information about MySQL server instance log positions from replication and transactional storage engines.

Used in conjunction with another MySQL 8.0 feature (the backup lock), this new feature allows backup tools to take backups with minimal impact on DML throughput, while ensuring a consistent snapshot with respect to GTIDs, replication, binary logs and transactional storage engine logs.…
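For example, a backup tool can read all of these positions in a single atomic query (reading this table requires the BACKUP_ADMIN privilege):

```sql
-- Atomically read the server UUID, executed GTID set, binary log
-- coordinates and storage engine log positions:
SELECT * FROM performance_schema.log_status\G
```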

Try MariaDB Server 10.3 in Docker

rasmusjohansson, Wed, 04/25/2018 - 13:07

There are times when you may want to test specific software or a specific version of software. In my case, I wanted to play with MariaDB Server 10.3.6 Release Candidate and some of the new, upcoming features. I didn’t want to have a permanent installation of it on my laptop so I chose to put it in a Docker container that I can easily copy to another place or remove. These are the steps I had to take to get it done.

I won’t go through how to install Docker itself. There is good documentation for it, which can be found here: https://docs.docker.com/install/

After the installation is completed, make sure Docker is up and running by opening a terminal and typing in a terminal window:

docker info

There are a lot of other alternatives to see that Docker is up and running, but “info” provides useful information about your Docker environment.

After Docker is set up, it’s time to create a container running MariaDB Server. The easy way to do it is to use the MariaDB Dockerfiles available on Docker Hub. These images are updated fairly quickly when a new release is made of MariaDB. It’s this easy to get MariaDB Server 10.3 RC up and running by using the Dockerfile:

docker pull mariadb:10.3
docker run --name mariadbtest -e MYSQL_ROOT_PASSWORD=mypass -d mariadb:10.3

Check that MariaDB started correctly by looking at the logs:

docker logs mariadbtest

The last row in the log will also tell you what version of MariaDB is running.

For documentation on this, refer to Installing and using MariaDB via Docker in the MariaDB documentation.

In my case, I wanted to test out the latest version of MariaDB that wasn’t yet at the time of writing available in the Dockerfile on Docker Hub. I will next go through the steps to create and populate a container without using a Dockerfile.

To get going we’ll need a new container. We need the container to be based on an operating system that is supported for MariaDB. I’ll base it off Ubuntu Xenial (16.04).

docker run -i -t ubuntu:xenial /bin/bash

When running that command, Docker will download the Ubuntu Xenial Docker image and use it as the base for the container. The /bin/bash at the end will take us into the shell of the container.

Inside the container I want to install MariaDB 10.3. I used the repository configuration tool for MariaDB to get the right configuration to add to the clean Xenial installation I now have. The tool gave me the following three commands to run.

add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://mirror.netinch.com/pub/mariadb/repo/10.3/ubuntu xenial main'
apt update
apt install mariadb-server

The last command will start installing MariaDB, which will ask for a root password for MariaDB to be defined. Once that is done and the installation finishes, we can exit from the container and save the configuration that we’ve done. The container id, which is needed as an argument for the commit command, is easily fetched from the shell prompt, root@[container id].

exit
docker commit [container id] rasmus/mariadb103

It’s pretty useful to be able to have the database data stored outside the container. This is easily done by first defining a place for the data on the host machine. In my case, I chose to put it in /dbdata in my home directory. We want to expose it as the /data directory inside the container. We start the container with this command.

docker run -v "$HOME/dbdata":/data -i -t -p 3306 rasmus/mariadb103 /bin/bash

Inside the container, let’s start the MariaDB server and run the normal installation and configuration scripts.

/usr/bin/mysqld_safe &
mysql_install_db
mysql_secure_installation

After this we can test connecting to MariaDB 10.3 and hopefully everything works.

mysql -p

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 16

Server version: 10.3.6-MariaDB-1:10.3.6+maria~xenial-log mariadb.org binary distribution

Now I want to save the configuration so far, to easily be able to start from this state whenever needed. First, I exit the MariaDB monitor and then shut down MariaDB.

exit
mysqladmin -p shutdown

Then another exit will get us out of the container and then we can save the new version of the container by running the below docker commit command in the host terminal. Again, take the container id from the shell prompt of the container.

exit
docker commit -m "mariadb 10.3.6" --author="Rasmus" [container id] rasmus/mariadb103:basic_configuration

Tadaa, done! MariaDB 10.3.6 is now available in a Docker container and I can start playing with the cool new features of MariaDB Server 10.3 like System Versioned Tables. To start the container, I just run:

docker run -v "$HOME/dbdata":/data -i -t -p 3306 rasmus/mariadb103:basic_configuration /bin/bash

 



a critical piece is missing for Oracle MySQL 8 (GA) …

Oracle MySQL 8.0 has been declared GA but a critical piece is missing … MySQL 8 is a fantastic release embedding the work of brilliant Oracle engineering. I will not detail all the great features of MySQL 8 as there are a lot of great presentations around it. https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/

One of my main concerns regarding [...]

Improve MariaDB Performance using Query Profiling

Query profiling is a useful technique for analyzing the overall performance of a database. Considering that a single mid-to-large sized application can execute numerous queries each and every second, query profiling is an important part of database tuning, both as a proactive measure and in diagnosing problems.  In fact, it can become difficult to determine the exact sources and causes of bottlenecks and sluggish performance without employing some sort of query profiling techniques. This blog will present a few useful query profiling techniques that exploit MariaDB server’s own built-in tools: the Slow Query Log and the Performance Schema.

MariaDB vs. MySQL

Needless to say, the techniques that we’ll be covering here today are likely to be equally applicable to MySQL, due to the close relationship between the two products.

The day that Oracle announced the purchase of Sun back in 2009, Michael “Monty” Widenius forked MySQL and launched MariaDB, taking a swath of MySQL developers with him in the process.  His goal was for the relational database management system (RDBMS) to remain free under the GNU GPL.

Today, MariaDB is a drop-in replacement for MySQL, one with more features and better performance.

MariaDB used to be based on the corresponding version of MySQL, where one existed. For example, MariaDB 5.1.53 was based on MySQL 5.1.53, with some added bug fixes, additional storage engines, new features, and performance improvements.  As of this writing, the latest version of MariaDB is 10.2.x. Meanwhile, MySQL 8 is still in RC (Release Candidate) mode.

The Slow Query Log

One feature shared by both MariaDB and MySQL is the slow query log.  Queries that are deemed to be slow and potentially problematic are recorded in the log.  A “slow” query is defined as a query that takes longer than the long_query_time global system variable value (10 seconds by default) to run. Microseconds are allowed for file logging, but not for table logging.

Configuring the Slow Query Log via Global System Variables

Besides the long_query_time global system variable mentioned above, there are a few other variables that determine the behavior of the slow query log.

The slow query log is disabled by default. To enable it, set the slow_query_log system variable to 1. The log_output server system variable determines how the output will be written, and can also be used to disable logging. By default, the log is written to a file, but it can also be written to a table.

Valid values for the log_output server system variable are TABLE, FILE or NONE.  The default name of the file is host_name-slow.log, but it can also be set using the --slow_query_log_file=file_name option. The table used is the slow_log table in the mysql system database.

These variables are best set in the my.cnf or mariadb.cnf configuration files, typically stored in the /etc/mysql/ directory on Linux and in the Windows System Directory, usually C:\Windows\System, on Windows.  (See Configuring MariaDB with my.cnf for all of the possible locations.)  The following settings, appended in the [mysqld] section, will:

  1. Enable the slow query log.
  2. Set time in seconds/microseconds defining a slow query.
  3. Provide the name of the slow query log file.
  4. Log queries that don’t use indexes.

[1] slow_query_log = 1
[2] long_query_time = 5
[3] slow_query_log_file = /var/log/mysql/slow-query.log
[4] log_queries_not_using_indexes

Settings will take effect after a server restart.
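If a restart is not convenient, the same settings can also be applied at runtime with SET GLOBAL; note that runtime changes are lost on the next restart unless they are also added to the configuration file:

```sql
-- Apply the equivalent settings at runtime (not persisted across restarts):
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 5;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow-query.log';
SET GLOBAL log_queries_not_using_indexes = ON;

-- Verify the current values:
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';
```

Note that a change to long_query_time made with SET GLOBAL only affects sessions opened after the change.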

Viewing the Slow Query Log

Slow query logs written to file can be viewed with any text editor.  Here are some sample contents:

# Time: 150109 11:38:55
# User@Host: root[root] @ localhost []
# Thread_id: 40  Schema: world  Last_errno: 0  Killed: 0
# Query_time: 0.012989  Lock_time: 0.000033  Rows_sent: 4079  Rows_examined: 4079  Rows_affected: 0  Rows_read: 4079
# Bytes_sent: 161085
# Stored routine: world.improved_sp_log
SET timestamp=1420803535;
SELECT * FROM City;
# User@Host: root[root] @ localhost []
# Thread_id: 40  Schema: world  Last_errno: 0  Killed: 0
# Query_time: 0.001413  Lock_time: 0.000017  Rows_sent: 4318  Rows_examined: 4318  Rows_affected: 0  Rows_read: 4318
# Bytes_sent: 194601
# Stored routine: world.improved_sp_log
SET timestamp=1420803535;

The only drawback to viewing the slow query log with a text editor is that it could (and does!) soon grow to such a size that it becomes increasingly difficult to parse through all the data.  Hence, there is a risk that problematic queries will get lost in the log contents. MariaDB offers the mysqldumpslow tool to simplify the process by summarizing the information.  The executable is bundled with MariaDB. To use it, simply run the command and pass in the log path. The resulting rows are more readable as well as grouped by query:

mysqldumpslow /tmp/slow_query.log

Reading mysql slow query log from /tmp/slow_query.log
Count: 1  Time=23.99s (24s)  Lock=0.00s (0s)  Rows_sent=1.0 (1), Rows_examined=0.0 (0), Rows_affected=0.0 (0), root[root](#)@localhost
  SELECT * from large_table
Count: 6  Time=6.83s (41s)  Lock=0.00s (0s)  Rows_sent=1.0 (6), Rows_examined=0.0 (0), Rows_affected=0.0 (0), root[root](#)@localhost
  SELECT * from another_large_table

There are various parameters that can be used with the command to help customize the output. In the next example, the top 5 queries sorted by the average query time will be displayed:

mysqldumpslow -t 5 -s at /var/log/mysql/localhost-slow.log

Working with the slow_log Table

Slow query logs written to table can be viewed by querying the slow_log table.

It contains the following fields:

Field           | Type                | Default              | Description
----------------|---------------------|----------------------|------------------------------------------
start_time      | timestamp(6)        | CURRENT_TIMESTAMP(6) | Time the query began.
user_host       | mediumtext          | NULL                 | User and host combination.
query_time      | time(6)             | NULL                 | Total time the query took to execute.
lock_time       | time(6)             | NULL                 | Total time the query was locked.
rows_sent       | int(11)             | NULL                 | Number of rows sent.
rows_examined   | int(11)             | NULL                 | Number of rows examined.
db              | varchar(512)        | NULL                 | Default database.
last_insert_id  | int(11)             | NULL                 | last_insert_id.
insert_id       | int(11)             | NULL                 | Insert id.
server_id       | int(10) unsigned    | NULL                 | The server's id.
sql_text        | mediumtext          | NULL                 | Full query.
thread_id       | bigint(21) unsigned | NULL                 | Thread id.
rows_affected   | int(11)             | NULL                 | Number of rows affected by an UPDATE or DELETE (as of MariaDB 10.1.2).

Here are some sample results of a SELECT ALL against the slow_log table:

SELECT * FROM mysql.slow_log\G
*************************** 2. row ***************************
    start_time: 2014-11-11 07:56:28.721519
     user_host: root[root] @ localhost []
    query_time: 00:00:12.000215
     lock_time: 00:00:00.000000
     rows_sent: 1
 rows_examined: 0
            db: test
last_insert_id: 0
     insert_id: 0
     server_id: 1
      sql_text: SELECT * FROM large_table
     thread_id: 74
 rows_affected: 0
...

Ordering Slow Query Log Rows

If you want to emulate the Linux “tail -100 log-slow.log” command with the slow_log table, which lists the latest queries in the end, you can issue the following query:

SELECT * FROM (SELECT * FROM slow_log ORDER BY start_time DESC LIMIT 100) sl ORDER BY start_time;

That will list the last 100 queries in the table.
Rather than typing the same SELECT statement every time you want to list the latest queries last, I would recommend creating a stored procedure, something like SHOW_LATEST_SLOW_QUERIES, and using it instead. The number of queries to show can be passed to your proc as an input parameter.
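In that spirit, here is a minimal sketch of such a procedure (the procedure name is just a suggestion, and it assumes log_output=TABLE so that mysql.slow_log is populated):

```sql
DELIMITER //

CREATE PROCEDURE SHOW_LATEST_SLOW_QUERIES(IN how_many INT)
BEGIN
  -- Take the newest rows, then re-sort ascending so the latest appear last:
  SELECT * FROM
    (SELECT * FROM mysql.slow_log
      ORDER BY start_time DESC
      LIMIT how_many) sl
  ORDER BY start_time;
END //

DELIMITER ;

-- Show the latest 100 slow queries, latest last:
CALL SHOW_LATEST_SLOW_QUERIES(100);
```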

Testing the Slow Query Log

Before attempting to scour through the slow query log in a production environment, it’s a good idea to test its operation by executing a few test queries; perhaps some that should trigger logging and others that should not.

As mentioned previously, when logging is enabled, queries that take longer than the long_query_time global system variable value to run are recorded in the slow log file or the slow_log table, based on the log_output variable’s value.

You can certainly work with your data to construct SELECT queries that take variable amounts of time to execute, but perhaps an easier approach is to employ the sleep() function:

SLEEP(duration)

The sleep() function pauses query execution for the number of seconds given by the duration argument, then returns 0.  If SLEEP() is interrupted, it returns 1. The duration may have a fractional part given in microseconds, but there is really no need for the purposes of slow log testing.  Here’s an example:

SELECT sleep(5);
+----------+
| sleep(5) |
+----------+
|        0 |
+----------+
1 row in set (5.0 sec)

Suppose that the long_query_time global system variable has not been explicitly assigned a value. In that instance it would have the default value of 10 seconds. Therefore, the following SELECT statement would be recorded to the slow log:
SELECT SLEEP(11);

Query Profiling with Performance Schema

Another tool that we can use to monitor server performance is the Performance Schema. Introduced in MariaDB 5.5, the Performance Schema is implemented as a storage engine, and so will appear in the list of storage engines available:

SHOW ENGINES;
+--------------------+---------+--------------------+--------------+------+------------+
| Engine             | Support | Comment            | Transactions | XA   | Savepoints |
+--------------------+---------+--------------------+--------------+------+------------+
| InnoDB             | DEFAULT | Default engine     | YES          | YES  | YES        |
| PERFORMANCE_SCHEMA | YES     | Performance Schema | NO           | NO   | NO         |
| ...                |         |                    |              |      |            |
+--------------------+---------+--------------------+--------------+------+------------+

It is disabled by default for performance reasons, but it can easily be enabled as follows:
First, add the following line in your my.cnf or my.ini file, in the [mysqld] section:
performance_schema=on

The performance schema cannot be activated at runtime – it must be set when the server starts, via the configuration file.

The Performance Schema storage engine contains a database called performance_schema, which in turn consists of a number of tables that can be queried with regular SQL statements for a wide range of performance information.

In order to collect data, you need to set up all consumers (starting the collection of data) and instrumentations (what data to collect).  These may be set either on server startup or at runtime.

The following statements set up consumers and instrumentations at runtime:

UPDATE performance_schema.setup_consumers SET ENABLED = 'YES';

UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES';

You can decide what to enable/disable with WHERE NAME LIKE '%what_to_enable%'; conversely, you can disable instrumentations by setting ENABLED to 'NO'.
The following enables all instrumentation of all stages (computation units) in the configuration file:

[mysqld]
performance_schema=ON
performance-schema-instrument='stage/%=ON'
performance-schema-consumer-events-stages-current=ON
performance-schema-consumer-events-stages-history=ON
performance-schema-consumer-events-stages-history-long=ON

With regards to Query Profiling:

  1. Ensure that statement and stage instrumentation is enabled by updating the setup_instruments table as follows:

UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE '%statement/%';

UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE '%stage/%';

  2. Enable the events_statements_* and events_stages_* consumers:

UPDATE performance_schema.setup_consumers SET ENABLED = 'YES'
WHERE NAME LIKE '%events_statements_%';

UPDATE performance_schema.setup_consumers SET ENABLED = 'YES'
WHERE NAME LIKE '%events_stages_%';

Once you’ve narrowed down what you are interested in, there are two ways to start monitoring:

  1. View raw data in the summary views.
    This gives you an overall picture of usage on the instance.
  2. Snapshot data, and compute deltas over time.
    This gives you an idea of the rates of changes for events.

Let’s start with viewing raw summary data:

  1. Run the statement(s) that you want to profile. For example:
SELECT * FROM acme.employees WHERE emp_no = 99;
+--------+------------+------------+-----------+--------+------------+
| emp_no | birth_date | first_name | last_name | gender | hire_date  |
+--------+------------+------------+-----------+--------+------------+
|     99 | 1979-01-29 | Bill       | Bixby     | M      | 2006-06-05 |
+--------+------------+------------+-----------+--------+------------+

  2. Identify the EVENT_ID of the statement by querying the events_statements_history_long table. This step is similar to running SHOW PROFILES to identify the Query_ID. The following query produces output similar to SHOW PROFILES:

SELECT EVENT_ID, TRUNCATE(TIMER_WAIT/1000000000000,6) as Duration, SQL_TEXT
FROM performance_schema.events_statements_history_long
WHERE SQL_TEXT like '%99%';
+----------+----------+------------------------------------------------+
| event_id | duration | sql_text                                       |
+----------+----------+------------------------------------------------+
|       22 | 0.021470 | SELECT * FROM acme.employees WHERE emp_no = 99 |
+----------+----------+------------------------------------------------+

  3. Query the events_stages_history_long table to retrieve the statement’s stage events. Stages are linked to statements using event nesting. Each stage event record has a NESTING_EVENT_ID column that contains the EVENT_ID of the parent statement.

SELECT event_name AS Stage, TRUNCATE(TIMER_WAIT/1000000000000,6) AS Duration
FROM performance_schema.events_stages_history_long
WHERE NESTING_EVENT_ID=22;
+--------------------------------+----------+
| Stage                          | Duration |
+--------------------------------+----------+
| stage/sql/starting             | 0.000080 |
| stage/sql/checking permissions | 0.000005 |
| stage/sql/Opening tables       | 0.027759 |
| stage/sql/init                 | 0.000052 |
| stage/sql/System lock          | 0.000009 |
| stage/sql/optimizing           | 0.000006 |
| stage/sql/statistics           | 0.000082 |
| stage/sql/preparing            | 0.000008 |
| stage/sql/executing            | 0.000000 |
| stage/sql/Sending data         | 0.000017 |
| stage/sql/end                  | 0.000001 |
| stage/sql/query end            | 0.000004 |
| stage/sql/closing tables       | 0.000006 |
| stage/sql/freeing items        | 0.000272 |
| stage/sql/cleaning up          | 0.000001 |
+--------------------------------+----------+

Conclusion

This blog presented a few useful query profiling techniques that employ a couple of MariaDB server’s built-in tools: the Slow Query Log and the Performance Schema.
The Slow Query Log records queries that are deemed to be slow and potentially problematic, that is, queries that take longer than the long_query_time global system variable value to run.
The slow query log may be viewed with any text editor. Alternatively, MariaDB’s mysqldumpslow tool can simplify the process by summarizing the information. The resulting rows are more readable as well as grouped by query.
The Performance Schema is a storage engine that contains a database called performance_schema, which in turn consists of a number of tables that can be queried with regular SQL statements for a wide range of performance information. It may be utilized to view raw data in the summary views as well as review performance over time.

The post Improve MariaDB Performance using Query Profiling appeared first on Monyog Blog.

List of Conferences & Events w/ MySQL, April - June 2018! - continued

As an update to the blog posted on April 4, 2018 we would like to update the list of events where you can find MySQL. Please see the four new conferences below: 

  • DevTalks, Cluj-Napoca, Romania, May 16, 2018
    • MySQL became a customized sponsor of this show. We will have a MySQL keynote given by Georgi Kodinov, the MySQL Senior SW Development Manager. We are still working on the topic; please watch the organizers’ website for further updates.
  • SyntaxCon,  Charleston, SC, US, June 6-8, 2018  
    • The MySQL Community team is going to be a Bronze sponsor of the SyntaxCon conference. This time we are going without a booth, but with an already approved MySQL talk. Please find the talk in the schedule: David Stokes, the MySQL Community Manager, will be talking about “MySQL 8 - A New Beginning”. The talk is scheduled for Thursday, June 7 @ 1:15pm.
  • PyCon Thailand, Bangkok, Thailand, June 16-17, 2018
    • MySQL is going to support & attend this conference. This time we are going without a booth, but with "on site" staffing by Ronen Baram, the Principal Sales Consultant. Ronen also submitted a MySQL talk and we hope it will be approved. Please watch the organizers’ website for further updates.
  • DataOps Barcelona, Spain, June 21-22, 2018
    • We are happy to announce that the MySQL Community team is going to be a Community sponsor of DataOps Barcelona. The MySQL Community Manager, Fred Descamps, will be talking about MySQL 8.0, Cluster & Document Store. Please do not miss his and other well-known MySQL speakers’ talks, which will be announced in the schedule section of the conference website.

Please be aware that the list may not be final; over time more events could be added or some of them removed. We will keep you informed!

 

 

Percona Live 2018 Sessions: Ghostferry – the Swiss Army Knife of Live Data Migrations with Minimum Downtime

In this blog post on Percona Live 2018 sessions, we’ll talk with Shuhao Wu, Software Developer for Shopify, Inc., about how Ghostferry is the Swiss Army knife of live data migrations.

Existing tools like mysqldump and replication cannot migrate data between GTID-enabled MySQL and non-GTID-enabled MySQL – a common configuration across multiple cloud providers that cannot be changed. These tools are also cumbersome to operate and error-prone, thus requiring a DBA’s attention for each data migration. Shopify’s team introduced a tool that allows for easy migration of data between MySQL databases with constant downtime on the order of seconds.

Inspired by gh-ost, their tool is named Ghostferry and allows application developers at Shopify to migrate data without assistance from DBAs. It has been used to rebalance sharded data across databases. They open sourced Ghostferry at the Percona Live 2018 conference so that anyone can migrate their own data with minimal hassle and downtime. Since Shopify wrote Ghostferry as a library, you can use it to build specialized data movers that move arbitrary subsets of data from one database to another.

Shuhao walked through what data migration is, how it works, and how Ghostferry works to make this process simpler and standard across platforms – especially in systems (like cloud providers such as AWS or Google) where you don’t have control of the instances. Ghostferry also simplifies the replication process and allows someone to copy across instances with a single Ghostferry command, rather than having to understand both the source and target instances.

After the Percona Live 2018 sessions talk, I had a chance to speak with Shuhao about Ghostferry. Check it out below.

The post Percona Live 2018 Sessions: Ghostferry – the Swiss Army Knife of Live Data Migrations with Minimum Downtime appeared first on Percona Database Performance Blog.

Percona Live 2018 Sessions: Microsoft Built MySQL, PostgreSQL and MariaDB for the Cloud

In this blog post on Percona Live 2018 sessions, we’ll talk with Jun Su, Principal Engineering Manager at Microsoft about how Microsoft built MySQL, PostgreSQL and MariaDB for the cloud.

Offering MySQL, PostgreSQL and MariaDB database services in the cloud is different than doing so on-premise. Latency, connection redirection, optimal performance configuration are just a few challenges. In this session, Jun Su walked us through Microsoft’s journey to not only offer these popular OSS RDBMS in Microsoft Azure, but how they are implemented in Azure as a true DBaaS. We learned about Microsoft’s Azure Database Services platform architecture, and how these services are built to scale.

In Azure, database engine instances are services managed by the Azure Service Fabric, which is a platform for reliable, hyperscale, microservice-based applications. So each database engine gets treated as a microservice. When coupled with Azure’s clustering — a set of machines that the Service Fabric stitches together — you can scale up to 1000+ machines. This provides some pretty impressive scaling opportunities. Jun also walked through some of the issues with multi-tenancy, and how different levels of multi-tenancy have different trade-offs in cost, capacity and density.

After the talk, I spoke briefly with Jun about Microsoft’s efforts to provide the different open source databases on the Azure platform.

The post Percona Live 2018 Sessions: Microsoft Built MySQL, PostgreSQL and MariaDB for the Cloud appeared first on Percona Database Performance Blog.

One Giant Leap For SQL: MySQL 8.0 Released


“Still using SQL-92?” is the opening question of my “Modern SQL” presentation. When I ask this question, an astonishingly large portion of the audience openly admits to using 25-year-old technology. If I ask who is still using Windows 3.1, which was also released in 1992, only a few raise their hand…but they’re joking, of course.

Clearly this comparison is not entirely fair. It nevertheless demonstrates that the know-how surrounding newer SQL standards is pretty lacking. There were actually five updates since SQL-92—many developers have never heard of them. The latest version is SQL:2016.

As a consequence, many developers don’t know that SQL hasn’t been limited to the relational algebra or the relational model since 1999. SQL:1999 introduced operations that don't exist in relational algebra (with recursive, lateral) and types (arrays!) that break the traditional interpretation of the first normal form.

Since then, for 19 years now, whether or not a SQL feature fits the relational idea hasn’t been important. What is important is that a feature has well-defined semantics and solves a real problem. The academic approach has given way to a pragmatic one. Today, the SQL standard has a practical solution for almost every data processing problem. Some of them stay within the relational domain, while others do not.

Resolution

Don’t say relational database when referring to SQL databases. SQL is really more than just relational.

It’s really too bad that many developers still use SQL in the same way it was being used 25 years ago. I believe the main reasons are a lack of knowledge and interest among developers, along with poor support for modern SQL in database products.

Let’s have a look at this argument in the context of MySQL. Considering its market share, I think that MySQL’s lack of modern SQL has contributed more than its fair share to this unfortunate situation. I once touched on that argument in my 2013 blog post “MySQL is as Bad for SQL as MongoDB is to NoSQL”. The key message was that “MongoDB is a popular, yet poor representative of its species—just like MySQL is”. Joe Celko has expressed his opinion about MySQL differently: “MySQL is not SQL, it merely borrows the keywords from SQL”.

You can see some examples of the questionable interpretation of SQL in the MySQL WAT talk on YouTube. Note that this video is from 2012 and uses MySQL 5.5 (the current GA version at that time). Since then, MySQL 5.6 and 5.7 came out, which improved the situation substantially. The default settings on a fresh installation are much better now.

It is particularly nice that they were really thinking about how to mitigate the effects of changing defaults. When they enabled ONLY_FULL_GROUP_BY by default, for example, they went the extra mile to implement the most complete functional dependencies checking among the major SQL databases.

About the same time MySQL 5.7 was released, I stopped bashing MySQL. Of course I'm kidding. I'm still bashing MySQL occasionally…but it has become a bit harder since then.

By the way, did you know MySQL still doesn’t support check constraints? Just as in previous versions, you can use check constraints in the create table statement but they are silently ignored. Yes—ignored without warning. Even MariaDB fixed that a year ago.
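You can verify this yourself: at the time of writing, the constraint below is parsed but not enforced by MySQL, while MariaDB 10.2.1+ rejects the insert (table and constraint names are illustrative):

```sql
CREATE TABLE t (
  a INT,
  CONSTRAINT a_positive CHECK (a > 0)
);

-- Succeeds on MySQL 8.0 (the check constraint is silently ignored);
-- fails with a constraint violation on MariaDB 10.2.1 and later:
INSERT INTO t VALUES (-5);
```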

Uhm, I’m bashing again! Sorry—old habits die hard.

Nevertheless, the development philosophy of MySQL has visibly changed over the last few releases. What happened? You know the answer already: MySQL has been under new management since Oracle acquired it along with Sun. I must admit: it might have been the best thing that happened to SQL in the past 10 years, and I really mean SQL—not MySQL.

The reason I think a single database release has a dramatic effect on the entire SQL ecosystem is simple: MySQL is the weakest link in the chain. If you strengthen that link, the entire chain becomes stronger. Let me elaborate.

MySQL is very popular. According to db-engines.com, it’s the second most popular SQL database overall. More importantly: it is, by a huge margin, the most popular free SQL database. This has a big effect on anyone who has to cope with more than one specific SQL database. These are often software vendors that make products like content management systems (CMSs), e-commerce software, or object-relational mappers (ORMs). Due to its immense popularity, such vendors often need to support MySQL. Only a few of them bite the bullet and truly support multiple databases—Java Object Oriented Querying (jOOQ) really stands out in this regard. Many vendors just limit themselves to the most commonly supported SQL dialect, i.e. MySQL.

Another important group affected by MySQL’s omnipresence are people learning SQL. They can reasonably assume that the most popular free SQL database is a good foundation for learning. What they don't know is that MySQL limits their SQL-foo to the weakest SQL dialect among those being widely used. Based loosely on Joe Celko’s statement: these people know the keywords, but don’t understand their real meaning. Worse still, they have not heard anything about modern SQL features.

Last week, that all changed when Oracle finally published a generally available (GA) release of MySQL 8.0. This is a landmark release as MySQL eventually evolved beyond SQL-92 and the purely relational dogma. Among a few other standard SQL features, MySQL now supports window functions (over) and common table expressions (with). Without a doubt, these are the two most important post-SQL-92 features.

The days are numbered in which software vendors claim they cannot use these features because MySQL doesn't support them. Window functions and CTEs are now in the documentation of the most popular free SQL database. Let me therefore boldly claim: MySQL 8.0 is one small step for a database, one giant leap for SQL.

It gets even better and the future is bright! As a consequence of Oracle getting its hands on MySQL, some of the original MySQL team (among them the original creator) created the MySQL fork MariaDB. Apparently, their strategy is to add many new features to convince MySQL users to consider their competing product. Personally I think they sacrifice quality—very much like they did before with MySQL—but that’s another story. Here it is more relevant that MariaDB has been validating check constraints for a year now. That raises a question: how much longer can MySQL afford to ignore check constraints? Or to put it another way, how much longer can they endure my bashing ;)

Besides check constraints, MariaDB 10.2 also introduced window functions and common table expressions (CTEs). At that time, MySQL had a beta with CTEs but no window functions. MariaDB is moving faster.

In 10.3, MariaDB is set to release “system versioned tables”. In a nutshell: once activated for a table, system versioning keeps old versions for updated and deleted rows. By default, queries return the current version as usual, but you can use a special syntax (as of) to get older versions. You can read more about this in MariaDB’s announcement.

System versioning was introduced into the SQL standard in 2011. As it looks now, MariaDB will be the first free SQL database supporting it. I hope this is an incentive for other vendors—and also for users asking their vendors to support more modern SQL features!

Now that the adoption of modern SQL has finally gained some traction, there is only one problem left: the gory details. The features defined by the standard have many subfeatures, and due to their sheer number, it is common practice to support only some of them. That means it is not enough to say that a database supports window functions. Which window functions does it actually support? Which frame units (rows, range, groups)? The answers to these questions make all the difference between a marketing gag and a powerful feature.

In my mission to make modern SQL more accessible to developers, I’m testing these details so I can highlight the differences between products. The results of these tests are shown in matrices like the ones above. The rest of this article will thus briefly go through the new standard SQL features introduced with MySQL 8.0 and discuss some implementation differences. As you will see, MySQL 8.0 is pretty good in this regard. The notable exception is its JSON functionality.

Window Functions

There is SQL before window functions and SQL after window functions. Without exaggeration, window functions are a game changer. Once you understand window functions, you cannot imagine how you ever lived without them. The most common use cases, for example finding the best N rows per group, building running totals or moving averages, and grouping consecutive events, are just the tip of the iceberg. Window functions are one of the most important tools to avoid self-joins. That alone makes many queries less redundant and much faster. Window functions are so powerful that even newcomers like several Apache SQL implementations (Hive, Impala, Spark), NuoDB and Google BigQuery introduced them years ago. It’s really fair to say that MySQL is pretty late to this party.
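The “best N rows per group” case mentioned above can be sketched with the standard over clause. The query below should run unchanged on MySQL 8.0; for a self-contained demo it is shown here through Python's SQLite driver (SQLite ≥ 3.25 also supports window functions), with an invented `sales` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 10), ("north", 30), ("north", 20),
                  ("south", 5), ("south", 50)])

# Best row per region: number the rows inside each partition by descending
# amount, then keep only rank 1 -- no self-join needed.
top_per_region = conn.execute("""
    SELECT region, amount
    FROM (
        SELECT region, amount,
               ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rn
        FROM sales
    ) AS ranked
    WHERE rn = 1
    ORDER BY region
""").fetchall()

print(top_per_region)  # [('north', 30), ('south', 50)]
```

Before window functions, the same result required joining the table against a grouped copy of itself.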

The following matrix shows the support of the over clause for some major SQL databases. As you can see, MySQL’s implementation actually exceeds the capabilities of “the world’s most advanced open source relational database”, as PostgreSQL claims on its new homepage. However, PostgreSQL 11 is set to recapture the leader position in this area.

The actual set of window functions offered by MySQL 8.0 is also pretty close to the state of the art:

Common Table Expressions (with [recursive])

The next major enhancement in MySQL 8.0 is common table expressions, or the with [recursive] clause. Important use cases are traversing graphs with a single query, generating an arbitrary number of rows, converting CSV strings to rows (reversed listagg / group_concat), or just literate SQL.
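The “generating an arbitrary number of rows” use case can be sketched with a recursive CTE. The `WITH RECURSIVE` syntax below is standard SQL and should work the same in MySQL 8.0; it is demonstrated here through Python's SQLite driver so it can run standalone:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Generate ten rows out of thin air: the anchor SELECT produces row 1,
# the recursive SELECT keeps adding n + 1 until the WHERE stops it.
rows = conn.execute("""
    WITH RECURSIVE seq (n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < 10
    )
    SELECT n FROM seq
""").fetchall()

print([n for (n,) in rows])  # the integers 1 through 10
```

The same shape, with a join in the recursive member, is what traverses hierarchies and graphs in a single query.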

Again, MySQL’s first implementation closes the gap.

Other Standard SQL Features

Besides window functions and the with clause, MySQL 8.0 also introduces some other standard SQL features. However, compared to the previous two, these are by no means killer features.

As you can see, Oracle pushes standard SQL JSON support. The Oracle database and MySQL are currently the leaders in this area (and both are from the same vendor!). The json_objectagg and json_arrayagg functions were even backported to MySQL 5.7.22. However, it’s also notable that MySQL doesn’t follow the standard syntax for these two functions. Modifiers defined in the standard (e.g. an order by clause) are generally not supported. Json_objectagg neither recognizes the keywords key and value nor accepts the colon (:) to separate attribute names and values. It looks like MySQL parses these as regular function calls—as opposed to the syntax described by the standard.
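To make the idea of JSON aggregation concrete: MySQL spells these functions json_arrayagg and json_objectagg, while SQLite's JSON1 extension spells the analogous function json_group_array. The sketch below uses SQLite (via Python) purely to illustrate what aggregating rows into a JSON array does; the `tags` table is invented, and the exact function name differs per product:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (post_id INTEGER, tag TEXT)")
conn.executemany("INSERT INTO tags VALUES (?, ?)",
                 [(1, "sql"), (1, "mysql"), (2, "docker")])

# One JSON array per post -- MySQL 8.0 would write json_arrayagg(tag) here.
result = conn.execute("""
    SELECT post_id, json_group_array(tag)
    FROM tags
    GROUP BY post_id
    ORDER BY post_id
""").fetchall()

for post_id, tags_json in result:
    print(post_id, json.loads(tags_json))
```

Note that neither SQLite nor MySQL accepts the standard's modifiers (such as an order by clause inside the aggregate) here.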

It’s also interesting to see that json_arrayagg handles null values incorrectly, very much like the Oracle database (they don’t default to absent on null). Seeing the same issue in two supposedly unrelated products is always interesting. Adding the fact that both products come from the same vendor adds another twist.

The last two features in the list, the grouping function (related to rollup) and column names in the from clause, are solutions to pretty specific problems. Their MySQL 8.0 implementation is basically on par with that of other databases.

Furthermore, MySQL 8.0 also introduced standard SQL roles. The reason this is not listed in the matrix above is simple: the matrices are based on actual tests I run against all these databases. My homegrown testing framework does not yet support test cases that require multiple users—currently all tests are run with a default user, so I cannot test access rights yet. However, the time for that will come—stay tuned.

Other Notable Enhancements

I'd like to close this article with MySQL 8.0 fixes and improvements that are not related to the SQL standard.

One of them is about using the desc modifier in index declarations:

CREATE INDEX … ON … (<column> [ASC|DESC], …)

Most—if not all—databases use the same logic for index creation as for the order by clause, i.e. by default the order of column values is ascending. Sometimes some index columns need to be sorted in the opposite direction; that’s when you specify desc in an index. Here’s what the MySQL 5.7 documentation said about this:

An index_col_name specification can end with ASC or DESC. These keywords are permitted for future extensions for specifying ascending or descending index value storage. Currently, they are parsed but ignored; index values are always stored in ascending order.

“They are parsed but ignored”? To be more specific: they are parsed but ignored without warning, very much like the check constraints mentioned above.

However, this has been fixed with MySQL 8.0. Now there is a warning. Just kidding! Desc is honored now.
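A mixed-direction index can be sketched as follows. SQLite has honored desc index columns for a long time, so the demo below runs through Python's SQLite driver; the `events` table and index name are invented, and the same `CREATE INDEX` syntax is what MySQL 8.0 now honors instead of parsing and ignoring:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (day INTEGER, priority INTEGER)")

# Mixed-direction index: ascending day, descending priority.
conn.execute("CREATE INDEX idx_events ON events (day ASC, priority DESC)")

conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, 5), (1, 9), (2, 1), (2, 7)])

# An ORDER BY that matches the index directions can be served straight
# from the index, without an extra sort step.
rows = conn.execute(
    "SELECT day, priority FROM events ORDER BY day ASC, priority DESC"
).fetchall()
print(rows)  # [(1, 9), (1, 5), (2, 7), (2, 1)]
```

On an engine that ignores desc, the index can still be used, but the mixed-direction sort needs an extra pass.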

There are many other improvements in MySQL 8.0. Please refer to “What’s New in MySQL 8.0?” for a great overview. How about a small appetizer:

“One Giant Leap For SQL: MySQL 8.0 Released” by Markus Winand was originally published at modern SQL.

MySQL Community Awards 2018: the Winners

The MySQL Community Awards initiative is an effort to acknowledge and thank individuals and corporations for their contributions to the MySQL ecosystem. It is a from-the-community, by-the-community, and for-the-community effort. The committee is composed of an independent group of community members of different orientation and opinion, themselves past winners or known contributors to the community.

The 2018 community awards were presented on April 23, 2018, during the Welcome Reception at the Percona Live conference. The winners are:

MySQL Community Awards: Community Contributor of the year 2018

  • Jean-François Gagné
    Jean-François was nominated for his many blog posts, bug reports, and experiment results that make MySQL much better. Here is his blog: https://jfg-mysql.blogspot.com/
  • Sveta Smirnova
    Sveta spreads knowledge and good practice on all things MySQL as a frequent speaker and blogger. Her years of experience in testing, support, and consulting are shared in webinars, technical posts, conferences around the world and in her book “MySQL Troubleshooting”.

MySQL Community Awards: Application of the year 2018

  • MyRocks
    MyRocks is now in MariaDB, Percona Server and PolarDB (Alibaba). Intel, MariaDB and Rockset are optimizing it for cloud native storage. It will soon be available for MySQL 8.x. As a bonus, they have engaged with academia to explain how to make tuning easier.
  • ProxySQL
    ProxySQL made an appearance at the awards for the second year in a row, this time for the product as a whole rather than just its creator, because it is just so popular. It solves serious, real-world problems experienced by DBAs in an elegant way. Here is what some of the nominations said: “ProxySQL is a great alternative for MaxScale and MySQL Proxy. Many companies are using it in production. It is simple, easy to configure, and works reliably!” “great results, frequent updates and support from a committed team”
  • Vitess
    Vitess is a database clustering system for horizontal scaling of MySQL. Originally developed at YouTube/Google and now under the CNCF, Vitess is free and open source and aims to solve scaling problems for MySQL installations. Vitess enjoys an active contributing community, runs in production at companies such as Square and Slack, and is being evaluated and adopted by many others. Vitess has a public, active and welcoming Slack channel. “Vitess helps users scale MySQL by making sharding painless. It minimizes changes needed on the application side, and at the same time simplifies the management of shards.”

MySQL Community Awards: Corporate Contributor of the year 2018

  • Alibaba Cloud
    Alibaba Cloud authors AliSQL, a free and open source MySQL branch. They take patches from various contributors and provide Alibaba Cloud’s own at-scale patches for the MySQL server, which are innovative and disruptive.
    Alibaba Cloud has also made an investment in MariaDB that will no doubt help keep the MySQL ecosystem competitive and thriving.

Committee:
Alexey Kopytov
Baron Schwartz
Bill Karwin
Colin Charles
Daniël van Eeden
Eric Herman
Frédéric Descamps
Justin Swanhart
Mark Leith
Santiago Lertora
Shlomi Noach
Simon Mudd

Co-secretaries:
Agustín Gallego
Emily Slocombe

Thank you to this year’s anonymous sponsor for donating the goblets. Thank you to Colin Charles for acquiring and transporting the goblets. Thank you to Santiago Lertora for our website!

Here are some more photos from leFred Descamps during the event:

 

Percona Live 2018 Sessions: Query Optimizer – MySQL vs. PostgreSQL

In this blog post on Percona Live 2018 sessions, we’ll talk with Christian Antognini, Senior Principal Consultant at Trivadis about the differences between MySQL and PostgreSQL query optimizers.

MySQL and PostgreSQL are two of the most popular open-source relational databases. Why would you pick one over the other to support your applications? Of course, it depends on the use case, environment and workload. To help with choosing between them, the people at Trivadis ran a comparison of their query optimizers. The aim of this session was to summarize the outcome of the comparison. Specifically, to point out optimizer-related strengths and weaknesses.

Christian spent a lot of time looking at the differences in indexing with regard to sorts, keys and partitioning, as well as joins and merges.

Both engines have good configuration capabilities, metadata use, and indexing capabilities. MySQL has better hints, while PostgreSQL wins on object statistics and joins.

After the lecture, I had a chance to speak with Christian about the differences between the query optimizers for MySQL and PostgreSQL. Check it out below.

The post Percona Live 2018 Sessions: Query Optimizer – MySQL vs. PostgreSQL appeared first on Percona Database Performance Blog.

Percona Live 2018 Sessions: MySQL at Twitter

In this Percona Live 2018 blog, we’ll talk with Ronald Francisco, SRE of Database Infrastructure at Twitter about why they moved from a fork of MySQL to MySQL 5.7.

We already started today with a great set of keynote sessions, and now the breakout sessions have begun in earnest. I’ve been looking in on the talks and stopping to talk with some of the presenters.

In this session, Ronald Ramon Francisco (Twitter Inc), SRE, Database Infrastructure, presented the motivation for moving from their fork of MySQL to MySQL proper, and why they decided to do it. Twitter has been using its own fork of MySQL for many years. Last year the team decided to migrate to the community version of MySQL 5.7 and abandon their own version. The road to the community version was full of challenges.

He also discussed the challenges and surprises encountered and how they overcame them. Finally, he looked at lessons learned, recommendations and their future plans.

I got a chance to speak with Ronald after his talk, and ask a few questions.

Check it out below.

The post Percona Live 2018 Sessions: MySQL at Twitter appeared first on Percona Database Performance Blog.

MySQL Performance : over 1.8M QPS with 8.0 GA on 2S Skylake !

Last year we already published our over 2.1M QPS record with MySQL 8.0 -- it was not yet GA at that moment, and the result was obtained on a server with 4 CPU Sockets (4S) Intel Broadwell v4. We did not plan any improvements in 8.0 for RO-related workloads, and the main target of this test was to ensure there are NO regressions in the results (yet) compared to MySQL 5.7 (where the main RO improvements were delivered). For MySQL 8.0 we mostly focused our efforts on the lagging WRITE performance in MySQL/InnoDB, and our "target HW" was 2-CPU-Socket servers (2S) -- probably the most widely used HW configuration for today's MySQL Server deployments..

However, not only SW, but also HW is progressing quickly these days ! -- and one of my biggest surprises last time was the Intel Skylake CPU ;-)) -- the following graph reflects the difference between two similar 2S servers, where one has the "old" 44cores-HT Broadwell v4 and the other the "new" 48cores-HT Skylake CPUs :


the difference is really impressive, especially when you see that already at a 32 users load (when CPU is not saturated at all, neither on 44cores nor on 48cores) there is a 50% gain with Skylake ! (and this is about pure "response time"), and on peak QPS level it's over 1.8M QPS (not far from an 80% gain over Broadwell)..

And this result marks our next milestone in MySQL RO performance on 2S HW ! ;-))

Sysbench RO Point-Selects 10Mx8-tables latin1 on 2S 48cores-HT Skylake

As already mentioned, the main gain is coming from the MySQL 5.7 changes, and we're probably just a little bit lucky here to see MySQL 8.0 slightly better than 5.7 ;-)) (while as you can see from the chart, it was also a good reason for MariaDB to move to InnoDB 5.7 to get similar gains compared to InnoDB 5.6)..

So, as expected, there is no difference on mixed OLTP_RO :

Sysbench OLTP_RO 10Mx8-tables latin1 on 2S 48cores-HT Skylake

and what is amazing here is that on 2S Skylake we're now reaching the 1M QPS result that we obtained in the past with MySQL 5.7 on the same 4S Broadwell v4 box (which was not yet upgraded to 96cores at that time).. And it makes me smile now to recall all the discussions with our users mentioning "they will never use anything bigger than a 2S server" -- and here we are ! -- it's exactly a 2S, 48cores "only" box, hitting ALL the problems we already investigated on bigger HW and fixed ON TIME ! ;-)) -- well, there is still a lot of work ahead, so let's hope we'll be "always on time", let's see..

The most "sensitive" RO workload for me was always "distinct ranges" in Sysbench, as it points at issues anyone can hit with "group by" or "distinct" queries.. -- these involve the internal temp table code, and this code was completely rewritten in 8.0 "from scratch" to finally be more simple, "maintainable" and more scalable. But this does not always mean "as efficient as before" (it could probably be most efficient if rewritten in assembler, but again -- who would want to maintain that ? ;-))

Sysbench RO Distinct-Ranges 10Mx8-tables latin1 on 2S 48cores-HT Skylake

so far, MySQL 8.0 is doing mostly the same as 5.7 here, and that's really good keeping in mind the impact of the changes.. (further improvements may still come, and many old "broken" cases are solved now, etc.)

(but I have nothing to say about MariaDB 10.3.5 here)

well, this was about "latin1" RO results, stay tuned, there is more to come ;-))

And Thank You for using MySQL !

P.S. an "attentive reader" may ask himself -- if over 1.8M QPS is reached on 2S Skylake, what QPS level will then be matched on 4S Skylake ??? -- and I'd tell you : we don't care ;-)) we know the result will be higher, but we're NOT running after numbers, and there are other, more important problems waiting for us to be resolved, rather than showing you "yet more good numbers" ;-))

Rgds,
-Dimitri

Percona Live 2018 Keynotes, Day One

Welcome to Percona Live 2018 keynotes, day one!

Percona Live 2018 is up and running! We call this day one, but in reality, yesterday was filled with tutorials that provided excellent and practical information on how to get your MySQL, MongoDB, MariaDB and PostgreSQL environments up, running and optimized.

Today we started with keynote presentations from Percona, a technology panel, Oracle and Netflix. You can view the recording of today’s keynotes here.

Percona Welcome

Laurie Coffin (Percona)

Laurie Coffin is Percona’s CMO, and she welcomed everyone to Percona Live Open Source Database Conference 2018 and announced some important items, like downloading the Percona app.

Make sure you download the app to use for the rest of the conference.

Open Source for the Modern Business

Peter Zaitsev (Percona)

As open source database adoption continues to grow in enterprise organizations, the expectations and definitions of what constitutes success continue to change. In today’s environment, it’s no longer a question of which database to use, but which databases do you need, what platforms will you deploy them on, and how do you get them to work together. A single technology for everything is no longer an option; welcome to the polyglot world.

Percona sees a lot of compelling open source projects and trends that interest the community. Peter also announced the beginning of Percona’s PostgreSQL support plans, as well as partnerships with Mesosphere and Microsoft. This makes Percona the only company that supports three of the major cloud providers and four of the major open source database platforms.

Cool Technologies Showcase

Nikolay Samokhvalov (Nombox), Sugu Sougoumarane (PlanetScale Data), Shuhao Wu (Shopify Inc.), Andy Pavlo (Carnegie Mellon University)

In this series of quick talks, we were treated to different perspectives on emerging database technologies:

  • Automatization of Postgres Administration. Cloud services like Amazon RDS or Google Cloud SQL help to automate half of DBA tasks: launching database instances, provisioning replicas, creating backups. But for the most part, database tuning and query optimization aren’t automated. The purpose of automation should be to detect, predict and ultimately prevent database issues.
  • High Performance, Scalable, and Available MySQL Clustering System for the Cloud. Multiple companies now use Vitess in production. Vitess shines in this area by providing query logs, transaction logs, information URLs, and status variables that can feed into a monitoring system like Prometheus. Vitess won a MySQL community award last night at the Community Reception.
  • Ghostferry: the Swiss Army Knife of Live Data Migrations with Minimum Downtime. Inspired by gh-ost, Ghostferry allows application developers at Shopify to migrate data without assistance from DBAs. They plan to open source Ghostferry at the conference so that anyone can migrate their own data with minimal hassle and downtime.
  • What is a Self-Driving Database Management System? People are touting the rise of “self-driving” database management systems (DBMSs). But nobody has clearly defined what it means for a DBMS to be self-driving. Thus, in this keynote, Andy provides the history of autonomous databases and what is needed to make a true self-driving DBMS. Along with a new way to measure how close you are to academic tenure.

State of the Dolphin 8.0

Tomas Ulin (Oracle)

Oracle just announced the availability of MySQL 8.0 GA. Today, Tomas Ulin talked about the focus, strategy, investments and innovations that allow MySQL to provide next-generation Web, mobile, cloud and embedded applications. He also discussed features, fixes and changes in the latest and most significant MySQL database release in its history: MySQL 8.0.

Linux Performance 2018

Brendan Gregg (Netflix)

At over one thousand code commits per week, it’s hard to keep up with Linux developments. This keynote will summarize recent Linux performance features, for a wide audience: the KPTI patches for Meltdown, eBPF for performance observability, Kyber for disk I/O scheduling, BBR for TCP congestion control, and more. This is about exposure: knowing what exists, so you can learn and use it later when needed. Get the most out of your systems, whether they are databases or application servers, with the latest Linux kernels and exciting features.

All the keynotes today highlighted the many different aspects of the open source database community that come together to solve database challenges. Percona Live runs through Wednesday 4/25. Check out tomorrow’s keynotes here, as well as the numerous breakout sessions with top open source database experts.

The post Percona Live 2018 Keynotes, Day One appeared first on Percona Database Performance Blog.

Congratulations to Oracle on MySQL 8.0


Last week, Oracle announced the general availability of MySQL 8.0. This is good news for database users, as it means Oracle is still developing MySQL.


I decided to celebrate the event by doing a quick test of MySQL 8.0. Here follows a step-by-step description of my first experience with MySQL 8.0.
Note that I did the following without reading the release notes, as I have done with every MySQL / MariaDB release to date; in this case it was not the right thing to do.

I pulled MySQL 8.0 from git@github.com:mysql/mysql-server.git
I was pleasantly surprised that 'cmake . ; make' worked without any compiler warnings! I even checked the compiler options used and noticed that MySQL was compiled with -Wall + several other warning flags. Good job MySQL team!

I did have a little trouble finding the mysqld binary as Oracle had moved it to 'runtime_output_directory'; Unexpected, but no big thing.

Now it was time to install MySQL 8.0.

I did know that MySQL 8.0 has removed mysql_install_db, so I had to use the mysqld binary directly to install the default databases:
(I have specified datadir=/my/data3 in the /tmp/my.cnf file)

> cd runtime_output_directory
> mkdir /my/data3
> ./mysqld --defaults-file=/tmp/my.cnf --install

2018-04-22T12:38:18.332967Z 1 [ERROR] [MY-011011] [Server] Failed to find valid data directory.
2018-04-22T12:38:18.333109Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2018-04-22T12:38:18.333135Z 0 [ERROR] [MY-010119] [Server] Aborting

A quick look at the mysqld --help --verbose output showed that the right command option is --initialize. My bad, let's try again:

> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:39:31.910509Z 0 [ERROR] [MY-010457] [Server] --initialize specified but the data directory has files in it. Aborting.
2018-04-22T12:39:31.910578Z 0 [ERROR] [MY-010119] [Server] Aborting

Now I had used the right option, but it still didn't work.
I took a quick look around:

> ls /my/data3/
binlog.index

So even though mysqld noticed that the data3 directory was wrong, it still wrote things into it. This happened even though I didn't have --log-bin enabled in the my.cnf file. Strange, but easy to fix:

> rm /my/data3/binlog.index
> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:40:45.633637Z 0 [ERROR] [MY-011071] [Server] unknown variable 'max-tmp-tables=100'
2018-04-22T12:40:45.633657Z 0 [Warning] [MY-010952] [Server] The privilege system failed to initialize correctly. If you have upgraded your server, make sure you're executing mysql_upgrade to correct the issue.
2018-04-22T12:40:45.633663Z 0 [ERROR] [MY-010119] [Server] Aborting

The warning about the privilege system confused me a bit, but I ignored it for the time being and removed from my configuration files the variables that MySQL 8.0 no longer supports. I couldn't find a list of the removed variables anywhere, so this was done by trial and error.

> ./mysqld --defaults-file=/tmp/my.cnf

2018-04-22T12:42:56.626583Z 0 [ERROR] [MY-010735] [Server] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
2018-04-22T12:42:56.827685Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2018-04-22T12:42:56.838501Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2018-04-22T12:42:56.848375Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables
2018-04-22T12:42:56.848863Z 0 [ERROR] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-001146 - Table 'mysql.component' doesn't exist
2018-04-22T12:42:56.848916Z 0 [Warning] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-003543 - The mysql.component table is missing or has an incorrect definition.
....
2018-04-22T12:42:56.854141Z 0 [System] [MY-010931] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld: ready for connections. Version: '8.0.11' socket: '/tmp/mysql.sock' port: 3306 Source distribution.

I figured out that if there is a single wrong variable in the configuration file, running mysqld --initialize will leave the database in an inconsistent state. NOT GOOD! I am happy I didn't try this in a production system!

Time to start over from the beginning:

> rm -r /my/data3/*
> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:44:45.548960Z 5 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: px)NaaSp?6um
2018-04-22T12:44:51.221751Z 0 [System] [MY-013170] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld (mysqld 8.0.11) initializing of server has completed

Success!

I wonder why the temporary password is so complex; it could have been something one could easily remember without decreasing security, it's temporary after all. No big deal, one can always paste it from the logs. (Side note: MariaDB uses socket authentication on many systems and thus doesn't need temporary installation passwords).

Now let's start the MySQL server for real to do some testing:

> ./mysqld --defaults-file=/tmp/my.cnf

2018-04-22T12:45:43.683484Z 0 [System] [MY-010931] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld: ready for connections. Version: '8.0.11' socket: '/tmp/mysql.sock' port: 3306 Source distribution.

And then let's start the client:

> ./client/mysql --socket=/tmp/mysql.sock --user=root --password="px)NaaSp?6um"
ERROR 2059 (HY000): Plugin caching_sha2_password could not be loaded: /usr/local/mysql/lib/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Apparently MySQL 8.0 doesn't work with old MySQL / MariaDB clients by default :(

I was testing this in a system with MariaDB installed, like all modern Linux system today, and didn't want to use the MySQL clients or libraries.

I decided to try to fix this by changing the authentication to the native (original) MySQL authentication method.

> mysqld --skip-grant-tables

> ./client/mysql --socket=/tmp/mysql.sock --user=root
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

Apparently --skip-grant-tables is not good enough anymore. Let's try again with:

> mysqld --skip-grant-tables --default_authentication_plugin=mysql_native_password

> ./client/mysql --socket=/tmp/mysql.sock --user=root mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 8.0.11 Source distribution

Great, we are getting somewhere, now let's fix "root" to work with the old authentication:

MySQL [mysql]> update mysql.user set plugin="mysql_native_password",authentication_string=password("test") where user="root";
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '("test") where user="root"' at line 1

A quick look in the MySQL 8.0 release notes told me that the PASSWORD() function is removed in 8.0. Why???? I don't know how one is supposed to generate passwords in MySQL 8.0 that are compatible with old MySQL installations. One could of course start an old MySQL or MariaDB version, execute the password() function and copy the result.
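Another workaround: the old PASSWORD() function for mysql_native_password was just a double SHA-1, so a compatible hash can be computed outside the server. A minimal sketch (the function name is my own; it reproduces the documented format of '*' followed by the uppercase hex of SHA1(SHA1(password))):

```python
import hashlib

def mysql_native_password_hash(password: str) -> str:
    """Compute what PASSWORD() used to return for mysql_native_password:
    '*' followed by the uppercase hex digest of SHA1(SHA1(password))."""
    stage1 = hashlib.sha1(password.encode("utf-8")).digest()   # binary SHA1
    stage2 = hashlib.sha1(stage1).hexdigest().upper()          # SHA1 of that, hex
    return "*" + stage2

h = mysql_native_password_hash("test")
print(h)  # 41 characters: '*' plus 40 hex digits
```

The result could then be placed into mysql.user's authentication_string column, the way the update statements below do with an empty value.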

I decided to fix this the easy way and use an empty password:

(Update: I later discovered that the right way would have been to use: FLUSH PRIVILEGES; ALTER USER 'root'@'localhost' IDENTIFIED BY 'test'; I however dislike this syntax as it has the password in clear text, which is easy to grab, and the command can't be used to easily update the mysql.user table. One must also disable the --skip-grant-tables mode to use this.)

MySQL [mysql]> update mysql.user set plugin="mysql_native_password",authentication_string="" where user="root";
Query OK, 1 row affected (0.077 sec)
Rows matched: 1 Changed: 1 Warnings: 0
 
I restarted mysqld:
> mysqld --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql
ERROR 1862 (HY000): Your password has expired. To log in you must change it using a client that supports expired passwords.

Ouch, forgot that. Let's try again:

> mysqld --skip-grant-tables --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql
MySQL [mysql]> update mysql.user set password_expired="N" where user="root";

Now restart and test worked:

> ./mysqld --default_authentication_plugin=mysql_native_password

>./client/mysql --user=root --password="" mysql

Finally I had a working account that I can use to create other users!

When looking at mysqld --help --verbose again, I noticed the option:

--initialize-insecure
Create the default database and exit. Create a super user
with empty password.

I decided to check if this would have made things easier:

> rm -r /my/data3/*
> ./mysqld --defaults-file=/tmp/my.cnf --initialize-insecure


2018-04-22T13:18:06.629548Z 5 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.

Hm. I don't understand the warning, as --initialize-insecure is not an option that one would use more than once and thus nothing one would 'switch off'.

> ./mysqld --defaults-file=/tmp/my.cnf

> ./client/mysql --user=root --password="" mysql
ERROR 2059 (HY000): Plugin caching_sha2_password could not be loaded: /usr/local/mysql/lib/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Back to the beginning :(

To get things to work with old clients, one has to initialize the database with:
> ./mysqld --defaults-file=/tmp/my.cnf --initialize-insecure --default_authentication_plugin=mysql_native_password

Now I finally had MySQL 8.0 up and running and thought I would take it for a spin by running the "standard" MySQL/MariaDB sql-bench test suite. This suite was removed in MySQL 5.7, but as I happened to have MariaDB 10.3 installed, I decided to run it from there.

sql-bench is a single-threaded benchmark that measures the "raw" speed of some common operations. It gives you the 'maximum' performance for a single query. It's different from benchmarks that measure maximum throughput with many concurrent users, but sql-bench still tells you a lot about what kind of performance to expect from the database.

I first tried to be clever and create the "test" database, which I needed for sql-bench, with
> mkdir /my/data3/test

but when I tried to run the benchmark, MySQL 8.0 complained that the test database didn't exist.

MySQL 8.0 has gone away from the original concept of MySQL, where the user can easily create directories and copy databases into the database directory. This may have serious implications for anyone doing backups of databases and/or trying to restore a backup with normal OS commands.

I created the 'test' database with mysqladmin and then tried to run sql-bench:

> ./run-all-tests --user=root


The first run failed in test-ATIS:

Can't execute command 'create table class_of_service (class_code char(2) NOT NULL,rank tinyint(2) NOT NULL,class_description char(80) NOT NULL,PRIMARY KEY (class_code))'
Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'rank tinyint(2) NOT NULL,class_description char(80) NOT NULL,PRIMARY KEY (class_' at line 1

This happened because 'rank' is now a reserved word in MySQL 8.0. It is also reserved in ANSI SQL, but I don't know of any other database that has failed to run test-ATIS before. I have in the past run it against Oracle, PostgreSQL, Mimer, MSSQL etc. without any problems.

MariaDB also has 'rank' as a keyword in 10.2 and 10.3 but one can still use it as an identifier.
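The fix itself is mechanical: in MySQL 8.0 the reserved word has to be quoted as an identifier. A sketch of the change applied to the failing statement:

```sql
-- Backticks let the reserved word 'rank' be used as a column name in 8.0
CREATE TABLE class_of_service (
  class_code char(2) NOT NULL,
  `rank` tinyint(2) NOT NULL,
  class_description char(80) NOT NULL,
  PRIMARY KEY (class_code)
);
```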

I fixed test-ATIS and then managed to run all tests on MySQL 8.0.

I ran the tests with both MySQL 8.0 and MariaDB 10.3 with the InnoDB storage engine and with identical values for all InnoDB variables, table-definition-cache and table-open-cache. I turned off the performance schema for both databases. All tests were run with a user with an empty password (to keep things comparable and because it was too complex to generate a password in MySQL 8.0).

The results are as follows (per test, in seconds):

Operation         |MariaDB|MySQL-8|
-----------------------------------
ATIS              | 153.00| 228.00|
alter-table       |  92.00| 792.00|
big-tables        | 990.00|2079.00|
connect           | 186.00| 227.00|
create            | 575.00|4465.00|
insert            |4552.00|8458.00|
select            | 333.00| 412.00|
table-elimination |1900.00|3916.00|
wisconsin         | 272.00| 590.00|
-----------------------------------

This is of course just a first view of the performance of MySQL 8.0 in a single user environment. Some reflections about the results:

  • The alter-table test is slower (as expected) in 8.0, as some of the alter tests benefit from the instant add column in MariaDB 10.3.
  • The connect test is also better for MariaDB, as we put a lot of effort into speeding this up in MariaDB 10.2.
  • table-elimination shows an optimization in MariaDB for the Anchor table model, which MySQL doesn't have.
  • CREATE and DROP TABLE are almost 8 times slower in MySQL 8.0 than in MariaDB 10.3. I assume this is the cost of 'atomic DDL'. This may also cause performance problems for any thread using the data dictionary when another thread is creating/dropping tables.
  • Looking at the individual test results, MySQL 8.0 was slower in almost every test, in many cases significantly slower.
  • The only test where MySQL was faster was "update_with_key_prefix". I checked this and noticed that there was a bug in the test: the column was updated to its original value (which should be instant with any storage engine). This is an old bug that MySQL has found and fixed and that we have not been aware of in the test or in MariaDB.
  • While writing this, I noticed that MySQL 8.0 now uses utf8mb4 as the default character set instead of latin1. This may affect some of the benchmarks slightly (not much, as most tests work with numbers, and Oracle claims that utf8mb4 is only 20% slower than latin1), but this needs to be verified.
  • Oracle claims that MySQL 8.0 is much faster on multi-user benchmarks. The above test indicates that they may have achieved this by sacrificing single-user performance.
  • We need to run more and many different benchmarks to better understand exactly what is going on. Stay tuned!
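To rule out the character set as a factor in such a comparison, one could pin both servers to the old default with a my.cnf fragment like this (standard server options; values illustrative):

```ini
[mysqld]
character-set-server = latin1
collation-server     = latin1_swedish_ci
```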


Short summary of my first run with MySQL 8.0:
  • Using the new caching_sha2_password authentication as the default for new installations is likely to cause a lot of problems for users. No old application will be able to use MySQL 8.0, installed with default options, without moving to MySQL's client libraries. While working on this blog I saw MySQL users complain on IRC that not even MySQL Workbench can authenticate with MySQL 8.0. This is the first time in MySQL's history that such an incompatible change has been made!
  • Atomic DDL is a good thing (we plan to have this in MariaDB 10.4), but it should not have such a drastic impact on performance. I am also a bit skeptical of MySQL 8.0 having just one copy of the data dictionary, as if this gets corrupted you will lose all your data (a single point of failure).
  • MySQL 8.0 has several new reserved words and has removed a lot of variables, which makes upgrades hard. Before upgrading to MySQL 8.0 one has to check all one's databases and applications to ensure that there are no conflicts.
  • As my test above shows, if you have a single deprecated variable in your configuration files, the installation of MySQL will abort and can leave the database in an inconsistent state. I did of course run my tests by installing into an empty data directory, but one can assume that some of the problems may also happen when upgrading an old installation.
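One rough way to check an existing 5.7 schema for such conflicts before upgrading is to scan information_schema for identifiers that became reserved in 8.0. A sketch (the word list here is a small illustrative subset, not the complete 8.0 list):

```sql
SELECT table_schema, table_name, column_name
  FROM information_schema.columns
 WHERE column_name IN
       ('rank', 'groups', 'function', 'rows', 'system', 'window')
   AND table_schema NOT IN
       ('mysql', 'information_schema', 'performance_schema', 'sys');
```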



Conclusions:
In many ways, MySQL 8.0 has caught up with some earlier versions of MariaDB. For instance, in MariaDB 10.0, we introduced roles (four years ago). In MariaDB 10.1, we introduced encrypted redo/undo logs (three years ago). In MariaDB 10.2, we introduced window functions and CTEs (a year ago). However, some catch-up of MariaDB Server 10.2 features still remains for MySQL (such as check constraints, binlog compression, and log-based rollback).

MySQL 8.0 has a few interesting new features (mostly atomic DDL and the JSON_TABLE function), but at the same time MySQL has strayed away from some of the fundamental cornerstone principles of MySQL:

From the start of the first version of MySQL in 1995, all development has been focused around 3 core principles:
  • Ease of use
  • Performance
  • Stability

With MySQL 8.0, Oracle has sacrificed two of these three.

In addition (as part of ease of use), while I was working on MySQL, we did our best to ensure that the following would hold:

  • Upgrades should be trivial
  • Things should be kept compatible, if possible (don't remove features/options/functions that are used)
  • Minimize reserved words, don't remove server variables
  • One should be able to use normal OS commands to create and drop databases, copy and move tables around within the same system or between different systems. With 8.0 and the data dictionary, taking backups of specific tables will be hard, even if the server is not running.
  • mysqldump should always be usable for backups and for moving to new releases
  • Old clients and applications should be able to use 'any' MySQL server version unchanged. (Some Oracle client libraries, like the C++ one, by default only support the new X protocol and can thus not be used with older MySQL or any MariaDB version.)

We plan to add a data dictionary to MariaDB 10.4 or MariaDB 10.5, but in a way to not sacrifice any of the above principles!

The competition between MySQL and MariaDB is not just about a tactical arms race on features. It’s about design philosophy, or strategic vision, if you will.

This shows in two main ways: our respective view of the Storage Engine structure, and of the top-level direction of the roadmap.

On the Storage Engine side, MySQL is converging on InnoDB, even for clustering and partitioning. In doing so, they are abandoning the advantages of multiple ways of storing data. By contrast, MariaDB sees lots of value in the Storage Engine architecture: MariaDB Server 10.3 will see the general availability of MyRocks (for write-intensive workloads) and Spider (for scalable workloads). On top of that, we have ColumnStore for analytical workloads. One can use the CONNECT engine to join with other databases. The use of different storage engines for different workloads and different hardware is a competitive differentiator, now more than ever.

On the roadmap side, MySQL is carefully steering clear of features that close the gap between MySQL and Oracle. MariaDB has no such constraints. With MariaDB 10.3, we are introducing PL/SQL compatibility (Oracle's stored procedures) and AS OF (built-in system-versioned tables with point-in-time querying). For both of those features, MariaDB is the first open source database to do so. I don't expect Oracle to provide any of the above features in MySQL!

Also on the roadmap side, MySQL is not working with the ecosystem to extend its functionality. In 2017, MariaDB accepted more code contributions in one year than MySQL has during its entire lifetime, and the rate is increasing!

I am sure that the experience I had with testing MySQL 8.0 would have been significantly better if MySQL would have an open development model where the community could easily participate in developing and testing MySQL continuously. Most of the confusing error messages and strange behavior would have been found and fixed long before the GA release.


Before upgrading to MySQL 8.0 please read https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html to see what problems you can run into! Don't expect that old installations or applications will work out of the box without testing, as a lot of features and options have been removed (the query cache, partitioning of MyISAM tables, etc.)! You probably also have to revise your backup methods, especially if you ever want to restore just a few tables. (With 8.0, I don't know how this can be easily done.)

According to the MySQL 8.0 release notes, one can't use mysqldump to copy a database to MySQL 8.0. One first has to move to a MySQL 5.7 GA version (with mysqldump, as recommended by Oracle) and then to MySQL 8.0 with an in-place upgrade. I assume this means that all old mysqldump backups are useless for MySQL 8.0?

MySQL 8.0 seems to be a one-way street to an unknown future. Up to MySQL 5.7 it has been trivial to move to MariaDB, and one could always move back to MySQL with mysqldump. All MySQL client libraries have worked with MariaDB and all MariaDB client libraries have worked with MySQL. With MySQL 8.0 this has changed in the wrong direction.

As long as you are using MySQL 5.7 and below you have choices for your future, after MySQL 8.0 you have very little choice. But don't despair, as MariaDB will always be able to load a mysqldump file and it's very easy to upgrade your old MySQL installation to MariaDB :)

I wish you good luck to try MySQL 8.0 (and also the upcoming MariaDB 10.3)!

Announcing ClusterControl 1.6 - automation and management of open source databases in the cloud

Today we are excited to announce the 1.6 release of ClusterControl - the all-inclusive database management system that lets you easily deploy, monitor, manage and scale highly available open source databases - and load balancers - in any environment: on-premise or in the cloud.

ClusterControl 1.6 introduces a new set of cloud features in BETA status that allow users to deploy and manage their open source database clusters on public clouds such as AWS, Google Cloud and Azure. The release also provides Point In Time Recovery functionality for MySQL/MariaDB systems, as well as new topology views for PostgreSQL Replication clusters, MongoDB ReplicaSets and Sharded clusters.

Release Highlights

Deploy and manage clusters on public Clouds (BETA)

  • Supported cloud providers: Amazon Web Services (VPC), Google Cloud, and Azure
  • Supported databases: MySQL/MariaDB Galera, Percona XtraDB Cluster, PostgreSQL, MongoDB ReplicaSet

Point In Time Recovery - PITR (MySQL)

  • Position and time-based recovery for MySQL based clusters

Enhanced Topology View

  • Support added for PostgreSQL Replication clusters; MongoDB ReplicaSets and Sharded clusters

Additional Highlights

  • Deploy multiple clusters in parallel and increase deployment speed
  • Enhanced Database User Management for MySQL/MariaDB based systems
  • Support for MongoDB 3.6
Release Details

Deploy and manage open source database clusters on public Clouds (BETA)

With this latest release, we continue to add deeper cloud functionality to ClusterControl. Users can now launch cloud instances and deploy database clusters on AWS, Google Cloud and Azure right from their ClusterControl console; and they can now also upload/download backups to Azure cloud storage. Supported cloud providers currently include Amazon Web Services (VPC), Google Cloud, and Azure as well as the following databases: MySQL/MariaDB Galera, PostgreSQL, MongoDB ReplicaSet.

Point In Time Recovery - PITR (MySQL)

Point-in-time recovery of MySQL & MariaDB involves restoring the database from a backup taken prior to the target time, then using incremental backups and binary logs to roll the database forward to the target time. Typically, database administrators use backups to recover from cases such as a database upgrade that fails and corrupts the data, or storage media failure/corruption. But what happens when an incident occurs at a time in between two backups? This is where binary logs come in: as they store all of the changes, users can also use them to replay traffic. ClusterControl automates that process for you and helps you minimize data loss after an outage.
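Done by hand, the roll-forward step of that process looks roughly like this (the binlog file names, target time and credentials are of course placeholders):

```shell
# After restoring the last full backup taken before the target time,
# replay the binary logs up to (but not past) the incident:
mysqlbinlog --stop-datetime="2018-04-25 13:00:00" \
    binlog.000042 binlog.000043 | mysql --user=root --password
```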

New Topology View

The ClusterControl Topology View provides a visual representation of your database nodes and load balancers in real time, in a simple and friendly interface without the need to install any additional tools. Distributed databases or clusters typically consist of multiple nodes and node types, and it can be a challenge to understand how these work together. If you also have load balancers in the mix, hosts with multiple IP addresses and more, then the setup can quickly become too complex to visualise. That’s where the new ClusterControl Topology View comes in: it shows all the different nodes that form part of your database cluster (whether database nodes, load balancers or arbitrators), as well as the connections between them in an easy-to-view visual. With this release, we have added support for PostgreSQL Replication clusters as well as MongoDB ReplicaSets and Sharded clusters.

Enhanced Database User Management for MySQL based clusters

One important aspect of being a database administrator is to protect access to the company’s data. We have redesigned our DB User Management for MySQL based clusters with a more modern user interface, which makes it easier to view and manage the database accounts and privileges directly from ClusterControl.

Additional New Functionalities
  • Improved cluster deployment speed by utilizing parallel jobs. Deploy multiple clusters in parallel.
  • Support for deploying and managing MongoDB v3.6 clusters

Download ClusterControl today!

Happy Clustering!

Tags:  clustercontrol cloud database MySQL MongoDB PostgreSQL point in time recovery

Taking advantage of new transaction length metadata

MySQL 8.0.2 introduced a small yet powerful per transaction metadata information containing the transaction length in binary logs.

MySQL binary logs are being used for many other things than MySQL replication or backup/recovery: replicate to Hadoop; replicate to other databases, such as Oracle; capture data change (CDC) and extract-transform-load (ETL); record change notification for cache invalidation; change tracking for differential backups; etc.…

Percona Partners with Microsoft and Mesosphere To Help Enterprises Optimize and Maintain DBaaS Environments

This week, Percona partners with Microsoft and Mesosphere to make it easier for organizations to take advantage of cloud and container environments and run their open source databases, ensuring optimal performance while shifting their focus to improving applications and better supporting the business.

Percona’s new partnership with Microsoft to support Microsoft Azure customers, along with the availability of Percona Server for MongoDB in the Mesosphere DC/OS community, reflect Percona’s commitment to providing comprehensive support for open source databases in cloud deployments. 

With open source databases now standard in the enterprise, organizations are increasingly looking to deploy those databases in public cloud environments to benefit from the flexibility, scalability and economics of the cloud. Percona previously announced support partnerships with Google to offer Google Cloud SQL services and Amazon Web Services to support the use of Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), and Amazon Aurora on AWS.

Partnership with Microsoft for Microsoft Azure Customers

Percona has entered into a partnership with Microsoft, a Platinum Sponsor of the Percona Live Open Source Database Conference, to make it easier for clients to take advantage of the Microsoft Azure cloud platform. Percona software will now be available via the Azure Marketplace, simplifying deployment into Azure Virtual Machines. Additionally, the Percona DBA Service for Microsoft Azure, as well as other support services, will help Azure customers derive the most benefit from their open source database environments.

Percona Server for MongoDB Certified on Mesosphere

Percona has partnered with Mesosphere to deliver Percona Server for MongoDB on Mesosphere DC/OS (data center operating system). As reliance on hybrid, on-premises, and cloud environments continues to grow, enterprises are increasingly turning to Mesosphere DC/OS to run large-scale applications. Percona Server for MongoDB offers these organizations the best of both worlds: a free and open source, enterprise-grade version of MongoDB backed by the renowned experts at Percona, plus the confidence that the solution meets Mesosphere’s rigorous standards for compatibility. Percona Server for MongoDB is available to all users of DC/OS 1.10 and newer as a beta solution through the DC/OS Universe package catalog, with full Mesosphere certification coming soon.

The post Percona Partners with Microsoft and Mesosphere To Help Enterprises Optimize and Maintain DBaaS Environments appeared first on Percona Database Performance Blog.

Percona Server for MySQL 5.7.21-21 Is Now Available with Increased Built-In Security Enhancements

Percona announces the GA release of Percona Server for MySQL 5.7.21-21 on April 24, 2018. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

This version of Percona Server for MySQL 5.7.21 includes three new encryption features – Vault keyring plug-in, encryption for InnoDB general tablespaces, and encryption for binary log files.

These new capabilities, which allow companies to immediately increase security for their existing databases, are also part of a larger project to build complete, robust, enterprise-grade encryption capabilities into Percona Server for MySQL, allowing customers and the community to satisfy their most rigorous security compliance requirements. Percona also announced the release of a new version of Percona XtraBackup that supports backing up Percona Server for MySQL instances that have these encryption features enabled.

Based on MySQL 5.7.21, including all the bug fixes in it, Percona Server for MySQL 5.7.21-21 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open-source and free software.

New Features:
  • A new variable innodb_temp_tablespace_encrypt is introduced to turn encryption of temporary tablespace and temporary InnoDB file-per-table tablespaces on/off. Bug fixed #3821.
  • A new variable innodb_encrypt_online_alter_logs simultaneously turns on encryption of files used by InnoDB for merge sort, online DDL logs, and temporary tables created by InnoDB for online DDL. Bug fixed #3819.
  • A new variable innodb_encrypt_tables can be set to ON, making InnoDB tables encrypted by default; to FORCE, disabling creation of unencrypted tables; or to OFF, restoring the previous behavior. Bug fixed #1525.
  • Query response time plugin now can be disabled at session level with use of a new variable query_response_time_session_stats.
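Taken together, the new encryption variables could appear in a my.cnf like this (a sketch using the names from the release notes; values illustrative):

```ini
[mysqld]
innodb_temp_tablespace_encrypt   = ON
innodb_encrypt_online_alter_logs = ON
innodb_encrypt_tables            = FORCE   # or ON / OFF
```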
Bugs Fixed:
  • Attempting to use a partially-installed query response time plugin could have caused a server crash. Bug fixed #3959.
  • A server crash was caused by a materialized temporary table from semi-join optimization with a key length larger than 1000 bytes. Bug fixed #296.
  • A regression in the original 5.7 port was causing integer overflow with thread_pool_stall_limit variable values bigger than 2 seconds. Bug fixed #1095.
  • A memory leak took place in Percona Server when the performance schema was used in conjunction with thread pooling. Bug fixed #1096.
  • A code clean-up was done to fix compilation with clang, both general warnings (bug fixed #3814, upstream #89646) and clang 6 specific warnings and errors (bug fixed #3893, upstream #90111).
  • Compilation warning was fixed for -DWITH_QUERY_RESPONSE_TIME=ON CMake compilation option, which makes QRT to be linked statically. Bug fixed #3841.
  • Percona Server returned an empty result for a SELECT query if the number of connections exceeded 65535. Bug fixed #314 (upstream #89313).
  • A clean-up in Percona Server binlog-related code was made to avoid uninitialized memory comparison. Bug fixed #3925 (upstream #90238).
  • mysqldump utility with --innodb-optimize-keys option was incorrectly working with foreign keys on the same table, producing invalid SQL statements. Bugs fixed #1125 and #3863.
  • A fix of the mysqld startup script introduced in Percona Server 5.7.21-20, which failed to detect the jemalloc library location for preloading (and thus did not start on systemd-based machines), was improved to take a previously created configuration file into account. Bug fixed #3850.
  • The possibility of a truncated bitmap file name was fixed in InnoDB logging subsystem. Bug fixed #3926.
  • Temporary file I/O was not instrumented for Performance Schema. Bug fixed #3937 (upstream #90264).
  • A crash in the unsafe query warning checks with views took place for UPDATE statement in case of statement binlogging format. Bug fixed #290.
MyRocks Changes:
  • A re-implemented variable rpl_skip_tx_api allows turning on simple RocksDB write-batch functionality, increasing replication performance by skipping the transaction API. Bug fixed MYR-47.
  • Decoding value-less padded varchar fields could under some circumstances cause assertion and/or data corruption. Bug fixed MYR-232.
TokuDB Changes:
  • Two new variables introduced for the TokuDB fast updates feature, tokudb_enable_fast_update and tokudb_enable_fast_upsert, should now be used instead of the NOAR keyword, which is now optional at compile time and off by default. Bugs fixed #63 and #148.
  • A set of compilation fixes was introduced to make TokuDB successfully build in MySQL / Percona Server 8.0. Bugs fixed #84, #85, #114, #115, #118, #128, #139, #141, and #172.
  • Conditional compilation code dependent on version ID in the TokuDB tree was separated and arranged to specific version branches. Bugs fixed #133, #134, #135, and #136.
  • An ALTER TABLE ... COMMENT = ... statement caused TokuDB to rebuild the whole table, which is not needed, as only FRM metadata should be changed. Bugs fixed #130 and #137.
  • Data race on the cache table pair attributes was fixed.

Other bugs fixed: #3793, #3812, #3813, #3815, #3818, #3835, #3875 (upstream #89916), #3843 (upstream #89822), #3848, #3856, #3887, MYR-160, MYR-245, #109, #111, #180, #181, #182, and #188.

The release notes for Percona Server for MySQL 5.7.21-20 are available in the online documentation. Please report any bugs on the project bug tracking system.

The post Percona Server for MySQL 5.7.21-21 Is Now Available with Increased Built-In Security Enhancements appeared first on Percona Database Performance Blog.

Percona XtraBackup 2.4.11 Is Now Available

Percona announces the GA release of Percona XtraBackup 2.4.11 on April 24, 2018. This release is based on MySQL 5.7.19. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

New features and improvements:

Release notes with all the improvements for version 2.4.11 are available in our online documentation. Please report any bugs to the issue tracker.

The post Percona XtraBackup 2.4.11 Is Now Available appeared first on Percona Database Performance Blog.

MySQL InnoDB Cluster 8.0 – A Hands-on Tutorial

MySQL InnoDB Cluster has proven with the first GA release its potential as the native out-of-the-box HA solution for MySQL. With the second GA release, usability, stability and flexibility were greatly improved as described in the previous blog post.

This tutorial aims to show what a typical InnoDB Cluster setup looks like and how that setup has become even simpler and more effective than before.…
