Planet MySQL

Parallel Replication: Post on Booking.com Blog, Talk at Percona Live Amsterdam 2016, and more

I have a new blog post on blog.booking.com describing MariaDB 10.1 Optimistic Parallel Replication (with benchmark results): Evaluating MySQL Parallel Replication Part 4: More Benchmarks in Production (http://blog.booking.com/evaluating_mysql_parallel_replication_4-more_benchmarks_in_production.html). If you want to know more about MySQL/MariaDB Parallel Replication and if you are attending Percona

Thank You For Attending Oracle OpenWorld 2016 - Download Slides

MySQL Support was represented at Oracle OpenWorld 2016 with two engineers: Ligaya Turmelle and Jesper Krogh. We did a total of four talks and spent some 20 hours in total at the Support Stars Bar. We would like to thank all of you who attended. It was good meeting some of you and talking about your experiences - both good and not so good - using MySQL.

If you attended one of our talks and would like to study some of the slides in more detail - or if you were not able to attend - all of our presentations are available from the Oracle OpenWorld content catalogue (except the Support Stars Bar mini briefing, which is available from this blog):

Thank you for a great week. Hope to see you all next year again.

Data Pipeline: Salesforce Connector

This is the fifth post in a series covering Yelp's real-time streaming data infrastructure. Our series explores in-depth how we stream MySQL updates in real-time with an exactly-once guarantee, how we automatically track & migrate schemas, how we process and transform streams, and finally how we connect all of this into datastores like Redshift and Salesforce. Read the posts in the series:

  • Billions of Messages a Day - Yelp's Real-time Data Pipeline
  • Streaming MySQL tables in real-time to Kafka
  • More Than Just a Schema Store
  • PaaStorm: A Streaming Processor
  • Data Pipeline: Salesforce Connector

Yelp uses Salesforce, a customer relationship management...

MySQL removes the FRM (7 years after Drizzle did)

The new MySQL 8.0.0 milestone release that was recently announced brings something that has been a looooong time coming: the removal of the FRM file. I was the one who implemented this in Drizzle way back in 2009 (July 28th 2009, according to Brian) - and I may have had a flashback to removing the tentacles of the FRM when reading the MySQL 8.0.0 announcement.

As an idea for how long this has been on the cards, I’ll quote Brian from when we removed it in Drizzle:

We have been talking about getting rid of FRM since around 2003. I remember a drive up to northern Finland with Kaj Arnö, where we spent an hour talking about this. I, David, and MontyW have talked about this for years.

http://krow.livejournal.com/642329.html

Soo… it was a known problem for at least thirteen years. One of the issues removing it was how pervasive all of the FRM related things were. I shudder at the mention of “pack_flag” and Jay Pipes probably does too.

At the time, we tried a couple of approaches as to how things should look. Our philosophy with Drizzle was that it should get out of the way and let the storage engines be the storage engines, not try to second guess them or keep track of things behind their back. I still think that was the correct architectural approach: the role of Drizzle was to put SQL on top of a storage engine, not to also be one itself.

Looking at the MySQL code, there’s one giant commit 31350e8ab15179acab5197fa29d12686b1efd6ef. I do mean giant too, the diffstat is amazing:

786 files changed, 58471 insertions(+), 25586 deletions(-)

How anyone even remotely did code review on that I have absolutely no idea. I know the only way I could get it to work in Drizzle was to do it incrementally: a series of patches that gradually chiseled out what needed to be taken out so I could put in an API and the protobuf code.

Oh, and in case you’re wondering:

- uint offset,pack_flag; + uint offset;

Thank goodness. Now, you may not appreciate that as much as I might, but pack_flag was not the height of design; it was… pretty much a catch-all for some kind of data about a field that wasn't something that already had a field in the FRM. So it may include information on whether the field could be null or not, if it's decimal, how many bytes an integer takes, that it's a number and how many… oh, just don't ask.

Also gone is the weird interval_id and a whole bunch of limitations because of the FRM format, including one that I either just discovered or didn’t remember: if you used all 256 characters in an enum, you couldn’t create the table as MySQL would pick either a comma or an unused character to be the separator in the FRM!?!

Also changed is how the MySQL server handles default values. For those not aware, the FRM file contains a static copy of the row containing default values. This means the default values are computed once on table creation and never again (there are a bunch of workarounds for things like AUTO_INCREMENT and DEFAULT NOW()). The new sql/default_values.cc is where this is done now.
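As a hedged illustration (this example is mine, not from the commit), consider defaults that cannot be reduced to fixed bytes computed once at CREATE TABLE time, which is exactly why the FRM needed those workarounds:

-- Hypothetical table: both defaults must be evaluated on every INSERT,
-- so a static default row stored in the FRM could never represent them.
CREATE TABLE t (
  id         INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);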

For now at least, table metadata is also written to a file that appears to be in JSON format. It's interesting that a SQL database server is using a schemaless file format to describe schema. It appears that these files exist only for disaster recovery or perhaps portable tablespaces. As such, I'm not entirely convinced they're needed… they're just a thing to get out of sync with what the storage engine thinks, and they cause extra IO on DDL (as well as forcing the issue that you can't have MVCC into the data dictionary itself).

What will be interesting is to see the lifting of these various limitations and how MariaDB will cope with that. Basically, unless they switch, we’re going to see some interesting divergence in what you can do in either database.

There are certainly differences between how MySQL removed the FRM file and the way we did it in Drizzle. Hopefully some of the ideas we had were helpful in coming up with this different approach, as well as an extra seven years of in-production use.

At some point I’ll write something up as to the fate of Drizzle and a bit of a post-mortem, I think I may have finally worked out what I want to say…. but that is a post for another day.

SHOW WARNINGS can stall your MySQL server if compressed protocol is used

If there is something that I love about information technology, it is the incredible number of unpredictable ways that bugs can bite you where and when you least expect it.

The one I'm about to describe is a serious one, yet there has been a very quiet response from the Percona bug team at Launchpad, where I first submitted it (because we use Percona Server here). If you have read my other posts you know by now that I am a laz... err, busy guy, so bear with me if only today I have verified that this is, in fact, an upstream bug and have raised a bug with Oracle too.
It affects 5.6.33, latest version at the time of this article.

Anyways, the bug started to manifest itself in the form of complete stalls of our main cluster, which were happening at different times and apparently in a way that was unrelated to the amount of traffic hitting the master. When stalling, system CPU time was topping 100% of total available CPUs; no wonder the server was unresponsive...

Complete stalls are not nice: everything is frozen and you can't even check what's going on, as MySQL would not respond to any command. In some cases this just lasted a few seconds; in others we had stalls which lasted well over 1 minute.

We have the habit of running a script that takes snapshots of "SHOW ENGINE INNODB STATUS" every 10 seconds on each master and saves them to files; we use these to do post-mortem analysis of incidents that hit production (traffic spikes and the like). But this time, the snapshots didn't give any useful information: the DB was just running smoothly, low traffic, no sign of anything suspicious. A few seconds later, total stall. Then smooth again. Weird.
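For reference, a minimal sketch of such a snapshot loop could look like this (the output directory is a placeholder, not our actual setup):

#!/bin/bash
# Save SHOW ENGINE INNODB STATUS to a timestamped file every 10 seconds.
OUTDIR=/var/log/innodb-snapshots
mkdir -p "$OUTDIR"
while true
do
    mysql -e "SHOW ENGINE INNODB STATUS\G" > "$OUTDIR/innodb-status-$(date +%Y%m%d-%H%M%S).txt"
    sleep 10
done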

I had to find a way to catch the bug while it was biting. I decided to craft a small script to enable the general log for 60 seconds, then turn it off again and run pt-query-digest on the generated log, printing a digest of the top offenders. Then I waited for the next occurrence, patiently watching sysstat output. As soon as the system CPU time skyrocketed again, I ran my script and captured 60 seconds of full database traffic.
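A rough sketch of that capture script follows (file paths are placeholders and credentials are assumed to come from a defaults file; this is not the exact script I used):

#!/bin/bash
# Capture 60 seconds of general log, then digest it with pt-query-digest.
LOGFILE=/var/lib/mysql/capture-general.log

mysql -e "SET GLOBAL general_log_file='$LOGFILE'; SET GLOBAL general_log=ON;"
sleep 60
mysql -e "SET GLOBAL general_log=OFF;"

# Print the top offenders from the 60-second capture.
pt-query-digest --type genlog --limit 10 "$LOGFILE"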

I was really surprised to find that in the sampled time, the top query by number of invocations was SHOW WARNINGS, with about 10,000  invocations per second.
Whoa!!! What the hell is that?  Why are production applications executing SHOW WARNINGS in the first place? And not only that, but the amount of them was INSANE!!

Well, it was now obvious that the server stall was actually caused by a flood of SHOW WARNINGS. Not so obvious was the reason for such a large number of these statements arriving at our master.

Lo and behold, it turns out that there is a neat feature in Hibernate, the ORM that we use here, that automatically causes a SHOW WARNINGS to be issued after each and every statement that is executed. To have this enabled it is sufficient to set the Hibernate log level to "warning".  D'oh!!!

Of course, Hibernate's setting was the source of all troubles. We asked the devs for a quick fix and after bringing Hibernate's log level back to "error", the problem disappeared.

Still, the behaviour of MySQL when a large number of SHOW WARNINGS statements is received was not proper: it would be very easy for anyone to stall any server by just flooding it with these statements, which do not require any special privilege. I felt that this is a serious bug (after all, it has caused downtime in production for us), so I decided to find a way to reproduce it, and to raise a bug as soon as I succeeded.

So I went to our 32-core test server and set up a sysbench test using this simple piece of Lua code:

pathtest = string.match(test, "(.*/)") or ""
dofile(pathtest .. "./sysbench/tests/db/common.lua")

function thread_init(thread_id)
   set_vars()
end

function event(thread_id)
   db_query("SHOW WARNINGS")
end
Then I ran sysbench in order to try and reproduce the stall.  The command line I used at first was the following:


./sysbench/sysbench --mysql-host=localhost  --mysql-user=sysbench --mysql-password=sbtest --mysql-db=sbtest --test=show_warnings.lua --max_requests=0 --max_time=20 --num-threads=30 --mysql-engine-trx=yes run
To my dismay, this did not produce the desired effect: there was no stall and the server was able to perform approximately 560,000 SHOW WARNINGS per second using 30 cores out of the 32 available on the server (keep in mind I was also running sysbench on the same machine).
This was a real head-scratcher for me, as I was pretty sure that this statement was the root cause of our incidents. So I decided to look at the connection string used by the offending application and noticed that it had link compression enabled. This was a common setup throughout the company, and was in place for historical reasons, to save bandwidth when we were on the old non-gigabit infrastructure.
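As a side note (my own addition, not part of the original investigation), the compressed protocol can also be enabled from the stock mysql command-line client, which makes it easy to reproduce a single compressed statement by hand:

$ mysql --compress -e "SHOW WARNINGS"
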
I  decided to add compression to the sysbench test and try again:
./sysbench/sysbench --mysql-host=localhost  --mysql-user=sysbench --mysql-password=sbtest --mysql-db=sbtest --test=show_warnings.lua --max_requests=0 --max_time=20 --num-threads=30 --mysql-engine-trx=yes --mysql-compression=on  run
Needless to say, that did the trick!! System CPU time skyrocketed and used over 80% of the 32 cores of the test server. Now, this is absolutely weird - what can protocol compression have to do with SHOW WARNINGS? Moreover, throughput fell from 560k to 48k qps.
I filed this as a bug on Percona's Launchpad long ago, but there has been very little activity on it, as you can see.
I feel that this bug is a serious one, as Hibernate is widely used at MySQL-based shops, and there may be other ORMs with similar options that can flood poor MySQL with thousands of SHOW WARNINGS per second.
Since I have verified that this is an upstream bug, today I have filed it on bugs.mysql.com as well; hopefully it will receive the attention it deserves.

MySQL 8.0: Making User Management DDLs Atomic

With MySQL 8.0, we are bringing in an important change in the way user management DDLs are executed.

Background

User management DDLs cover the following user management actions:

  • Creating user/role (CREATE USER [IF NOT EXISTS] / CREATE ROLE [IF NOT EXISTS])
  • Altering user account properties (ALTER USER [IF EXISTS])
  • Dropping user/role from database (DROP USER [IF EXISTS] / DROP ROLE [IF EXISTS])
  • Granting/Revoking privileges to/from a user/role (GRANT/REVOKE)

Each of these administrative actions can handle multiple entities as part of a single command…
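As a hedged illustration (the account names and passwords below are made up, not taken from the original post), a single user management DDL can act on several accounts at once; the point of the 8.0 change is that such a statement either takes effect for all listed accounts or is rolled back as a whole:

CREATE USER IF NOT EXISTS
  'app_ro'@'%' IDENTIFIED BY 'secret1',
  'app_rw'@'%' IDENTIFIED BY 'secret2',
  'app_admin'@'localhost' IDENTIFIED BY 'secret3';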

Database Security - MySQL Upgrade Instructions for Zero-day Exploit

You must have heard about CVE-2016-6662, the recent zero-day exploit exposed in most MySQL versions and its variants. The vulnerability can be exploited by a remote attacker to inject malicious settings into your my.cnf. You can read about the details here.

At the moment, all MySQL vendors supported by ClusterControl (Oracle, Codership, Percona, MariaDB) have patched the vulnerability and released fixes in their respective package repositories:

Vendor - Software - Patched Release:

  • Oracle - MySQL Server - 5.5.52, 5.6.33, 5.7.15
  • Percona - Percona Server, Percona XtraDB Cluster - 5.5.51-38.1, 5.6.32-78.0, 5.7.14-7
  • MariaDB - MariaDB Server, MariaDB Galera Cluster - 10.1.17, 10.0.27, 5.5.51
  • Codership - MySQL Galera Cluster - 5.5.52, 5.6.33

If you are using ClusterControl to manage your MySQL/MariaDB databases, we advise you to do the following as soon as possible:

  1. Upgrade MySQL or MariaDB server for ClusterControl.
  2. Upgrade ClusterControl to the latest version (recommended for #3).
  3. Upgrade your MySQL servers manually or using ClusterControl.
Upgrade MySQL/MariaDB Server on ClusterControl server

ClusterControl stores monitoring data in a MySQL/MariaDB server. The ClusterControl installer script (install-cc) relies on the respective OS’s repository to install MySQL server.

On Debian 8 and Ubuntu 14.04, the latest version of the mysql-server package is patched:

ubuntu@ubuntu-trusty:~$ sudo apt list mysql-server
mysql-server/trusty-updates,trusty-security 5.5.52-0ubuntu0.14.04.1 all

To upgrade, simply:

$ sudo apt-get update
$ sudo apt-get install mysql-server
$ sudo service mysql restart  # or: sudo systemctl restart mysql

For RHEL/CentOS 6, as noted on the Red Hat Customer Portal, the MySQL 5.1 packages in Red Hat Enterprise Linux 6 do not implement support for library preloading, therefore preventing the remote attack vector used by the published exploit.

At the moment of writing, there is no patched MariaDB release available from the RHEL/CentOS 7 repository:

[root@centos7 ]$ yum list | grep -i mariadb-server
mariadb-server.x86_64    1:5.5.44-2.el7.centos    @base
mariadb-server.x86_64    1:5.5.50-1.el7_2         updates

For the above reason, on RHEL/CentOS we can apply the patches manually. The recommendations below are taken from the Percona blog:

  1. Patch the mysqld_safe and related files

    Compare and patch mysqld_safe, /etc/init.d/mysql and related files according to the diff shown in the following vendors’ GitHub repositories:

    MySQL: https://github.com/mysql/mysql-server/commit/f75735e36e6569b9dae3b0605b1d5915a519260e#diff-144aa2f11374843c969d96b7b84247eaR348

    Percona: https://github.com/percona/percona-server/commit/c14be53e029442f576cced1fb8ff96b58e89f2e0#diff-144aa2f11374843c969d96b7b84247eaR261

    MariaDB: https://github.com/MariaDB/server/commit/684a165f28b3718160a3e4c5ebd18a465d85e97c

  2. Database user permissions

    One way to avoid the vulnerability is making sure no remote user has SUPER or FILE privileges. Verify whether any user is holding these two privileges unnecessarily. You can get a list of remote users that have these privileges by using the following query:

    mysql> SELECT user, host FROM mysql.user WHERE Super_priv='Y' AND File_priv='Y' AND host NOT IN ('localhost','127.0.0.1', '::1');
  3. Configuration files permissions

    The vulnerability needs to be able to write to some MySQL configuration files. Prevent that and you are secure. Make sure you configure permissions for various config files as follows:

    Create an (empty) my.cnf and .my.cnf in the datadir (usually /var/lib/mysql) and make root the owner/group with 0644 permissions:

    $ touch /var/lib/mysql/my.cnf
    $ touch /var/lib/mysql/.my.cnf
    $ chmod 644 /var/lib/mysql/my.cnf /var/lib/mysql/.my.cnf
    $ chown root:root /var/lib/mysql/my.cnf /var/lib/mysql/.my.cnf

    Verify other MySQL configuration files as well in other locations:

    $ for i in /etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf; do chown root:root $i; chmod 644 $i; done

    This also includes “!includedir” paths defined in your current configuration - make sure they are not writable by the mysql user either. For example, if “!includedir /etc/my.cnf.d” is defined in my.cnf:

    $ chmod 644 /etc/my.cnf.d/*.cnf
    $ chown root:root /etc/my.cnf.d/*.cnf

    Once RHEL/CentOS releases the patched MySQL packages in their respective repositories, you can then perform the package upgrade using the usual yum commands.

Upgrade ClusterControl to the latest version

ClusterControl should be updated to the latest version to ensure the upgrade steps are updated and relevant to the latest release.

The latest stable release of ClusterControl is now 1.3.2 (build 1467). Take note that if you are upgrading from 1.2.12 or below, you should perform some extra steps to re-configure /etc/cmon.cnf to use minimal configuration options. This part is further explained in the ClusterControl upgrade instructions available on our documentation page.

Upgrade the monitored DB server

For Galera Cluster, ClusterControl supports rolling patch upgrades (between minor versions, e.g., 5.6.12 to 5.6.30) directly from the UI. To do this, go to ClusterControl -> Manage -> Upgrades -> Upgrade and it will start the rolling upgrade, one node at a time.

You are able to monitor the job progress under Logs -> Jobs:

Alternatively, you can perform the upgrade manually by following the upgrade instructions from the respective database vendor. A minor upgrade does not require you to uninstall the existing packages, so it should be pretty straightforward. For example, if you are using Percona XtraDB Cluster 5.6 on CentOS 7, you can simply perform the following on one DB node at a time:

$ yum clean all
$ yum install Percona-XtraDB-Cluster-56
$ systemctl restart mysql

Ensure the node re-joins the cluster and reaches the Primary state (monitor the wsrep_cluster_status and wsrep_cluster_size status variables) before proceeding to the next node.
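For example, both variables can be checked on the restarted node from the mysql client; wsrep_cluster_status should report Primary and wsrep_cluster_size should match your expected node count:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';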

That’s it. For Severalnines subscription customers, you are welcome to contact us via our support portal if you need further assistance on applying the patches.


PerconaLive Amsterdam 2016 - Talks and shows

With Oracle Open World behind us, we are now getting ready for the next big event, i.e. the European edition of PerconaLive. I am going to be a presenter three times:

  • MySQL operations in Docker is a three-hour tutorial, and it will be an expansion of the talk by the same title presented at OOW. Attendees who want to play along can do so by coming prepared with Docker 1.11 or later and the following images already pulled (images with [+] are mandatory, while [-] are optional):

    • mysql/mysql-server [+]
    • mysql/mysql-gr [+]
    • mysql [-]
    • percona/percona-server [-]
    • mariadb [-]
    • mysql/shell [-]
    • datacharmer/mysql-minimal-5.5 [-]
    • datacharmer/mysql-minimal-5.6 [-]
    • datacharmer/mysql-minimal-5.7 [-]
    • datacharmer/mysql-minimal-8.0 [-]
    • datacharmer/my-ubuntu [-]
    • datacharmer/my-debian [-]
    • datacharmer/my-centos [-]
  • The fastest customized MySQL deployment ever is a presentation where I show two ways of deploying MySQL on a custom server, with MySQL Sandbox and Docker.

  • The lightning talks will be held during the Community dinner at Booking.com, diamond sponsor of the conference. If you want to attend, you need to register, and if you want a free ticket for that dinner, there is still ONE SLOT for the lightning talks. Contact me directly with a title and an abstract if you want to be considered for that slot (finding my email is part of the challenge, but it should not be that difficult).

UPDATE: here's the lightning talks program. Percona should eventually add it to the schedule.

  • 19:00: arrival at Booking.com by boat
  • 19:15: welcoming speech and beginning of the LT
  • 20:00: food buffet opens, LT are not finished yet
  • 20:30: LT are done, buffet still open
  • 21:15: buffet closes

The talks accepted are the following:

  • Jaime Crespo
    dbhell: a tiny Python framework for the administration and monitoring of farms of MySQL servers
  • Federico Razzoli
    How to write slow and buggy stored procedures
  • Art Van Scheppingen
    Simple household rules that keep MySQL running
  • Pavel Tru
    What internal statistics every self-respecting dbms should have!
  • Giuseppe Maxia
    Fastest, smallest, newest: download and deploy MySQL at high speed
  • Ronald Bradford
    An Awe-Inspiring Journey Through the World of Numbers

Showing the hidden tables in MySQL 8 data dictionary

The freshly released MySQL 8.0 includes a data dictionary, which makes MySQL much more reliable. Thanks to this feature, we don't have any '.frm' files, and querying the information_schema is 30x to 100x faster than in previous versions.

One drawback of the implementation is that the data dictionary tables are hidden by design.

While the reason is fully understandable (they don't want to commit to an interface that may change in the future), many curious users are disappointed, because openness is the basis of good understanding and feedback.

The problem of accessing the dictionary tables can be split into three parts:

  • Finding the list of tables;
  • Finding the description of the tables;
  • Getting the contents of each table.

The first part is easily solved. We know that the data dictionary tables are accessed from some information_schema views (the views are defined during initialization, at which point the DD tables are readable).

For example:

show create view information_schema.tables\G
*************************** 1. row ***************************
View: TABLES
Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost`
SQL SECURITY DEFINER VIEW `TABLES` AS select `cat`.`name` AS
`TABLE_CATALOG`,`sch`.`name` AS `TABLE_SCHEMA`,`tbl`.`name` AS
`TABLE_NAME`,`tbl`.`type` AS `TABLE_TYPE`,if((`tbl`.`type` = 'BASE
TABLE'),`tbl`.`engine`,NULL) AS `ENGINE`,if((`tbl`.`type` = 'VIEW'),NULL,10)
AS `VERSION`,`tbl`.`row_format` AS `ROW_FORMAT`,`stat`.`table_rows` AS
`TABLE_ROWS`,`stat`.`avg_row_length` AS `AVG_ROW_LENGTH`,`stat`.`data_length`
AS `DATA_LENGTH`,`stat`.`max_data_length` AS
`MAX_DATA_LENGTH`,`stat`.`index_length` AS `INDEX_LENGTH`,`stat`.`data_free`
AS `DATA_FREE`,`stat`.`auto_increment` AS `AUTO_INCREMENT`,`tbl`.`created` AS
`CREATE_TIME`,`stat`.`update_time` AS `UPDATE_TIME`,`stat`.`check_time` AS
`CHECK_TIME`,`col`.`name` AS `TABLE_COLLATION`,`stat`.`checksum` AS
`CHECKSUM`,if((`tbl`.`type` =
'VIEW'),NULL,get_dd_create_options(`tbl`.`options`,if((ifnull(`tbl`.`partition_expression`,'NOT_PART_TBL')
= 'NOT_PART_TBL'),0,1))) AS
`CREATE_OPTIONS`,internal_get_comment_or_error(`sch`.`name`,`tbl`.`name`,`tbl`.`type`,`tbl`.`options`,`tbl`.`comment`)
AS `TABLE_COMMENT` from ((((`mysql`.`tables` `tbl` join `mysql`.`schemata`
`sch` on((`tbl`.`schema_id` = `sch`.`id`))) join `mysql`.`catalogs` `cat`
on((`cat`.`id` = `sch`.`catalog_id`))) left join `mysql`.`collations` `col`
on((`tbl`.`collation_id` = `col`.`id`))) left join `mysql`.`table_stats`
`stat` on(((`tbl`.`name` = `stat`.`table_name`) and (`sch`.`name` =
`stat`.`schema_name`)))) where (can_access_table(`sch`.`name`,`tbl`.`name`)
and (not(`tbl`.`hidden`)))
character_set_client: latin1
collation_connection: latin1_swedish_ci
1 row in set (0.00 sec)

Here we see many tables (such as mysql.schemata or mysql.collations) that don't show up when we run SHOW TABLES in the mysql database.

We can use a script like this to get the list of all tables:

MYSQL=~/sandboxes/msb_full_8_0_0/use
TABLES=$($MYSQL information_schema -BN -e 'show tables')

function show_tables
{
    for T in $TABLES
    do
        is_view=$($MYSQL information_schema -BN -e "show create table $T\G" | grep ALGORITHM)
        if [ -n "$is_view" ]
        then
            $MYSQL information_schema -e "show create table $T\G" \
                | perl -lne 'print $1 while /mysql.\..(\w+)/g'
        fi
    done
}

show_tables | sort | uniq

This script goes through all information_schema tables, skips the ones that are not views, and then searches the view definition for every table from the mysql database. The result is this:

catalogs
character_sets
collations
columns
foreign_key_column_usage
foreign_keys
index_column_usage
indexes
index_stats
schemata
tables
table_stats

Good. Now we have the list of tables that we can't see. The second operation is getting the description.

So, I looked at the source code, and I found out where the prohibition originated. From there, I saw that the table is accessible when the variable skip_dd_table_access_check is set. Looking at the variables inside the server, I did not find any skip_dd_table_access_check, as I was expecting, since it would not make sense to provide this information in the open after going through the pains of making all DD tables unreachable.

I searched the code for the string skip_dd_table_access_check and I found out how it is used in the test suite. The key is using the debug build of the MySQL server.

Using MySQL Sandbox, with a sandbox made from the full tarball of MySQL 8.0, I ran:

~/sandboxes/msb_full_8_0_0/restart --mysqld=mysqld-debug

Now I have loaded the debug-enabled server. Let's try:

$ ~/sandboxes/msb_full_8_0_0/use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 91
Server version: 8.0.0-dmr-debug MySQL Community Server - Debug (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql [localhost] {msandbox} (mysql) > show create table schemata\G
ERROR 3554 (HY000): Access to system table 'mysql.schemata' is rejected.

Having the debug build is not enough. We need to use the magic spell.

mysql [localhost] {msandbox} (mysql) > SET SESSION debug='+d,skip_dd_table_access_check';
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (mysql) > show create table schemata\G
*************************** 1. row ***************************
Table: schemata
Create Table: CREATE TABLE `schemata` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`catalog_id` bigint(20) unsigned NOT NULL,
`name` varchar(64) COLLATE utf8_bin NOT NULL,
`default_collation_id` bigint(20) unsigned NOT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`last_altered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `catalog_id` (`catalog_id`,`name`),
KEY `default_collation_id` (`default_collation_id`),
CONSTRAINT `schemata_ibfk_1` FOREIGN KEY (`catalog_id`) REFERENCES `catalogs` (`id`),
CONSTRAINT `schemata_ibfk_2` FOREIGN KEY (`default_collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
1 row in set (0.00 sec)

Yay! The table is now visible! We can modify the above script as follows:

mysql_tables=$(show_tables | sort | uniq)
for T in $mysql_tables
do
    echo "-- $T "
    $MYSQL -e "SET SESSION debug= '+d,skip_dd_table_access_check'; show create table mysql.$T\G"
done

And we get the description of every table in the data dictionary. Here it goes:

-- catalogs
*************************** 1. row ***************************
Table: catalogs
Create Table: CREATE TABLE `catalogs` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(64) COLLATE utf8_bin NOT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`last_altered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- character_sets
*************************** 1. row ***************************
Table: character_sets
Create Table: CREATE TABLE `character_sets` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(64) CHARACTER SET utf8 NOT NULL,
`default_collation_id` bigint(20) unsigned NOT NULL,
`comment` varchar(2048) COLLATE utf8_bin NOT NULL,
`mb_max_length` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`),
KEY `default_collation_id` (`default_collation_id`),
CONSTRAINT `character_sets_ibfk_1` FOREIGN KEY (`default_collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=249 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- collations
*************************** 1. row ***************************
Table: collations
Create Table: CREATE TABLE `collations` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(64) CHARACTER SET utf8 NOT NULL,
`character_set_id` bigint(20) unsigned NOT NULL,
`is_compiled` tinyint(1) NOT NULL,
`sort_length` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`),
KEY `character_set_id` (`character_set_id`),
CONSTRAINT `collations_ibfk_1` FOREIGN KEY (`character_set_id`) REFERENCES `character_sets` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=278 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- columns
*************************** 1. row ***************************
Table: columns
Create Table: CREATE TABLE `columns` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`table_id` bigint(20) unsigned NOT NULL,
`name` varchar(64) CHARACTER SET utf8 COLLATE utf8_tolower_ci NOT NULL,
`ordinal_position` int(10) unsigned NOT NULL,
`type` enum(/* removed */) COLLATE utf8_bin NOT NULL,
`is_nullable` tinyint(1) NOT NULL,
`is_zerofill` tinyint(1) DEFAULT NULL,
`is_unsigned` tinyint(1) DEFAULT NULL,
`char_length` int(10) unsigned DEFAULT NULL,
`numeric_precision` int(10) unsigned DEFAULT NULL,
`numeric_scale` int(10) unsigned DEFAULT NULL,
`datetime_precision` int(10) unsigned DEFAULT NULL,
`collation_id` bigint(20) unsigned DEFAULT NULL,
`has_no_default` tinyint(1) DEFAULT NULL,
`default_value` blob,
`default_value_utf8` text COLLATE utf8_bin,
`default_option` blob,
`update_option` varchar(32) COLLATE utf8_bin DEFAULT NULL,
`is_auto_increment` tinyint(1) DEFAULT NULL,
`is_virtual` tinyint(1) DEFAULT NULL,
`generation_expression` longblob,
`generation_expression_utf8` longtext COLLATE utf8_bin,
`comment` varchar(2048) COLLATE utf8_bin NOT NULL,
`hidden` tinyint(1) NOT NULL,
`options` mediumtext COLLATE utf8_bin,
`se_private_data` mediumtext COLLATE utf8_bin,
`column_key` enum('','PRI','UNI','MUL') COLLATE utf8_bin NOT NULL,
`column_type_utf8` mediumtext COLLATE utf8_bin NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `table_id` (`table_id`,`name`),
UNIQUE KEY `table_id_2` (`table_id`,`ordinal_position`),
KEY `collation_id` (`collation_id`),
CONSTRAINT `columns_ibfk_1` FOREIGN KEY (`table_id`) REFERENCES `tables` (`id`),
CONSTRAINT `columns_ibfk_2` FOREIGN KEY (`collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3450 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- foreign_key_column_usage
*************************** 1. row ***************************
Table: foreign_key_column_usage
Create Table: CREATE TABLE `foreign_key_column_usage` (
`foreign_key_id` bigint(20) unsigned NOT NULL,
`ordinal_position` int(10) unsigned NOT NULL,
`column_id` bigint(20) unsigned NOT NULL,
`referenced_column_name` varchar(64) CHARACTER SET utf8 COLLATE utf8_tolower_ci NOT NULL,
PRIMARY KEY (`foreign_key_id`,`ordinal_position`),
UNIQUE KEY `foreign_key_id` (`foreign_key_id`,`column_id`,`referenced_column_name`),
KEY `column_id` (`column_id`),
CONSTRAINT `foreign_key_column_usage_ibfk_1` FOREIGN KEY (`foreign_key_id`) REFERENCES `foreign_keys` (`id`),
CONSTRAINT `foreign_key_column_usage_ibfk_2` FOREIGN KEY (`column_id`) REFERENCES `columns` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- foreign_keys
*************************** 1. row ***************************
Table: foreign_keys
Create Table: CREATE TABLE `foreign_keys` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`schema_id` bigint(20) unsigned NOT NULL,
`table_id` bigint(20) unsigned NOT NULL,
`name` varchar(64) CHARACTER SET utf8 NOT NULL,
`unique_constraint_id` bigint(20) unsigned NOT NULL,
`match_option` enum('NONE','PARTIAL','FULL') COLLATE utf8_bin NOT NULL,
`update_rule` enum('NO ACTION','RESTRICT','CASCADE','SET NULL','SET DEFAULT') COLLATE utf8_bin NOT NULL,
`delete_rule` enum('NO ACTION','RESTRICT','CASCADE','SET NULL','SET DEFAULT') COLLATE utf8_bin NOT NULL,
`referenced_table_catalog` varchar(64) COLLATE utf8_bin NOT NULL,
`referenced_table_schema` varchar(64) COLLATE utf8_bin NOT NULL,
`referenced_table_name` varchar(64) COLLATE utf8_bin NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `schema_id` (`schema_id`,`name`),
UNIQUE KEY `table_id` (`table_id`,`name`),
KEY `unique_constraint_id` (`unique_constraint_id`),
CONSTRAINT `foreign_keys_ibfk_1` FOREIGN KEY (`schema_id`) REFERENCES `schemata` (`id`),
CONSTRAINT `foreign_keys_ibfk_2` FOREIGN KEY (`unique_constraint_id`) REFERENCES `indexes` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=46 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- index_column_usage
*************************** 1. row ***************************
Table: index_column_usage
Create Table: CREATE TABLE `index_column_usage` (
`index_id` bigint(20) unsigned NOT NULL,
`ordinal_position` int(10) unsigned NOT NULL,
`column_id` bigint(20) unsigned NOT NULL,
`length` int(10) unsigned DEFAULT NULL,
`order` enum('UNDEF','ASC','DESC') COLLATE utf8_bin NOT NULL,
`hidden` tinyint(1) NOT NULL,
UNIQUE KEY `index_id` (`index_id`,`ordinal_position`),
UNIQUE KEY `index_id_2` (`index_id`,`column_id`,`hidden`),
KEY `f2` (`column_id`),
CONSTRAINT `index_column_usage_ibfk_1` FOREIGN KEY (`index_id`) REFERENCES `indexes` (`id`),
CONSTRAINT `index_column_usage_ibfk_2` FOREIGN KEY (`column_id`) REFERENCES `columns` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- indexes
*************************** 1. row ***************************
Table: indexes
Create Table: CREATE TABLE `indexes` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`table_id` bigint(20) unsigned NOT NULL,
`name` varchar(64) CHARACTER SET utf8 COLLATE utf8_tolower_ci NOT NULL,
`type` enum('PRIMARY','UNIQUE','MULTIPLE','FULLTEXT','SPATIAL') COLLATE utf8_bin NOT NULL,
`algorithm` enum('SE_SPECIFIC','BTREE','RTREE','HASH','FULLTEXT') COLLATE utf8_bin NOT NULL,
`is_algorithm_explicit` tinyint(1) NOT NULL,
`is_visible` tinyint(1) NOT NULL,
`is_generated` tinyint(1) NOT NULL,
`hidden` tinyint(1) NOT NULL,
`ordinal_position` int(10) unsigned NOT NULL,
`comment` varchar(2048) COLLATE utf8_bin NOT NULL,
`options` mediumtext COLLATE utf8_bin,
`se_private_data` mediumtext COLLATE utf8_bin,
`tablespace_id` bigint(20) unsigned DEFAULT NULL,
`engine` varchar(64) COLLATE utf8_bin NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `table_id` (`table_id`,`name`),
KEY `tablespace_id` (`tablespace_id`),
CONSTRAINT `indexes_ibfk_1` FOREIGN KEY (`table_id`) REFERENCES `tables` (`id`),
CONSTRAINT `indexes_ibfk_2` FOREIGN KEY (`tablespace_id`) REFERENCES `tablespaces` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=235 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- index_stats
*************************** 1. row ***************************
Table: index_stats
Create Table: CREATE TABLE `index_stats` (
`schema_name` varchar(64) COLLATE utf8_bin NOT NULL,
`table_name` varchar(64) COLLATE utf8_bin NOT NULL,
`index_name` varchar(64) COLLATE utf8_bin NOT NULL,
`column_name` varchar(64) COLLATE utf8_bin NOT NULL,
`cardinality` bigint(20) unsigned DEFAULT NULL,
UNIQUE KEY `schema_name` (`schema_name`,`table_name`,`index_name`,`column_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- schemata
*************************** 1. row ***************************
Table: schemata
Create Table: CREATE TABLE `schemata` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`catalog_id` bigint(20) unsigned NOT NULL,
`name` varchar(64) COLLATE utf8_bin NOT NULL,
`default_collation_id` bigint(20) unsigned NOT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`last_altered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `catalog_id` (`catalog_id`,`name`),
KEY `default_collation_id` (`default_collation_id`),
CONSTRAINT `schemata_ibfk_1` FOREIGN KEY (`catalog_id`) REFERENCES `catalogs` (`id`),
CONSTRAINT `schemata_ibfk_2` FOREIGN KEY (`default_collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- tables
*************************** 1. row ***************************
Table: tables
Create Table: CREATE TABLE `tables` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`schema_id` bigint(20) unsigned NOT NULL,
`name` varchar(64) COLLATE utf8_bin NOT NULL,
`type` enum('BASE TABLE','VIEW','SYSTEM VIEW') COLLATE utf8_bin NOT NULL,
`engine` varchar(64) CHARACTER SET utf8 NOT NULL,
`mysql_version_id` int(10) unsigned NOT NULL,
`row_format` enum('Fixed','Dynamic','Compressed','Redundant','Compact','Paged') COLLATE utf8_bin DEFAULT NULL,
`collation_id` bigint(20) unsigned DEFAULT NULL,
`comment` varchar(2048) COLLATE utf8_bin NOT NULL,
`hidden` tinyint(1) NOT NULL,
`options` mediumblob,
`se_private_data` mediumtext COLLATE utf8_bin,
`se_private_id` bigint(20) unsigned DEFAULT NULL,
`tablespace_id` bigint(20) unsigned DEFAULT NULL,
`partition_type` enum(/*removed*/) COLLATE utf8_bin DEFAULT NULL,
`partition_expression` varchar(2048) COLLATE utf8_bin DEFAULT NULL,
`default_partitioning` enum('NO','YES','NUMBER') COLLATE utf8_bin DEFAULT NULL,
`subpartition_type` enum('HASH','KEY_51','KEY_55','LINEAR_HASH','LINEAR_KEY_51','LINEAR_KEY_55') COLLATE utf8_bin DEFAULT NULL,
`subpartition_expression` varchar(2048) COLLATE utf8_bin DEFAULT NULL,
`default_subpartitioning` enum('NO','YES','NUMBER') COLLATE utf8_bin DEFAULT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`last_altered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`view_definition` longblob,
`view_definition_utf8` longtext COLLATE utf8_bin,
`view_check_option` enum('NONE','LOCAL','CASCADED') COLLATE utf8_bin DEFAULT NULL,
`view_is_updatable` enum('NO','YES') COLLATE utf8_bin DEFAULT NULL,
`view_algorithm` enum('UNDEFINED','TEMPTABLE','MERGE') COLLATE utf8_bin DEFAULT NULL,
`view_security_type` enum('DEFAULT','INVOKER','DEFINER') COLLATE utf8_bin DEFAULT NULL,
`view_definer` varchar(93) COLLATE utf8_bin DEFAULT NULL,
`view_client_collation_id` bigint(20) unsigned DEFAULT NULL,
`view_connection_collation_id` bigint(20) unsigned DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `schema_id` (`schema_id`,`name`),
UNIQUE KEY `engine` (`engine`,`se_private_id`),
KEY `engine_2` (`engine`),
KEY `collation_id` (`collation_id`),
KEY `tablespace_id` (`tablespace_id`),
CONSTRAINT `tables_ibfk_1` FOREIGN KEY (`schema_id`) REFERENCES `schemata` (`id`),
CONSTRAINT `tables_ibfk_2` FOREIGN KEY (`collation_id`) REFERENCES `collations` (`id`),
CONSTRAINT `tables_ibfk_3` FOREIGN KEY (`tablespace_id`) REFERENCES `tablespaces` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=322 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- table_stats
*************************** 1. row ***************************
Table: table_stats
Create Table: CREATE TABLE `table_stats` (
`schema_name` varchar(64) COLLATE utf8_bin NOT NULL,
`table_name` varchar(64) COLLATE utf8_bin NOT NULL,
`table_rows` bigint(20) unsigned DEFAULT NULL,
`avg_row_length` bigint(20) unsigned DEFAULT NULL,
`data_length` bigint(20) unsigned DEFAULT NULL,
`max_data_length` bigint(20) unsigned DEFAULT NULL,
`index_length` bigint(20) unsigned DEFAULT NULL,
`data_free` bigint(20) unsigned DEFAULT NULL,
`auto_increment` bigint(20) unsigned DEFAULT NULL,
`checksum` bigint(20) unsigned DEFAULT NULL,
`update_time` timestamp NULL DEFAULT NULL,
`check_time` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`schema_name`,`table_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC

In addition to the tables referred to in views, there are three that are mentioned in the documentation but not implemented as views in information_schema: triggers, events, and routines.


show create table triggers\G
*************************** 1. row ***************************
Table: triggers
Create Table: CREATE TABLE `triggers` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`schema_id` bigint(20) unsigned NOT NULL,
`name` varchar(64) CHARACTER SET utf8 NOT NULL,
`event_type` enum('INSERT','UPDATE','DELETE') COLLATE utf8_bin NOT NULL,
`table_id` bigint(20) unsigned NOT NULL,
`action_timing` enum('BEFORE','AFTER') COLLATE utf8_bin NOT NULL,
`action_order` int(10) unsigned NOT NULL,
`action_statement` longblob NOT NULL,
`action_statement_utf8` longtext COLLATE utf8_bin NOT NULL,
`created` timestamp(2) NOT NULL DEFAULT CURRENT_TIMESTAMP(2) ON UPDATE CURRENT_TIMESTAMP(2),
`last_altered` timestamp(2) NOT NULL DEFAULT CURRENT_TIMESTAMP(2),
`sql_mode` set(/*removed*/) COLLATE utf8_bin NOT NULL,
`definer` varchar(93) COLLATE utf8_bin NOT NULL,
`client_collation_id` bigint(20) unsigned NOT NULL,
`connection_collation_id` bigint(20) unsigned NOT NULL,
`schema_collation_id` bigint(20) unsigned NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `schema_id` (`schema_id`,`name`),
UNIQUE KEY `table_id` (`table_id`,`event_type`,`action_timing`,`action_order`),
KEY `client_collation_id` (`client_collation_id`),
KEY `connection_collation_id` (`connection_collation_id`),
KEY `schema_collation_id` (`schema_collation_id`),
CONSTRAINT `triggers_ibfk_1` FOREIGN KEY (`schema_id`) REFERENCES `schemata` (`id`),
CONSTRAINT `triggers_ibfk_2` FOREIGN KEY (`table_id`) REFERENCES `tables` (`id`),
CONSTRAINT `triggers_ibfk_3` FOREIGN KEY (`client_collation_id`) REFERENCES `collations` (`id`),
CONSTRAINT `triggers_ibfk_4` FOREIGN KEY (`connection_collation_id`) REFERENCES `collations` (`id`),
CONSTRAINT `triggers_ibfk_5` FOREIGN KEY (`schema_collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
show create table events\G
*************************** 1. row ***************************
Table: events
Create Table: CREATE TABLE `events` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`schema_id` bigint(20) unsigned NOT NULL,
`name` varchar(64) CHARACTER SET utf8 NOT NULL,
`definer` varchar(93) COLLATE utf8_bin NOT NULL,
`time_zone` varchar(64) COLLATE utf8_bin NOT NULL,
`definition` longblob NOT NULL,
`definition_utf8` longtext COLLATE utf8_bin NOT NULL,
`execute_at` datetime DEFAULT NULL,
`interval_value` int(11) DEFAULT NULL,
`interval_field` enum('YEAR','QUARTER','MONTH','DAY','HOUR','MINUTE','WEEK','SECOND','MICROSECOND','YEAR_MONTH','DAY_HOUR','DAY_MINUTE','DAY_SECOND','HOUR_MINUTE','HOUR_SECOND','MINUTE_SECOND','DAY_MICROSECOND','HOUR_MICROSECOND','MINUTE_MICROSECOND','SECOND_MICROSECOND') COLLATE utf8_bin DEFAULT NULL,
`sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') COLLATE utf8_bin NOT NULL,
`starts` datetime DEFAULT NULL,
`ends` datetime DEFAULT NULL,
`status` enum('ENABLED','DISABLED','SLAVESIDE_DISABLED') COLLATE utf8_bin NOT NULL,
`on_completion` enum('DROP','PRESERVE') COLLATE utf8_bin NOT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`last_altered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`last_executed` datetime DEFAULT NULL,
`comment` varchar(2048) COLLATE utf8_bin NOT NULL,
`originator` int(10) unsigned NOT NULL,
`client_collation_id` bigint(20) unsigned NOT NULL,
`connection_collation_id` bigint(20) unsigned NOT NULL,
`schema_collation_id` bigint(20) unsigned NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `schema_id` (`schema_id`,`name`),
KEY `client_collation_id` (`client_collation_id`),
KEY `connection_collation_id` (`connection_collation_id`),
KEY `schema_collation_id` (`schema_collation_id`),
CONSTRAINT `events_ibfk_1` FOREIGN KEY (`schema_id`) REFERENCES `schemata` (`id`),
CONSTRAINT `events_ibfk_2` FOREIGN KEY (`client_collation_id`) REFERENCES `collations` (`id`),
CONSTRAINT `events_ibfk_3` FOREIGN KEY (`connection_collation_id`) REFERENCES `collations` (`id`),
CONSTRAINT `events_ibfk_4` FOREIGN KEY (`schema_collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC

show create table routines\G
*************************** 1. row ***************************
Table: ROUTINES
Create Table: CREATE TEMPORARY TABLE `ROUTINES` (
`SPECIFIC_NAME` varchar(64) NOT NULL DEFAULT '',
`ROUTINE_CATALOG` varchar(512) NOT NULL DEFAULT '',
`ROUTINE_SCHEMA` varchar(64) NOT NULL DEFAULT '',
`ROUTINE_NAME` varchar(64) NOT NULL DEFAULT '',
`ROUTINE_TYPE` varchar(9) NOT NULL DEFAULT '',
`DATA_TYPE` varchar(64) NOT NULL DEFAULT '',
`CHARACTER_MAXIMUM_LENGTH` int(21) DEFAULT NULL,
`CHARACTER_OCTET_LENGTH` int(21) DEFAULT NULL,
`NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
`NUMERIC_SCALE` int(21) DEFAULT NULL,
`DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
`CHARACTER_SET_NAME` varchar(64) DEFAULT NULL,
`COLLATION_NAME` varchar(64) DEFAULT NULL,
`DTD_IDENTIFIER` longtext,
`ROUTINE_BODY` varchar(8) NOT NULL DEFAULT '',
`ROUTINE_DEFINITION` longtext,
`EXTERNAL_NAME` varchar(64) DEFAULT NULL,
`EXTERNAL_LANGUAGE` varchar(64) DEFAULT NULL,
`PARAMETER_STYLE` varchar(8) NOT NULL DEFAULT '',
`IS_DETERMINISTIC` varchar(3) NOT NULL DEFAULT '',
`SQL_DATA_ACCESS` varchar(64) NOT NULL DEFAULT '',
`SQL_PATH` varchar(64) DEFAULT NULL,
`SECURITY_TYPE` varchar(7) NOT NULL DEFAULT '',
`CREATED` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`LAST_ALTERED` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`SQL_MODE` varchar(8192) NOT NULL DEFAULT '',
`ROUTINE_COMMENT` longtext NOT NULL,
`DEFINER` varchar(93) NOT NULL DEFAULT '',
`CHARACTER_SET_CLIENT` varchar(32) NOT NULL DEFAULT '',
`COLLATION_CONNECTION` varchar(32) NOT NULL DEFAULT '',
`DATABASE_COLLATION` varchar(32) NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Looking at the code again, I see that there are also tables tablespaces and version:

show create table tablespaces\G
*************************** 1. row ***************************
Table: tablespaces
Create Table: CREATE TABLE `tablespaces` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) COLLATE utf8_bin NOT NULL,
`options` mediumtext COLLATE utf8_bin,
`se_private_data` mediumtext COLLATE utf8_bin,
`comment` varchar(2048) COLLATE utf8_bin NOT NULL,
`engine` varchar(64) COLLATE utf8_bin NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC

show create table version\G
*************************** 1. row ***************************
Table: version
Create Table: CREATE TABLE `version` (
`version` int(10) unsigned NOT NULL,
PRIMARY KEY (`version`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC

Now we can try the last part of our task, i.e., querying the data dictionary for some specific info.

mysql [localhost] {msandbox} (mysql) > SET SESSION debug='+d,skip_dd_table_access_check';
mysql [localhost] {msandbox} (mysql) > select * from version;
+---------+
| version |
+---------+
| 1 |
+---------+

mysql [localhost] {msandbox} (mysql) > select id, name from schemata;
+----+--------------------+
| id | name |
+----+--------------------+
| 2 | information_schema |
| 1 | mysql |
| 3 | performance_schema |
| 4 | sys |
| 5 | test |
+----+--------------------+


mysql [localhost] {msandbox} (mysql) > select id, name, type, engine, mysql_version_id, comment from tables where name = 'user' and schema_id=1;
+----+------+------------+--------+------------------+-----------------------------+
| id | name | type | engine | mysql_version_id | comment |
+----+------+------------+--------+------------------+-----------------------------+
| 84 | user | BASE TABLE | InnoDB | 80000 | Users and global privileges |
+----+------+------------+--------+------------------+-----------------------------+

Now the data dictionary is much more readable!

DISCLAIMER: there may be a simpler or more elegant solution to this problem. The method shown here is what I got by researching. But in fact, if there is a better method, short of recompiling the server, I'd like to know.

WARNING: Don't do what I do in the following paragraphs!

To complete the experiment, I am going to do what the MySQL team does not want me to do at all.

First, I create a directory inside the data directory. As shown in data dictionary limitations, this is not supported. But since we can access the data dictionary ...


$ mkdir ~/sandboxes/msb_full_8_0_0/data/db1

Now for the felony part:

mysql [localhost] {msandbox} (mysql) > SET SESSION debug= '+d,skip_dd_table_access_check';
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (mysql) > select * from schemata;
+----+------------+--------------------+----------------------+---------------------+---------------------+
| id | catalog_id | name | default_collation_id | created | last_altered |
+----+------------+--------------------+----------------------+---------------------+---------------------+
| 1 | 1 | mysql | 8 | 0000-00-00 00:00:00 | 0000-00-00 00:00:00 |
| 2 | 1 | information_schema | 33 | 2016-09-25 18:06:00 | 2016-09-25 18:06:00 |
| 3 | 1 | performance_schema | 33 | 0000-00-00 00:00:00 | 0000-00-00 00:00:00 |
| 4 | 1 | sys | 33 | 0000-00-00 00:00:00 | 0000-00-00 00:00:00 |
| 5 | 1 | test | 8 | 0000-00-00 00:00:00 | 0000-00-00 00:00:00 |
+----+------------+--------------------+----------------------+---------------------+---------------------+
5 rows in set (0.00 sec)

mysql [localhost] {msandbox} (mysql) > insert into schemata values (6, 1, 'db1', 8, now(), now());
Query OK, 1 row affected (0.01 sec)

mysql [localhost] {msandbox} (mysql) > show schemas;
+--------------------+
| Database |
+--------------------+
| db1 | ## TA-DA!
| information_schema |
| mysql |
| performance_schema |
| sys |
| test |
+--------------------+

Now, pay attention! This is why the MySQL team doesn't want anyone to mess with the data dictionary tables.

DOUBLE WARNING! Don't do the following!


mysql [localhost] {msandbox} (mysql) > insert into schemata values (7, 1, 'db2', 8, now(), now());
Query OK, 1 row affected (0.01 sec)

mysql [localhost] {msandbox} (mysql) > show schemas;
+--------------------+
| Database |
+--------------------+
| db1 |
| db2 |
| information_schema |
| mysql |
| performance_schema |
| sys |
| test |
+--------------------+
7 rows in set (0.00 sec)

mysql [localhost] {msandbox} (mysql) > use db2
ERROR 1049 (42000): Unknown database 'db2'

There! I broke the system. Lesson learned: read, don't write, the data dictionary tables.

MySQL at Oracle Open World 2016

MySQL is a growing presence at Oracle Open World. While most of the headlines belong to the main products, where Oracle services are aiming at world domination, MySQL shared the spotlight, as it was announced to be part of the Oracle database cloud. It seems a logical move for Oracle: after all the effort to make MySQL 5.7 the biggest release ever, it stands to reason that it is offered as a competitive feature in its own database-as-a-service.

With this offer, Oracle is applying enterprise pricing and methodologies to a target of emerging companies. MySQL in the Oracle cloud differs from the competition by a few key points:

  • It's only MySQL 5.7. While this is the most advanced MySQL server available, users with existing deployments may exercise caution before adopting this cloud service. Companies that are starting now, instead, may take advantage of using the latest and greatest.
  • It's MySQL Enterprise edition, with a wealth of additional monitoring and management tools that are missing in other cloud offerings.
  • Unlike some popular competitors, it includes access to the server host, allowing DBAs to enable plugins, fine tune the deployment, and explore the operating system in addition to the database.
  • It includes Oracle support.

Time will tell if this offering will be successful. It may not appeal to all the users, but there is surely a category of power users who can take advantage of these features.

The surprises of the MySQL keynote did not stop at the cloud. We had already seen, one week before the conference, that MySQL 8.0 was released, with many juicy features. What came unexpected were two announcements:

  • MySQL group replication, a product that has been living in MySQL Labs for quite a while, was declared "release candidate" despite lacking documentation and being released in a format that discourages adoption, except by diehard hackers.
  • Another product has been released, again in the labs, with an ambitious mission. MySQL InnoDB Cluster is based on group replication and wants to be a 15-minute deployment of a highly available and scalable system, thanks to an enhanced version of MySQL Shell (the same one used for the document store). Its feature set is exciting, but what we have seen in the demos suggests that the product is still in the early stages of development.

With these moves, Oracle is showing two paths of MySQL development:

  • in the main path, which has produced the GA releases of MySQL 5.5, 5.6, and 5.7, the MySQL team is showing the positive influence of Oracle engineering, with a focus on security, performance, and stability.
  • in a parallel course, which started last April with the announcement of the MySQL document store and its related shell, the team wants to introduce new features into a GA release as plugins, with the reasoning that the main release will not be touched (thus avoiding the taboo of altering a stable product) but users are free to enable plugins and unleash new functionality.

The mix of traditional and agile releases is provoking exciting thoughts, albeit moderated by the fear of using experimental code together with a GA deployment.

The methodology of these releases is also baffling. It is unclear how mature the document store is. The plugin comes with the server, and it is accompanied by a huge set of documentation, which implies that it has been designed extensively and tested internally for a while, but the main tool for the feature, the MySQL Shell, is labeled as a development preview: not very encouraging. On the other hand, the latest plugin addition, MySQL group replication, which has been declared of release candidate quality, is still in the labs (no plugin in the server release), and without documentation.

All considered, while it is clear that Oracle is putting an enormous engineering effort into growing MySQL, I have the feeling that the replication features have been neglected, and the mixed fortunes of the group replication announcement confirm this belief.

The conference was useful to me. I had the chance of meeting many Oracle engineers and users, and discuss technical and community matters at length. My own presentation, a two-hour tutorial on MySQL operations in Docker was highly satisfactory, as it proved to be an interesting topic that was actively discussed by the audience.

Introducing BakaSQL, the MySQL web-based query executor

BakaSQL: a bit of history
When I joined this new company I was already aware there was a lot of work to do. There was no DBA Team to speak of; any MySQL related task was handled by the sysadmins, and the company was growing as fast as you can imagine, with 15-20 new employees arriving each month, most of them in the IT department. For sure, there was much fun ahead.

During my first week in the new job I immediately felt that something was not right. I was receiving too many DMLs to execute each day, spanning a wide set of servers, schemas and tables. This was not something I had seen before, so I started asking around to find out the reason behind it. It turned out that there were multiple reasons behind it:

  • there were known bugs in the applications that were never fixed
  • application configuration items were stored in MySQL, but there was no admin panel to modify them
  • accounting department had to apply corrections to the accounting records, which were also stored in MySQL, but again they had no tool to do this
  • lastly, most developers did not have write  privileges to production servers
You may ask yourself why, for example, bugs were not fixed, or admin panels were not built to allow for modification of the config items, or why the accounting department did not have a tool to apply the needed corrections. Well, so did I. The answer lay in the start-up nature of the company, where all effort was spent on developing new features, and there was little or no time left for fixing bugs or building administrative tools. Although I did not agree at all with this approach, I had to live with it.
But spending most of the day executing DMLs on behalf of other teams was certainly not something I was happy with. Mind you, executing a DML can be a quick task, but backing up the old rows that were going to be changed by said DML was time consuming and rather frustrating. Of course having a rollback plan was mandatory, as it can happen that the DML you have executed is not what the requestor actually wanted to execute... In any case, in order to bring everything back under control, I had to revoke write privileges from the developers who had them, and this of course added to the amount of DMLs I had to handle on a daily basis. Ouch!
So there I was, in my third week at the new job, with DMLs still coming my way, while there was much more important stuff to work on: setting up proper backups, applying retention to data, consolidating, reviewing, improving performance... you name it. I had to find a way to speed up this DML execution thing, in order to save my time for something more important and especially more interesting and challenging.
In the following days, I started working on a bash script that was meant to be called via the CGI interface, from a web browser. At the beginning it was a very basic script which accepted a database user and password and allowed me to execute the DML pasted into the web form. The script would parse the DML and, before executing it, would compute SQL statements that, if executed, would roll back the changes applied by the DML, saving these statements to a file. In case a rollback was needed, I could just open this file and execute the code contained therein to revert the change.
Week after week, the small script improved, mainly because every time I received a DML that was not properly handled by the script (read: the rollback code was not computed properly), I worked to improve it so that it would handle the new DML.
After a few months, the small script had become a large script, handling almost any DML that was sent to me just fine. Still, it was a bit on the slow side: parsing and building SQL in a bash script was not the best option. Of course there are dozens of other languages better suited for this task, but please bear with me: I am not a dev, I am just a DBA, and although my development skills are there, they are rather old school. Anyway, I was receiving tickets with up to 2000 update statements, or 1500 inserts, or 3000 deletes. Crazy, I know, but still I had to do something in order to handle them without having to wait 20 minutes each time.
I decided to rewrite some of the functions of the script in C, in order to boost execution speed, which was especially important when there was a large number of statements like in the examples above. So I introduced a poor man's profiler in the script, which allowed me to see where the script was spending most of its time. After a week, I looked at the profiling data and rewrote some of the most frequently used and/or time consuming routines in C, so that invocations of those routines in the script were replaced by calls to an external executable containing the replacement routine.
The results were amazing: the script was now able to execute even a large number of statements in a short time. Over the following months some bugs popped out here and there, most of them edge cases, and more features were added, for example proper handling of variables.
Fast forward to the present day: the small, then large, script has become our everyday tool. I named it BakaSQL (after a notorious Indian demon well known for its insatiable appetite) and it is not only used by the DBA Team, but also by some developers, who can modify their config tables independently, quickly and safely, without the need to raise a ticket to our team.
Welcome, BakaSQL!
So how does BakaSQL work? 
It connects to a MySQL instance using the supplied credentials, and executes the supplied DML statement(s), which should be delimited by the usual semicolon.
BakaSQL has a dry run feature built in, which should be used to check the DML statements before executing them and, especially, to verify that the generated rollback code is correct. You should first run your DML with the dry run checkbox ticked, then, if everything looks good, untick it and execute again. When running in dry-run mode, the statements are effectively sent to the MySQL server, however a ROLLBACK statement is issued at the end instead of a COMMIT; this ensures proper validation of the entire DML, including duplicate keys, FK constraint violations, etcetera.
One word of caution: the DML statements are executed within a single transaction. This is to enforce an all-or-nothing modification of the data. If you have a large number of statements in your DML, e.g. 2000 updates, please keep in mind that all the affected rows will be locked for the entire duration of the execution, until the commit (or rollback), and this may have side effects if some other process wants to modify one of these rows in the meantime. This also applies when running in dry-run mode.
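For illustration only, here is a minimal sketch of what the dry run described above amounts to on the wire (the table and statement are made up; BakaSQL's actual output may differ):

SET NAMES utf8;
START TRANSACTION;
-- the DML pasted into the web form, e.g.:
UPDATE app.config SET value = 'on' WHERE id = 42;
-- dry run: everything is validated (duplicate keys, FK violations, ...) and then thrown away
ROLLBACK;
-- real run: same statements, but ending with COMMIT instead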
So, here is what BakaSQL can offer:
  • automatic generation of rollback scripts that will revert the changes applied by your DML when executed
  • confirmation that every query uses a proper index (avoid locking table scans!!)
  • dry run capability (check DML statements and compute rollback code without modifying the data)
  • logging of everything executed against the database, including reference ticket # if supplied
  • transaction consistency: your DML code is run in a single transaction so an all-or-nothing modification is enforced
Rollback code is generated and shown to you during dry run. Please see the screenshot below for reference. When you execute, the rollback code is saved to a file named after the username that is executing, the ticket information, and the BakaSQL instance pid number.
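As a purely illustrative example (table, column and values are made up), for an UPDATE like the first statement below, the rollback code saved by the tool would simply restore the previous value by primary key:

-- submitted statement
UPDATE shop.orders SET status = 'cancelled' WHERE id = 42;
-- generated rollback, assuming the row previously had status 'paid'
UPDATE shop.orders SET status = 'paid' WHERE id = 42;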
BakaSQL will only execute DMLs. No DDLs are supported (better use pt-osc for those!) and you cannot run SELECTs through this tool, with the exception of variable initialization (see below).
BakaSQL enforces some restrictions on what you can pass as DML; some are intended to avoid situations that could cause harm to the database, while others are to avoid unnecessary complications and slowdowns in the script:
  • backticks (backquotes) are not allowed as they create trouble within bash, but there is a checkbox that you can use to wipe them out (check it when running in dry run mode)
  • USE statement is not allowed, table names should be qualified if more than one schema is in use in the DML
  • SET NAMES is not needed and should not be included, as the tool will run the transaction with SET NAMES utf8 by default
  • SHOW WARNINGS, COMMIT, ROLLBACK are not supported, we already run our own transaction anyways
  • comments (double dash at start of line) should be removed from the code 
  • strings should be enclosed in single quotes; double quotes are not supported as a string delimiter
  • INSERT IGNORE is not supported, so ask the requestor to make up his mind and decide whether he really wants that row inserted or not
  • auto_increment columns cannot be loaded with specific values, as these are meant to be populated by MySQL
  • WHERE conditions that have both an equality and an IN(), or multiple IN()s, are not supported (e.g.: where a = 1 and b in (2, 3) ) - these should be converted to all equalities
  • statements that use NOT in the WHERE condition are not accepted, as an index is rarely used in that case; these need to be rewritten to avoid the NOT operator
  • statements that use OR in the WHERE condition are not accepted for the same reason as above; these need to be converted to multiple statements instead
  • subqueries are not supported, so if there is a need to update some rows with the contents of some other table, CONCAT() can be used to generate updates by primary key that can then be submitted to the tool (see the sketch after this list)
  • setting values in UPDATE statements using functions is not supported
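Here is a hedged sketch of the CONCAT() workaround mentioned in the subquery restriction above (schema, tables and columns are made up): you run the SELECT yourself, then paste the generated UPDATE statements into BakaSQL:

SELECT CONCAT('UPDATE shop.orders SET amount = ', p.amount, ' WHERE id = ', o.id, ';')
FROM shop.orders o
JOIN shop.payments p ON p.order_id = o.id
WHERE o.amount IS NULL;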
There are also a few important limitations that you should be aware of when using the tool; some are listed above already, but I want to stress their importance, so...
  • BakaSQL only recognizes the single quote as a string delimiter, to keep parsing simple and fast; therefore, you need to properly escape single quotes in your strings if there are any
  • BakaSQL does not like backticks (backquotes), as they create problems within the bash script; however, there is a checkbox that you can use to quickly remove all of them from the DML before execution. If the damn developer likes to use MySQL reserved words as column names, you can check the backtick removal checkbox, run a dry run to wipe them all, then re-add the backticks only around the reserved words used as column names; that will work and allow you to proceed
  • sometimes, when you receive DML through a ticket, there is some weird character in it. I have verified that Microsoft Word, and a few other editors, let the user type UTF-8 characters that look exactly the same as their ASCII counterparts but are actually different. Specifically, the double quote, the single quote and even the blank (space) character may have been replaced by some bizarre UTF-8 counterpart that BakaSQL will not digest. If you see weird behaviour when executing BakaSQL, check your DML code for those weird substitutes by copying it into a 7-bit editor first
  • if the last statement in the DML does not get executed (or if the only statement in a single statement DML does not get executed) it is likely that the semicolon that should delimit the query (or the queries) is not there; please check that out!
Also, it is worth mentioning that there are a couple of edge cases where BakaSQL is not able to properly create rollback code:
  • the tool will not support changing (part of) a primary key with an update statement, as rollback code will be incorrect in this case; you will get an error if your DML tries to do that
  • if, in an update statement, the WHERE condition contains one of the columns touched by the update, the rollback may be incorrect; BakaSQL will try to fix simple cases for you, but remember to always check the rollback code. If this situation is detected, you will see a warning right before the rollback code pertaining to the offending statement
Although these limitations may seem too much of a burden at first, I can assure you that once your dev team gets acquainted with them, everything will work quite smoothly.

Variables
BakaSQL supports variables in the DML that is passed to it. Variable substitution happens in a recursive way, so it is also possible to have variables that reference other variables.  
Syntax for variable initialization is similar to standard MySQL, but only a specific syntax is supported by BakaSQL. See the following examples:
set @myvar1 = (select val from rick.t1 where val=12121 and id = 1021);
set @myvar2 = (select val from rick.t1 where id = 1385);
set @myvar3 = (select 666);
insert into t1 (val) values (@myvar1);
insert into t1 (val) values (@last_insert_id), (@myvar2), (@myvar3);

Basically, the only way to initialize a variable is:
set @variable = (select statement that returns one row);
but it is possible to assign one variable to another, using the same syntax, as well as to use a variable anywhere in a query that is part of your DML, as long as the variable has been initialized beforehand. Please note that you cannot reuse (reassign) variables, so you have to use a new variable name each time you initialize a new one.
A special variable exists, called @last_insert_id. This variable is automatically initialized when you insert a row into a table which has an auto increment primary key, and you can use it like any other variable in your DML. For this reason, the LAST_INSERT_ID() function is not supported by the tool; it is built in and automatically executed on your behalf when the tool detects that there is an auto increment column in the table you are inserting into.
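A hedged illustration of the typical pattern (table names are made up): the first insert goes into a table with an auto increment primary key, and the captured id is reused right after:

insert into shop.customers (name) values ('John Doe');
insert into shop.orders (customer_id, amount) values (@last_insert_id, 99.90);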
Installation
At this time, there is no packaging for BakaSQL (I think I already told you I'm a lazy guy), but its installation should be straightforward for anyone with basic Linux/UNIX skills. Basically, here is what you need to get it running:
  • a Linux/UNIX server (don't ask me about Windoze please)
  • Apache httpd (or nginx, or your preferred web server that supports CGI specs)
  • gcc compiler, to compile the BakaSQL utility
Installation requires that you know where your cgi-bin directory is. Place the bakasql.cgi script into it, make it executable by the web server, and edit it to tailor the following variables that you find at the top of the script:
  • HOSTFILE should point to a file that contains the list of the hostnames of your masters, one per line
  • BASE should point to a folder where BakaSQL will save logs, rollback code and other information; it should be writable by the user that the webserver runs under
  • MIN_REQ_CARDINALITY is the minimum cardinality, from 0 to 100, that you require for an index to be considered good by BakaSQL; sometimes, especially for small tables, you will get a false negative about the cardinality of an index, please see Ninja Mode below for a workaround
  • BAKAUTILS is the path to the bakautils C program, see below
There are a few other variables that refer to the MySQL profiling instance, you can safely ignore these and comment out the profiling code that you will find in the profile_out() routine (just comment out the insert that you'll find there).
You also need to compile the bakautils.c program that contains the rewritten routines. You will need gcc for that purpose, just use the following command to compile the program:
      gcc -O3 -o bakautils bakautils.c
Once compiled, place the executable in the $BAKAUTILS path and ensure it is executable by the user your web server is running under.
Each user that wants to use BakaSQL needs MySQL privileges granted from the IP address of the web server that BakaSQL runs on. The classic "INSERT, DELETE, UPDATE, SELECT" privileges are enough to use BakaSQL; no need to assign SUPER or other fancy privileges.
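A hedged example of such a grant (user, host, schema and password are placeholders):

CREATE USER 'jane'@'10.0.0.5' IDENTIFIED BY 'changeme';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'jane'@'10.0.0.5';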
Make sure you create the $BASE/log and $BASE/rollback folders as I don't think that BakaSQL will create these for you :-)  These should be writable by the user the web server runs under. The former will contain log files (normal log and mysql debug log) and the latter will contain the rollback files for each DML successfully executed by BakaSQL. You will need to apply your own retention on these, as they are not deleted by BakaSQL.
When you are unsure what's going on (e.g. BakaSQL seems to be hung during execution of a DML) the mysql debug log is a good place to look. It will contain a copy of everything that is sent by BakaSQL to your MySQL server, including server responses. If it seems stuck, check your MySQL server for locks, using SHOW ENGINE INNODB STATUS.
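Besides SHOW ENGINE INNODB STATUS, on MySQL/Percona 5.6 the information_schema lock tables give a quicker view of who is blocking whom; a minimal sketch (the column aliases are mine):

SELECT w.requesting_trx_id, w.blocking_trx_id,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;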
Ninja Mode
Sometimes you will see that BakaSQL  will refuse to execute your DML code and will say that there is no valid index, but an option called Ninja Mode will appear, allowing you to select its checkbox to override the limitation. This can be used in those situations where the cardinality of the index is low because the table is really small. Just ensure that the table is actually a small one, then click on the Ninja Mode checkbox and run execution again to override the cardinality check.


Where to get BakaSQL
You can always fetch the latest version of BakaSQL from my GitHub page here. Be sure to give me your feedback if you try it out!
Caveats
Although this tool has become an everyday tool for us, and is therefore stable and well tested, standard disclaimer applies. Your mileage may vary. Bugs may be there. Be sure to always check the rollback code using dry run before executing the changes. If you find bugs and/or have questions, I will be glad to help, just contact me using the form on the right side. Also, your comments are welcome!  I really hope this tool will help you as much as it helped me.
Enjoy!!




Common Table Expressions in MySQL

In a recent labs release a new feature was introduced by Oracle, or actually two very related new features were introduced. The first new feature is Common Table Expressions (CTEs), which is also known as WITH. The second feature is recursive CTEs, also known as WITH RECURSIVE.

An example of WITH:

WITH non_root_users AS (SELECT User, Host FROM mysql.user WHERE User<>'root')
SELECT Host FROM non_root_users WHERE User = ?

The non-CTE equivalent is this:

SELECT Host FROM
(SELECT User, Host FROM mysql.user WHERE User<>'root') non_root_users
WHERE User = ?

This makes it easier to understand the query, especially if there are many subqueries.

Besides using regular subqueries or CTEs you could also put the subquery in a view, but this requires more privileges. It is also difficult to change views later on, as other queries might have started to use them.

But views are still very useful. You can make it easier for others to query data or you can use views to restrict access to certain rows.

So CTEs are basically views which are bound to a query. This makes it possible to write complex queries in a way that is easy to understand. So don't expect CTEs to replace views.

In the PostgreSQL world CTEs have existed since version 8.4 (2009) and they are used a lot.

There are some cool things PostgreSQL allows you to do with CTEs and MySQL doesn't:

test=# create table t1 (id serial, name varchar(100));
CREATE TABLE
test=# insert into t1(name) values ('foo'),('bar');
INSERT 0 2
test=# with deleted_names as (delete from t1 where id = 2 returning name)
test-# select name from deleted_names;
name
------
bar
(1 row)

The blog post has more details and examples about recursive CTEs, the second new feature.

One of the examples is generating a range of numbers.
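As a hedged sketch based on the WITH RECURSIVE syntax shown in the labs release (exact behaviour may differ), generating the numbers 1 through 10 could look like this:

WITH RECURSIVE seq (n) AS (
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 10
)
SELECT n FROM seq;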

If you're familiar with PostgreSQL that will remind you of the generate_series function. This function can be used to generate a series of integers or timestamps. So I tried to make a stored procedure which, together with the recursive CTE support, would emulate generate_series in MySQL, but no such luck, as you can't return a table from a stored function yet.

In the PostgreSQL world CTEs are also used to trick the optimizer but note that this depends on the specific CTE implementation, so don't assume this trick will work in MySQL.

MariaDB has some support for the RETURNING keyword and in MariaDB 10.2 (not yet released) there is CTE support. Support for recursive CTEs is not yet present, see MDEV-9864 for the progress.

If you want to see the progress of MySQL and MariaDB on other modern SQL features check out this page.

Replication from MySQL 5.6 to MariaDB 10 may fail when using fractional seconds

One of the most interesting features introduced in MariaDB 10 was without doubt multi-source replication, or the ability for a slave to have multiple masters.

Tired of hearing complaints from developers who couldn't join tables because they were on different servers, I decided to give MariaDB a try to see if I could leverage this neat feature.
At the time, we had 5 main clusters, classic multi-master active/standby configuration, with some slaves under each of them. I wanted to create a "super slave" that would contain the dataset from all the five clusters, so that developers could connect to it and join at will.
The initial creation of the MariaDB superslave was easy. After installing the binaries, I just bootstrapped it with an xtrabackup copy of our main cluster and set up replication. All went just fine as expected.
Suddenly I realized that I couldn't use xtrabackup to bring the datasets from the other masters into the superslave, as the neat xtrabackup tool (which I love) only works at the tablespace level. So, armed with lots of patience and time, I imported each dataset into the superslave using mysqldump (which can also save the master position) and after approximately two weeks, I had the superslave running with an impressive (at least for me) 7 TB dataset.
Everything ran just fine for some days, then suddenly I started getting replication errors for queries coming from one particular master. Basically, the slave couldn't find a row that was deleted just fine on the master. My first response was to run pt-table-checksum (yes, you can run that with a multisource slave if you are careful) and I was surprised to find hundreds of chunks with differences for that table, considering that we had had no failures whatsoever since the superslave was bootstrapped.
I decided to sync the table anyways. 
To my surprise, another checksum run shortly after the sync still reported a large number of differences. This was really head scratching, especially considering that the problem only appeared on one specific table out of the thousands we had imported from the various clusters. I started to suspect that some odd bug was biting me and this table, for some reason, was the only one triggering it. It was your average small logging table, nothing fancy, but I noticed that there was a datetime column as part of the primary key. In any case, by the look of it, I had no idea what could cause the inconsistency between the master and the slave for this particular table.
I decided to peruse the binary logs to see what was passed from master to slave. At that time, we were using MIXED replication format, which is by the way the default. Using mysqlbinlog -v I printed out INSERT statements from one of the most recent binary log files from the master, with the intention of comparing the values passed via replication with the actual values as found on the MariaDB slave.
I grepped the mysqlbinlog output  for the INSERT statements on this table, and noticed that all of them contained fractional seconds. This is actually quite common with Java ORMs and libraries, they like to pass millisecond information, however I knew that the DATETIME and TIMESTAMP columns in MySQL do not save this information by default:  you have to create the column including the desired precision  for that, otherwise the fractional part is discarded.
At least, this is what I believed before actually comparing rows of this table between the master and the slave...
Comparing the MySQL master and the MariaDB slave, I found that, for some of the rows, there was a one second difference in the datetime column, which was part of the multicolumn primary key. That explained the replication errors, as of course, the PK on master and slave were, in fact, different and as a result, delete statements on the master couldn't find the row on the slave!
It was easy enough to correlate this data drift to a different behaviour between the MySQL master and the MariaDB slave regarding fractional seconds handling: by looking at some of the rows on both master and slave, it was clear that the difference only existed when the fractional part was greater than 500 - the master was rounding up to the next second, while the slave was always truncating.
A quick test confirmed the above:
master> insert into test_datetime values ('2015-10-05 10:00:00.501');
Query OK, 1 row affected (0.00 sec)

master> select * from test_datetime;
+---------------------+
| dt                  |
+---------------------+
| 2015-10-05 10:00:01 |
+---------------------+
1 row in set (0.00 sec)

MariaDB slave [rick]> select * from test_datetime;
+---------------------+
| dt                  |
+---------------------+
| 2015-10-05 10:00:00 |
+---------------------+
1 row in set (0.00 sec)
D'oh!
I realized then that the problem, although only showing up in this unfortunate situation, was actually much bigger than that, as Java was the main technology used in the Company and I was pretty sure that almost all inserted datetimes and timestamps contained the damn milliseconds!
Some googling suggested that, although MariaDB 10 had some new and neat features backported from 5.7, and almost all the features from 5.6, it was actually based on MySQL 5.5 - and fractional seconds rounding in MySQL was only introduced in 5.6.4.
Anyway, I raised a bug with MariaDB back in 2015 when I first discovered this, as it should at least be clearly documented that replicating from MySQL 5.6 into MariaDB 10 will lead to inconsistent slaves if fractional seconds are used. You can check the status of the bug yourself...
I have actually found a workaround to this bug, which basically consists in switching to ROW based replication. When using ROW based replication, the fractional seconds issue is no longer in play, as the binary log will contain the actual row content from the master, instead of a statement to insert the row values.
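For reference, a minimal sketch of the switch (note that a dynamic change only affects new sessions; keep the setting in my.cnf as well so it survives restarts):

-- on the master, affects new connections only
SET GLOBAL binlog_format = 'ROW';
-- and persist it in my.cnf: binlog_format = ROW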
My experience is based on MariaDB 10.0.21, but judging by the attention that the above bug has received, I believe this may still be valid today. 
Of course, it goes without saying that I had to spend 2 weeks running table checksum/table sync on all 7 TB of data in order to bring the superslave back to consistency, after setting replication format to ROW.  But it has been running flawlessly since.

Moving binary logs to a different filesystem without downtime

Another day at the office...

"Whoa, the write workload on our statistical cluster has  suddendly increased by 20% and the filesystem that holds the binary logs is no longer large enough".

Of course, I had warned the boss about this possibility when I received those servers with that tiny 250G filesystem for binlogs, but my red flag was just ignored as usual.

So here we are, presto I get this new shiny 600G LUN, but we need to stop the damn MySQL server in order to repoint the log_bin variable to the new storage area.

Dunno about you, but the idea of waking up at 2am to just perform a variable change is not something that makes me particularly happy. Not to mention the maintenance period that is needed around it....

So, I decided to investigate a bit about the possibilities to do such change without stopping the service.

As we all know, the log_bin variable is read-only and to change it, you need to restart mysqld. So I wanted to check whether there was a way to actually swap the current binlog directory with another one without making the server mad, losing data, or causing downtime.

First and foremost I wanted to check how the server would react to a rename of the binary logs folder. So I just issued the following:

mv /db/binlog /db/binlog_orig

I was very happy to find out that mysqld kept running, and kept writing to the currently open file, without any complaints. I decided then to push the test further by actually linking a new folder to the old path and then switching to the next binlog.

So I created a symlink to the new binlog area first:

mkdir /db/binlog2
ln -s /db/binlog2 /db/binlog

I then connected to MySQL and issued a FLUSH BINARY LOGS. To my surprise, MySQL happily closed the current binary log (still being written in the original folder) and started a new one in the new folder. Oh,  and the index file was recreated there automatically too.  Cool :-)

Still something was not good, as the operation resulted in the binary log file number being reset to 1. Not cool if you have slaves (which you likely have, since you have binary logging enabled).

So I had to guess how mysqld decides what the number of the next binlog file should be. My first guess was that it would look at the other files already in place. So I repeated the steps above, but before actually issuing the flush logs command, I created an empty file in the new binary log folder with the same number as the one currently being written to in the old folder.
Voilà, that did the trick!  The new binlog file was now correctly numbered :-)

But what about slaves?  I did all these tests with no slave connected.
So I quickly set up a slave instance and set up replication, then I retried my binary logs path exchange trick.

Right after the binary log file switch, the slave stopped, complaining that it couldn't read the next file. Basically, it had finished reading the log written in the old path, but could not "see" the next one written in the new path. I was able to verify that the old log was entirely read, and just the switch to the next one had failed:


[ERROR] Error reading packet from server: could not find next log; the first event 'dbstat01.014676' at 103351600, the last event read from '/storage/binlog/dbstat01.016653' at 136744026, the last byte read from '/storage/binlog/dbstat01.016653' at 136744026. (server_errno=1236)

That position was exactly where the file, in the old path, was closed due to the flush:
-rw-rw---- 1 mysql mysql 136744026 Sep 23 14:30 dbstat01.016653
Okay, I thought, I can just repoint the slave to the next binlog, position 4. But if the slave complained, that means the IO_Thread is probably still looking in the old path. What if I just stop the slave and start it again? Pronto, STOP SLAVE; START SLAVE... and the slave resumed replication from the new binlog file in the new path just fine!
So happy that I have saved myself a nightly maint! This was tested on Percona 5.6.31-77.0-log and previous versions, but I don't see why it shouldn't work on stock Oracle's or even on MariaDB.
If you are lazy like me, you can use the simple script that I crafted for this test; you can find it on my GitHub page.


Introducing the MySQL Cloud Service

The MySQL keynote at Oracle Open World 2016 announced the immediate availability of the MySQL Cloud Service, part of the larger Oracle Cloud offering. You can evaluate this now with a trial copy at cloud.oracle.com/mysql. MySQL server product manager Morgan Tocker gave two presentations at the event including a deep dive session.

This is the first release of the MySQL cloud service. As with all first releases there are some highlights and some pipeline features. All major cloud providers have MySQL offerings: AWS RDS (traditional, MAZ and Aurora), GCP Cloud SQL and Azure MySQL App Service. Users of OpenStack have Trove for comparison. I am not going to be evaluating features between cloud offerings in this post.

Highlights

The differentiating highlights as I see them from the presentation. I will provide a followup blog on actual usage at a later time.

  • MySQL 5.7
  • MySQL Enterprise Edition (a key difference with other cloud providers)
    • MySQL Enterprise features like Firewall, Thread Pool, Auditing etc
    • MySQL Enterprise support is included in price
    • MySQL Enterprise Monitor (MEM) is available and included in price
  • SSH access to machine
    • SSH access is as a non-privileged user (opc). This shows an intention towards a security-first policy.
  • Separated partitioning in OS/MySQL disk layout
  • ZFS. (Nice, I have missed using this)
  • Partition workloads optimized differently for data and for sequential logging
  • Two predefined backup policies, ZFS appliance (7 day retention) and cloud storage (30 day retention)
  • The managed backup philosophy is a weekly full backup, and daily incrementals
  • Sane default MySQL configuration (my.cnf)
  • Patching notification and capability. Automated backup before patching, and rollback capability
  • The Ksplice Oracle UEK functionality for improved host uptime with security vulnerabilities or kernel improvements

Overall an A effort on paper in V1 with the willingness to be useful, sane and flexible. In a future post I will evaluate the actual MySQL cloud capabilities with the available trial.

Observations

Features and functionality I see missing from V1 based on this presentation. Some are features I would like to see, some are just observations, and some are likely present features but not discussed. I will leave it up to the reader to decide which is which.

  • No MySQL 5.6. There was mention of supporting two versions moving forward (i.e. 5.7 and 8)
  • Separated MEM configuration and management. See my later thoughts on this.
  • MySQL topologies and easy to apply templates, including the future MySQL InnoDB Cluster
  • A longer archive storage retention capability policy for backups and/or binary logs (e.g. for compliance reasons)
  • The size of the pre-defined dedicated logging partition and binary logging may be problematic for longer retention capacity
  • Provisioned IOPS capacity or performance guarantees for Disk I/O
  • An ability to define MySQL configuration templates (e.g. dev, test, prod, master, slave etc) and be able to select these for a new instance deployment. You can of course manage this after the fact manually.
  • The compute workloads are more generic at present. There appear to be no optimized disk, network or CPU variants.
  • Improved key management being able to select an already defined SSH public key (e.g. with another instance)

Only offering MySQL 5.7 is an adoption impediment. This requires any organization with applications that are not greenfield to have already migrated to MySQL 5.7. I can understand the long-term rationale view here, but I see it as a clear limitation for more rapid adoption.

The details

The MySQL Cloud Service takes the hard parts out of managing MySQL. This is deployed in the Oracle Public Cloud, leveraging the fault-tolerant regional deployments in place. This Database as a Service (PaaS) helps to remove those annoying pieces of administration including backups, patches, monitoring etc. Powered by MySQL 5.7 Enterprise edition (the only cloud provider to offer this), the cloud system version in use is identical to the downloadable on-premise version. The Cloud service offers an initially optimized MySQL configuration of my.cnf to begin with, i.e. improvements on 5.7 defaults, and has a variety of compute workload sizes to choose from. Storage is a ZFS appliance, but there is no information on provisioned IOPS for intensive workloads. You can use the web interface or REST API endpoints to create, deploy and manage your cloud instances. The REST API endpoints were not demonstrated in this session.

The predefined disk layout for storage is a very sane configuration. The operating system (Oracle Unbreakable Linux 6) has a dedicated partition (not part of sizing). There is a dedicated and throughput-optimized ZFS LUN for data (what you size during setup), a dedicated and latency-optimized ZFS LUN for binary and InnoDB logs (which appears not to be initially sizable at present) and a dedicated ZFS LUN for backups. There is also secondary backup storage capacity by default in Cloud Storage.

The UI provides the capability to configure a MEM server and a MEM client. To conserve presentation time Morgan consolidated these into his initial demo instance. I feel there is room here to optimize the initial setup and to separate out the “management” server capabilities, e.g. selecting your MEM configuration, and by default offering just the MEM client authentication (if a MEM server is configured). For users not familiar with MySQL Enterprise features, separating the definition and the management in the initial creation stage is an optimization that removes complexity. There may even be an option for a getting-started quick setup step that can provision your MEM setup in a dedicated instance when none is detected or with a new account. Here is the flip side: an inexperienced user starting out may launch a MEM server with several test instances because the initial UI setup offers these as input fields, and that is not the goal when managing multiple servers. The current version of MEM shown was 3.2, with 3.3 planned. Version 3.3 includes its own web interface for backup management.

Some things are not in the initial release but I’m sure are on the roadmap. One is an upsize and downsize optimization. It would appear from the demo that when a compute size modification occurs, the existing MySQL instance is shut down and the VM is shut down; a new VM is then provisioned using the setup and disk partitions of the prior VM. An optimization would be to provision the new VM, start MySQL, then stop MySQL on the new, stop on the old, unmount on the old, mount on the new, and start MySQL. This removes the downtime of the VM provisioning step. Ideally I’d like to see the capability to perform this on a slave, and promote a slave more seamlessly. Practically, however, this has many more moving pieces than in theory, so the future use of MySQL Router is a solution. The upcoming MySQL InnoDB cluster will also suffer from the complexity of resizing and uptime availability, especially when nodes are of varying compute sizes. As mentioned, I would like to see pre-defined MySQL configurations. I would also like the option to pre-create multiple user authentications for instances, rather than having to specify one each time. For a class of servers, e.g. a load test environment with a master/slave setup and an application with several MySQL accounts, I can see the need for a means of bulk user and permission management.

Under the Hood Morgan talked about the InnoDB IO configuration optimizations, the number of IO Threads, use of O_DIRECT, the redo log size and buffer pool optimized to compute shape. The thread pool is enabled by default. The same considerations are in place for the operating system, Oracle Linux 6 UEK, MySQL task priority, memlock, and ext4 filesystem.

Again, those unfamiliar with MySQL Enterprise features will need greater help from the UI in understanding the features, capabilities and configuration of Firewall, Encryption, Authentication, Audit, Monitor, Backup and Thread Pool.

The SSH access is what gives this first release the control to be flexible. You can modify the MySQL configuration and incorporate configuration management processes. You can utilize on-system database restore capabilities. You can monitor physical resource utilization. I am unsure how much control there is to change (or break) the system and the kernel.

There was a lot to digest in the 45 minute practical demonstration session. I am sure as with more reading and evaluation there will be more to share. As the roadmap for MySQL InnoDB cluster develops I can see there being a good cadence on new features and functionality released for the MySQL Cloud Service.

My Live Tweets (as the presentation was happening)

First look at MySQL 8.0.0 Milestone

So, about ten days ago the MySQL Server Team released MySQL 8.0.0 Milestone to the world. One of the most unfortunate things about MySQL development is that it’s done behind closed doors, with the only hints of what’s to come arriving in maybe a note on a bug or such milestone releases that contain a lot of code changes. How much code change? Well, according to the text up on github for the 8.0 branch “This branch is 5714 commits ahead, 4 commits behind 5.7. ”

Way back in 2013, I looked at MySQL Code Size over releases, which I can again revisit and include both MySQL 5.7 and 8.0.0.

While 5.7 was a big jump again, we seem to be somewhat leveling off, which is a good thing. Managing to add features and fix long standing problems without bloating code size is good for software maintenance. Honestly, hats off to the MySQL team for keeping it to around a 130kLOC code size increase over 5.7 (that’s around 5%).

These days I’m mostly just a user of MySQL, pointing others in the right direction when it comes to some issues around it and being the resident MySQL grey(ing)beard(well, if I don’t shave for a few days) inside IBM as a very much side project to my day job of OPAL firmware.

So, personally, I’m thrilled about no more FRM, better Unicode, SET PERSIST and performance work. With my IBM hat on, I’m thrilled about the fact that it compiled on POWER out of the box and managed to work (I haven’t managed to crash it yet). There seems to be a possible performance issue, but hey, this is a huge improvement over the 5.7 developer milestones when run on POWER.

A lot of the changes are focused around usability, making it easier to manage and easier to run at at least a medium amount of scale. This is long overdue and it’s great to see even seemingly trivial things like SET PERSIST coming (I cannot tell you how many times that has tripped me up).

In a future post, I’ll talk about the FRM removal!

Percona XtraDB Cluster 5.5.41-25.11.1 is now available

Percona announces the new release of Percona XtraDB Cluster 5.5.41-25.11.1 (rev. 855) on September 22, 2016. Binaries are available from the downloads area or our software repositories.

Bugs Fixed:
  • Due to security reasons ld_preload libraries can now only be loaded from the system directories (/usr/lib64, /usr/lib) and the MySQL installation base directory. This fix also addresses an issue where the limiting didn’t work correctly for relative paths. Bug fixed #1624247.
  • Fixed possible privilege escalation that could be used when running REPAIR TABLE on a MyISAM table. Bug fixed #1624397.
  • The general query log and slow query log cannot be written to files ending in .ini and .cnf anymore. Bug fixed #1624400.
  • Implemented restrictions on symlinked files (error_log, pid_file) that can’t be used with mysqld_safe. Bug fixed #1624449.

Other bugs fixed: #1553938.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Lesson 124 in why scales on a graph matter…

The original article presented two graphs: one of MariaDB searches (which are increasing) and the other showing MySQL searches (decreasing or leveling out). It turns out that the y axis REALLY matters.

I honestly expected better….

@mariadb that Trends graph is misleading. Steady over 12months, flattening from peak 12+y ago, well before MariaDB. pic.twitter.com/sP6H65FEvR

— Stewart Smith (@stewartsmith) September 22, 2016

Try “Will @mariadb ever replace @postgresql “. At current rate, equal on Google Trends in 12-20 years if no pg growth. pic.twitter.com/6BP9FuU1RU

— Stewart Smith (@stewartsmith) September 22, 2016

MySQL 8.0 Labs: [Recursive] Common Table Expressions in MySQL (CTEs)

I realize that these last months I have not published anything... it's because I was quite busy developing a prominent feature: [Recursive] Common Table Expressions, also known as
  • [recursive] CTE,
  • [recursive] subquery factoring,
  • WITH [RECURSIVE] clause.
This feature is available today, in a Labs release of the MySQL Server.
In my post here, you will find more information: syntax, capacities, examples...
Feels good to have this big piece of work finally out!

MySQL team: make it easy to give you feedback!

There was a bold announcement during the MySQL Keynote at Oracle Open World: a new product that will mix with the existing GA server, called MySQL InnoDB Cluster. This is an evolution of MySQL group replication, which has been in the labs for a long time, and of the MySQL shell, which was introduced as a side feature last April. The boldness I mentioned is on account of wanting to add to a GA server something that was defined as a release candidate despite never having been out of the labs. The product is interesting as it promises a quick and painless cluster deployment, with built-in high availability and scalability.

What surprised me most was a heartfelt and urgent request to test this new product and provide feedback, hinting that it would be GA soon.

Here are some thoughts on this matter:

  • A product in the labs is perceived as pre-release, i.e. less than beta quality. This is what happened with previous releases on labs: GTID, multi-source replication, and data dictionary were all released in labs before eventually being integrated in the main project.
  • Putting a product in labs again and declaring it release candidate feels odd.
  • The problem with labs is that the previews are distributed with a limited set of packages, and without documentation. The brave souls that test these packages need to find information about the new software in blog posts or dig in the source code, without any assurance that this package would ever become officially supported.

There is some confusion about which package is of which quality. From the keynote it looked like MySQL InnoDB Cluster (MIC) was the one being RC, but when I asked for clarifications it seems that group replication is RC (from its niche in the labs) while MIC is still of unknown quality. From what I saw in the demos it seems quite alpha to me.

Back to the main topic. MySQL want feedback, but provides software in the labs, in a way that is not easy to use. Specifically:

  • There is an OSX package that contains .dmg files, implying that I should install those in my main computer. Given that the perceived quality is low, I'd say "No, thanks," as I don't want to risk my laptop with alpha quality installed as root. Besides, this is cluster software, so I would need at least three nodes to make it work. There is a "sandbox mode" that allows you to simulate three nodes on a single server, but this still requires a main installation, with all the risks involved. No, thanks, again.
  • There are only .rpm files for Linux, which means that I need either servers or VMs where I can install software as root. I have the same concerns as for the Mac: while VMs can be thrown away and remade, it is still a big investment in time and resources to test something new.
  • Missing are generic .tar.gz binaries, which would allow users to install in user space, without affecting the operating system or other MySQL servers.
  • Missing are also Docker packages, which would allow users to test quickly and painlessly without any risk.
  • Finally, and probably most importantly, there is no documentation. If this is RC software, there should be at least a couple of workloads that could be included in the labs packages for reference.

Summing up, I have a message for the MySQL team product managers and developers: if the software is meant to be usable, i.e. more than a proof of concept like other things in the labs, move it to the downloads section, same as happened with the MySQL Shell and the document store early this year. Also, provide Docker images early on, so that people can test quickly and without much risk. This exercise alone would uncover bugs just while you are doing it. And please add documentation for the feature you want feedback on. If the manual is not ready, don't limit the docs to a skinny blog post, but add the specifications used to create the feature (workloads) or even an alpha version of the docs. In short, if the software is worth giving feedback on, it should be treated with more respect than it is shown right now. And the same respect goes for the users whom you are asking for feedback.
