Planet MySQL

5 Essential Practices for Database Security

Shane Johnson, Thu, 08/24/2017 - 01:02

Data breaches are expensive. Between business disruption, loss of customer confidence, legal costs, regulatory fines and any direct losses that may result from a ransomware attack, for instance, the effects can add up to millions. The best defense is a good offense, so let’s look at five key practices to keep your database secure: protect, audit, manage, update and encrypt.

1. Protect against attacks with a database proxy

A database proxy, or gateway proxy, sits between your application and your database, accepting connections from applications and then, on behalf of those applications, connecting to the database. An intelligent database proxy such as our own MaxScale provides filters and modules to deliver security, reliability, scalability, and performance benefits.

The MaxScale Database Firewall Filter parses queries as they come through the filter, and can block things that don't match the whitelist of query types that you want to let through. For example, you might say that a given connection can only do updates and inserts, and that another connection must match certain regular expressions, etc.
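As an illustration, a whitelist-style setup might look something like the sketch below. This is a hedged example based on MaxScale-2.x-era dbfwfilter syntax; the service wiring, file path, rule names and patterns are all hypothetical, so check the Database Firewall Filter documentation for your version before using anything like it.

# maxscale.cnf (excerpt): attach the firewall filter to a service
[DatabaseFirewall]
type=filter
module=dbfwfilter
rules=/etc/maxscale.d/rules.txt
action=allow   # whitelist mode: block anything no rule matches

# /etc/maxscale.d/rules.txt: what the application account may do
rule app_queries match regex '^(INSERT|UPDATE|SELECT)[[:space:]]'
users appuser@% match any rules app_queries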

Proxies like MaxScale also protect you against DDoS attacks: When too many connections are coming directly into the database server, it can become overloaded. But a proxy absorbs some of that load to limit the effects of such an attack.

2. Set up auditing and robust logging

Auditing and logging go hand-in-hand, but audit logs are much more sophisticated than the general log. Audit logs give you all the information you need to investigate suspicious activities and to conduct root-cause analysis if you do experience a breach. Furthermore, audit logs help ensure compliance with regulations such as GDPR, PCI, HIPAA and SOX. (Learn more about addressing the GDPR with MariaDB TX.)

The MariaDB audit plugin can log a lot of information: all incoming connections, all query executions, and even all accesses of individual tables. You can see who has accessed a table at a given time, and who has inserted or deleted data. The audit plugin can log to a file or the syslog, so if you have existing workflows that rely on the syslog, you can tie straight into those.  
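For illustration, enabling the plugin dynamically might look like the sketch below. The variable names are the real server_audit ones, but the log path is hypothetical, and on production systems you would normally set these in the configuration file so they survive restarts.

INSTALL SONAME 'server_audit';                                   -- load the MariaDB audit plugin
SET GLOBAL server_audit_events = 'CONNECT,QUERY,TABLE';          -- what to record
SET GLOBAL server_audit_output_type = 'file';                    -- or 'syslog' to reuse syslog workflows
SET GLOBAL server_audit_file_path = '/var/log/mysql/audit.log';  -- hypothetical path
SET GLOBAL server_audit_logging = ON;                            -- start auditing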

3. Practice stringent user account management

It’s vital that you manage your database user accounts carefully. This is true for nearly all aspects of your IT ecosystem, so we won’t go into detail here. Instead, we’ll simply remind you of the key aspects of user account management (a short SQL sketch follows the list):

  • Allow root access only from local clients.

  • Always use strong passwords.

  • Have a separate database user account for each of your applications.

  • Restrict the number of IP addresses that can access your database server.
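As a hedged sketch of those points in SQL (the account, password, schema and subnet are all hypothetical):

-- One account per application, strong password, access only from the app subnet
CREATE USER 'orders_app'@'10.0.1.%' IDENTIFIED BY 'use-a-long-random-passphrase-here';
GRANT SELECT, INSERT, UPDATE, DELETE ON orders_db.* TO 'orders_app'@'10.0.1.%';

-- Root access only from local clients: check that no remote root accounts exist
SELECT user, host FROM mysql.user
 WHERE user = 'root' AND host NOT IN ('localhost', '127.0.0.1', '::1');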

4. Keep your database software and OS up-to-date

We all know the reasons to keep your software up-to-date, but that doesn’t stop a great many of us from running legacy operating systems and several-versions-old database servers. Let this serve as a reminder that keeping everything current is the only way to protect your data from all the latest threats.

This applies not only to your server software, but to your OS. The WannaCry ransomware attack was made possible by lackadaisical application of Windows OS security patches, after all.

5. Encrypt sensitive data – in your app, in transit, and at rest

We’ve saved the least commonly implemented practice for last. Many organizations give encryption short shrift, but it can be quite valuable. After all, it reduces the incentive for hackers if the work of attempting to break a cipher awaits them after they gain access.

The first phase of encryption happens in the application, before the data gets to your database. If the data is encrypted in the application, then a hacker who compromises your database can’t see what the data is. (This works only for data that is not a key, however.)

Next is encryption of data in transit. That means the data is encrypted over the network as it moves from the client onto your database server (or onto a proxy such as MaxScale). This is basically the same concept as using HTTPS in your web browser. Obviously the server can see the information because it needs to read the form you filled out, and you can read the information because you typed it into the form, but no one in between you and the server should be able to read it.
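On the database side you can also refuse unencrypted connections per account. A small sketch, reusing the hypothetical application account from above (on servers older than MySQL 5.7/MariaDB 10.2, use GRANT ... REQUIRE SSL instead of ALTER USER):

ALTER USER 'orders_app'@'10.0.1.%' REQUIRE SSL;
-- from a connected client, verify the session really is encrypted:
SHOW SESSION STATUS LIKE 'Ssl_cipher';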

Finally, we come to encryption of data at rest. You can use this to encrypt InnoDB tablespaces, the InnoDB redo log and the binary log. This means you can encrypt almost everything written to disk on a MariaDB server (10.1 and later – remember what we said about keeping your software up to date?).
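For example, once a key management plugin is configured, an individual table can be encrypted and the result verified. A sketch assuming MariaDB 10.1+ with a key management plugin (such as file_key_management) already set up; the table is hypothetical:

-- Encrypt an InnoDB table with key 1 from the configured key store
CREATE TABLE secrets (
  id INT PRIMARY KEY,
  payload TEXT
) ENGINE=InnoDB ENCRYPTED=YES ENCRYPTION_KEY_ID=1;

-- List tablespaces and their encryption status
SELECT name, encryption_scheme
  FROM information_schema.INNODB_TABLESPACES_ENCRYPTION;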

Now that you know these five essential security practices, how about delving deeper? Download the new white paper: MariaDB TX: Security Overview.


Percona Server for MySQL 5.5.57-38.9 Is Now Available

Percona announces the release of Percona Server for MySQL 5.5.57-38.9 on August 23, 2017. Based on MySQL 5.5.57, including all of its bug fixes, Percona Server for MySQL 5.5.57-38.9 is now the current stable release in the 5.5 series.

Percona Server for MySQL is open-source and free. You can find release details in the 5.5.57-38.9 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

NOTE: Red Hat Enterprise Linux 5 (including CentOS 5 and other derivatives), Ubuntu 12.04 and older versions are no longer supported by Percona software. These platforms have reached end of life, will not receive updates, and are not recommended for use in production.

New Features

  • #1702903: Added support for OpenSSL 1.1.

Platform Support

  • Added support and packages for Debian 9 (stretch). Covers only the amd64 architecture.
  • Stopped providing packages for RHEL 5 (CentOS 5) and Ubuntu 12.04.

Bugs Fixed

  • #1622985: Downgraded diagnostic severity from warning to normal note for successful doublewrite buffer recovery.
  • #1661488: Fixed crash of debug server build when two clients connected, one of them enabled userstat and ran either FLUSH CLIENT_STATISTICS or FLUSH USER_STATISTICS, and then both clients exited.
  • #1673656: Added support for wildcards and Subject Alternative Names (SAN) in SSL certificates for --ssl-verify-server-cert. For more information, see the compatibility matrix at the end of this post.
  • #1705729: Fixed the postinst script to correctly locate the datadir.
  • #1709834: Fixed the mysqld_safe script to correctly locate the basedir.
  • Minor fixes: #1160986, #1684601, #1689998, #1690012.


Compatibility Matrix

Feature                   YaSSL   OpenSSL < 1.0.2   OpenSSL >= 1.0.2
‘commonName’ validation   Yes     Yes               Yes
SAN validation            No      Yes               Yes
Wildcards support         No      No                Yes

Find the release notes for Percona Server 5.5.57-38.9 in our online documentation. Report bugs on the launchpad bug tracker.

Percona Monitoring and Management 1.2.2 is Now Available

Percona announces the release of Percona Monitoring and Management 1.2.2 on August 23, 2017.

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

This release contains bug fixes related to performance and introduces various improvements. It also contains an updated version of Grafana.

Changes in PMM Server

We introduced the following changes in PMM Server 1.2.2:

Bug fixes
  • PMM-927: The error “Cannot read property ‘hasOwnProperty’ of undefined” was displayed on the QAN page for MongoDB.

    After enabling monitoring and generating data for MongoDB, the PMM client showed the following error message on the QAN page: “Cannot read property ‘hasOwnProperty’ of undefined”. This bug is now fixed.

  • PMM-949: Percona Server was not detected properly, and the log_slow_* variables were not properly detected.

  • PMM-1081: Performance Schema Monitor treated queries that didn’t show up in every snapshot as new queries, reporting a wrong count between snapshots.

  • PMM-1272: MongoDB: the query abstract was empty. This bug is now fixed.

  • PMM-1277: The QPS graph used an inappropriate Prometheus query. This bug is now fixed.

  • PMM-1279: The MongoDB summary did not work in QAN2 if mongodb authentication was activated. This bug is now fixed.

  • PMM-1284: Dashboards pointed to QAN2 instead of QAN. This bug is now fixed.

Improvements
  • PMM-586: The wsrep_evs_repl_latency parameter is now monitored in Grafana dashboards

  • PMM-624: The Grafana User ID remains the same in the pmm-server docker image

  • PMM-1209: OpenStack support is now enabled during the OVA image creation

  • PMM-1211: It is now possible to configure a static IP for an OVA image

    The root password can only be set from the console. If the root password is not changed from the default, a warning message appears on the console requesting the user to change the root password on the root user's first console login. Web/SSH users can neither use the root account password nor detect whether the root password is set to the default value.

  • PMM-1221: Grafana updated to version 4.4.3

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

We’re always happy to help! Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, please use the PMM project in JIRA.

Migrating Data from an Encrypted Amazon MySQL RDS Instance to an Encrypted Amazon Aurora Instance

In this blog post, we’ll discuss migrating data from encrypted Amazon MySQL RDS to encrypted Amazon Aurora.

One of my customers wanted to migrate from an encrypted MySQL RDS instance to an encrypted Aurora instance. They have a pretty large database, so using mysqldump or a similar tool was not suitable for them. They also wanted to set up replication between the old MySQL RDS and new Aurora instances.

Spoiler: this is possible without any logical dump.

At first, I checked Amazon’s documentation on encryption and found nothing about this type of migration. Worse, taken at face value the documentation suggests that replication or migration between encrypted MySQL RDS and encrypted Aurora is not supported at all. All instructions are for either “MySQL RDS to MySQL RDS” or “Aurora to Aurora” setups. For example, the documentation says here:

You can create Read Replicas of both encrypted and unencrypted DB clusters. The Read Replica must be encrypted if the source DB cluster is encrypted.

When I tried to create an Aurora read replica of my encrypted MySQL RDS instance, however, the “Enable Encryption” select control was grayed out and I could not change “No” to “Yes”.

I had to find a workaround.

Another idea was creating an encrypted MySQL RDS replica and migrating it to Aurora. While creating an encrypted MySQL replica is certainly possible (in fact, all replicas of encrypted instances must be encrypted), it was not possible to migrate it to any other instance using the standard “Migrate Latest Snapshot” option:

However, the documentation specified that Aurora and MySQL RDS use the same AWS KMS key. As a result, both kinds of encryption should be compatible (if not practically the same). Amazon also has the “AWS Database Migration Service”, whose FAQ has this promising section:

Q. Can I replicate data from encrypted data sources?

Yes, AWS Database Migration Service can read and write from and to encrypted databases. AWS Database Migration Service connects to your database endpoints on the SQL interface layer. If you use the Transparent Data Encryption features of Oracle or SQL Server, AWS Database Migration Service will be able to extract decrypted data from such sources and replicate it to the target. The same applies to storage-level encryption. As long as AWS Database Migration Service has the correct credentials to the database source, it will be able to connect to the source and propagate data (in decrypted form) to the target. We recommend using encryption-at-rest on the target to maintain the confidentiality of your information. If you use application-level encryption, the data will be transmitted through AWS Database Migration Service as is, in encrypted format, and then inserted into the target database.

I decided to give it a try. And it worked!

The next step was to make this newly migrated encrypted Aurora instance a read replica of the original MySQL RDS instance. This is made easier in part by a great how-to on migration by Adrian Cantrill. As suggested there, you only need to find the master’s binary log file and current position, supply them to the stored routine mysql.rds_set_external_master, and then start replication using the stored routine mysql.rds_start_replication.
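In outline, the calls look like this. A sketch only: the endpoint, credentials and binlog coordinates are hypothetical placeholders for the values you read from the master.

-- On the new Aurora instance: point at the original RDS master
CALL mysql.rds_set_external_master(
  'source-rds.abc123.us-east-1.rds.amazonaws.com',  -- hypothetical master endpoint
  3306,
  'repl_user', 'repl_password',                     -- replication credentials
  'mysql-bin-changelog.000123', 4,                  -- binlog file and position from the master
  0);                                               -- ssl_encryption flag
CALL mysql.rds_start_replication;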

Conclusion: While AWS Database Migration Service has limitations for both source and target databases, this solution allows you to migrate encrypted instances easily and securely.


Automatically Dropping Old Partitions in MySQL and MariaDB

A MariaDB Support customer recently asked how they could automatically drop old partitions after 6 months. MariaDB and MySQL do not have a mechanism to do this automatically out-of-the-box, but it is not too difficult to create a custom stored procedure and an event to call the procedure on the desired schedule. In this blog post, I will show one ... Read More
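The full post has the details; in outline, the kind of procedure and event it describes might look something like this hedged sketch. It assumes a hypothetical table `metrics` RANGE-partitioned on TO_DAYS(created), and requires event_scheduler=ON.

DELIMITER $$

CREATE PROCEDURE drop_old_partitions(IN p_table VARCHAR(64), IN p_months INT)
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE pname VARCHAR(64);
  -- RANGE partitions whose upper bound is older than the cutoff
  DECLARE cur CURSOR FOR
    SELECT partition_name
      FROM information_schema.partitions
     WHERE table_schema = DATABASE()
       AND table_name = p_table
       AND partition_description != 'MAXVALUE'
       AND partition_description + 0 < TO_DAYS(NOW() - INTERVAL p_months MONTH);
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  drop_loop: LOOP
    FETCH cur INTO pname;
    IF done THEN LEAVE drop_loop; END IF;
    SET @ddl = CONCAT('ALTER TABLE ', p_table, ' DROP PARTITION ', pname);
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END$$

-- Run the cleanup once a day (requires event_scheduler=ON)
CREATE EVENT IF NOT EXISTS purge_metrics_partitions
  ON SCHEDULE EVERY 1 DAY
  DO CALL drop_old_partitions('metrics', 6)$$

DELIMITER ;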

Connecting PHP to MySQL on Bluemix

Most of the PHP I write runs on Bluemix - it's IBM's self-service cloud, and since I work there, they pay for my accounts :) There are a bunch of databases you can use there, mostly open source offerings, and of course with PHP I like to use MySQL. Someone asked me for my connection code since it's a bit tricky to grab the credentials that you need, so here it is.

Bluemix Environment Variables

In Bluemix, you can simply create a MySQL database (look for "compose-for-mysql" in the catalog), create a set of credentials, and then bind it to your app. I should blog a few more of my PHP-on-Bluemix tricks: you can run a selection of PHP versions, and it's also possible to add the extensions you need. I have found it does have what I need once I figure out how to configure it!

Once the database is bound to the application, then your PHP code running on Bluemix will have an environment variable called VCAP_SERVICES. The variable contains a JSON string, with top-level elements for each of the services that are bound to your application. Services will usually be the databases you are using but could also be some of the APIs for example.

I like to decode VCAP_SERVICES to an array so I can work with it, like this:

$vcap_services = json_decode($_ENV['VCAP_SERVICES'], true);

Get PDO Connected

To connect to PDO with MySQL, we need to supply a few different values:

  • host
  • port
  • username
  • password
  • database name

VCAP_SERVICES supplies a URI containing all those elements, but does not split them out. The PHP function parse_url() can help us with this. Here's the full code block that I use to connect in my applications:

$vcap_services = json_decode($_ENV['VCAP_SERVICES'], true);
$uri = $vcap_services['compose-for-mysql'][0]['credentials']['uri'];
$db_creds = parse_url($uri);
$dbname = "your_database_name";
$dsn = "mysql:host=" . $db_creds['host'] . ";port=" . $db_creds['port'] . ";dbname=" . $dbname;
$db = new PDO($dsn, $db_creds['user'], $db_creds['pass']);

Hopefully you can just borrow the code above and quickly get started with your own PHP/MySQL applications.

Before you go: was this helpful? confusing? annoying and you'd rather have it as a library you can just pull in with Composer (I can do that if I know you'd find it useful)? Please leave me a comment so I know!

Building BLOBs in MariaDB ColumnStore

My team and I are working on finalizing the feature set for MariaDB ColumnStore 1.1 right now and I wanted to take a bit of time to talk about one of the features I created for ColumnStore 1.1: BLOB/TEXT support.

For those who don’t know, MariaDB ColumnStore is a fork of InfiniDB which has been brought up to date by making it work with MariaDB 10.1 instead of MySQL 5.1, and which has many new features and bug fixes.

ColumnStore’s storage works by having columns of a fixed size of 1, 2, 4 or 8 bytes. These are then stored in 8KB blocks (everything in ColumnStore is accessed using logical block IDs) inside extents of ~8M rows. This is fine until you want to store some data that is longer than 8 bytes such as CHAR/VARCHAR.

To solve this for columns greater than VARCHAR(7) and CHAR(8) we have the concept of a dictionary column: a column of 8KB blocks containing the CHAR/VARCHAR data, plus an additional 8-byte-wide column that stores pointers to the values, called “tokens”.

The token has the following format (link to code):

struct Token {
    uint64_t op    : 10;  // ordinal position within a block
    uint64_t fbo   : 36;  // file block number
    uint64_t spare : 18;  // spare
};

So, we have a 10 bit “op” which contains a pointer to the offset inside a block, the 36 bit “fbo” block number (a logical block ID) which points to the block the data is stored in, and the spare 18 bits, which are used as a bitmask when the block is read and processed.

This is great, but it limits us to 8KB of data (minus some block header information) per block. ColumnStore is designed to read and write at the block level so trying to read something larger than 1 block would be a disaster. This is why ColumnStore limits CHAR/VARCHAR to 8000 bytes.

So, for 1.1 one of the first things I did was to change the token structure to this:

struct Token {
    uint64_t op  : 10;  // ordinal position within a block
    uint64_t fbo : 36;  // file block number
    uint64_t bc  : 18;  // block count
};

You can see here that the spare bits have been changed to a block count: the number of blocks an entry consumes. This means that an entry can now use roughly 2^18 * 8 KB = 2 GB. The actual figure is a little less than that because the blocks contain header information such as the length of the data in the block. Although this doesn’t meet the LONGBLOB/LONGTEXT specification of 4 GB, it is actually a lot more than the 1 GB MySQL/MariaDB protocol maximum for transferring a row.

With a bit of modification I was able to make sure this didn’t collide with the bitmasking when the blocks were read, and I wrote code to read more blocks and stitch them together when a block count was encountered.

A lot of the parts of ColumnStore needed modification so that the new data types were understood and passed correctly. For example when a storage engine returns BLOB/TEXT data to the MariaDB server it actually returns a pointer in memory to where the data is stored and the length of the data.

In addition, when serializing a row to send it to the various parts of ColumnStore, our string data storage could only cope with a maximum of 64 KB per entry. There have been two passes at improving this code. The first worked well but had a performance penalty for small strings. The second attempt is much better and does not have the performance penalty.

It is hard to estimate how much code had to be modified to add support for this feature because it was split over several pull requests but the initial pull request was a modification of 45 source code files.

So far the feature appears to work very well and has had a lot of testing internally. We have a remaining issue/feature I’m working on for BLOB/TEXT support so that ORDER BY works correctly for longer entries. This will hopefully make the first beta release.

If you want to try the BLOB/TEXT feature you can check out the develop branch of the ColumnStore Engine on GitHub. Please note that the develop branch should currently be considered Alpha quality.

With 1.1 we really will be able to store Big Data!

Image Credit: Eindhoven: Building of the “Blob” by harry_nl, used under a Creative Commons license


No more mysql.proc in MySQL 8.0

MySQL has thrown away the mysql.proc table for version 8.0 Development Releases.

The best explanation that I've seen is the one that Dmitry Lenev gave at a conference last October.

To summarize it: mysql.proc and its ilk are non-transactional, redundancy is bad, and MySQL can fix some known bugs by moving over to information_schema tables backed by InnoDB. Of course I approve for a separate reason: mysql.proc is non-standard and therefore it is a mistake.

On the other hand, programmers who have invested some time in using mysql.proc will have some trouble changing their code to use information_schema.routines instead.

Table definition differences

I did a complex left join of the information_schema.columns rows for mysql.proc (P) and for information_schema.routines (R) in MySQL 5.7, and saw this.

P_column_name         P_column_type         P_collation_name   R_column_name         R_column_type   R_collation_name
db                    char(64)              utf8_bin           ROUTINE_SCHEMA        varchar(64)     utf8_general_ci
name                  char(64)              utf8_general_ci    ROUTINE_NAME          varchar(64)     utf8_general_ci
type                  enum('FUNCTION','PRO  utf8_general_ci    ROUTINE_TYPE          varchar(9)      utf8_general_ci
specific_name         char(64)              utf8_general_ci    SPECIFIC_NAME         varchar(64)     utf8_general_ci
language              enum('SQL')           utf8_general_ci    EXTERNAL_LANGUAGE     varchar(64)     utf8_general_ci
sql_data_access       enum('CONTAINS_SQL',  utf8_general_ci    SQL_DATA_ACCESS       varchar(64)     utf8_general_ci
is_deterministic      enum('YES','NO')      utf8_general_ci    IS_DETERMINISTIC      varchar(3)      utf8_general_ci
security_type         enum('INVOKER','DEFI  utf8_general_ci    SECURITY_TYPE         varchar(7)      utf8_general_ci
param_list            blob                  NULL               NULL                  NULL            NULL
returns               longblob              NULL               NULL                  NULL            NULL
body                  longblob              NULL               NULL                  NULL            NULL
definer               char(77)              utf8_bin           DEFINER               varchar(77)     utf8_general_ci
created               timestamp             NULL               CREATED               datetime        NULL
modified              timestamp             NULL               LAST_ALTERED          datetime        NULL
sql_mode              set('REAL_AS_FLOAT',  utf8_general_ci    SQL_MODE              varchar(8192)   utf8_general_ci
comment               text                  utf8_bin           ROUTINE_COMMENT       longtext        utf8_general_ci
character_set_client  char(32)              utf8_bin           CHARACTER_SET_CLIENT  varchar(32)     utf8_general_ci
collation_connection  char(32)              utf8_bin           COLLATION_CONNECTION  varchar(32)     utf8_general_ci
db_collation          char(32)              utf8_bin           DATABASE_COLLATION    varchar(32)     utf8_general_ci
body_utf8             longblob              NULL               ROUTINE_DEFINITION    longtext        utf8_general_ci

Remember the above chart is for MySQL version 5.7.
For MySQL 8.0.2 these column definitions have changed:
ROUTINE_SCHEMA: was varchar(64) utf8_general_ci, will be varchar(64) utf8_tolower_ci
ROUTINE_TYPE: was varchar(9) utf8_general_ci, will be enum
DATA_TYPE: was varchar(9), will be longtext
EXTERNAL_LANGUAGE: was varchar(64), will be binary(0)
SQL_DATA_ACCESS: was varchar(64), will be enum
SECURITY_TYPE: was varchar(7), will be enum
CREATED: was datetime, will be timestamp
LAST_ALTERED: was datetime, will be timestamp
DEFINER: was varchar(77) utf8_general_ci, will be varchar(93) utf8_bin
CHARACTER_SET_CLIENT: was varchar(32), will be varchar(64)
COLLATION_CONNECTION: was varchar(32), will be varchar(64)
DATABASE_COLLATION: was varchar(32), will be varchar(64)
... and more changes are possible.

I have included the dirt about column data type and collation so that it's clear they are never exactly the same. This might affect applications that depend on exact size allocations and precise ordering. But usually it will make no difference to either programmers or end users.

What the chart shows is that mysql.proc.db corresponds to information_schema.routines.ROUTINE_SCHEMA, mysql.proc.name corresponds to information_schema.routines.ROUTINE_NAME, and so on. So if I had a simple SQL statement like

SELECT db, name FROM mysql.proc;

I could convert with ease to

SELECT ROUTINE_SCHEMA, ROUTINE_NAME FROM information_schema.routines;

(By the way I used name rather than specific_name because it's in the primary key; the value is the same.)

However, three mysql.proc columns -- param_list, returns, body -- have no corresponding columns in information_schema.routines. Converting them will be more work.

param_list and returns

Let's try

CREATE FUNCTION fx1(paramx1 INT, paramx2 DOUBLE) RETURNS CHAR(5) RETURN 'xxxxx';

Let's look at it via mysql.proc (whenever I show mysql.proc I'm using MySQL 5.7):

SELECT param_list, returns FROM mysql.proc WHERE name='fx1';

Result:

+-------------+------------------------+
| param_list  | returns                |
+-------------+------------------------+
| paramx1 int | char(5) CHARSET latin1 |
+-------------+------------------------+

Now let's look at it with

SELECT DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, CHARACTER_SET_NAME, DTD_IDENTIFIER FROM information_schema.routines WHERE routine_name='fx1';

Result:

+-----------+--------------------------+--------------------+----------------+
| DATA_TYPE | CHARACTER_MAXIMUM_LENGTH | CHARACTER_SET_NAME | DTD_IDENTIFIER |
+-----------+--------------------------+--------------------+----------------+
| char      |                        5 | latin1             | char(5)        |
+-----------+--------------------------+--------------------+----------------+

This isn't too bad -- all we have to do (with sql_mode='pipes_as_concat') is concatenate

DATA_TYPE || '(' || CHARACTER_MAXIMUM_LENGTH || ')' || ' CHARSET ' || CHARACTER_SET_NAME

or, even simpler,

DTD_IDENTIFIER || ' CHARSET ' || CHARACTER_SET_NAME

and we've got "char(5) CHARSET latin1", the same as what's in mysql.proc.returns. Using DTD_IDENTIFIER avoids complications with other data types so I'll always go with it.
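Put together as a runnable statement, the simpler version looks like this:

SET sql_mode = 'pipes_as_concat';
SELECT DTD_IDENTIFIER || ' CHARSET ' || CHARACTER_SET_NAME AS returns
  FROM information_schema.routines
 WHERE ROUTINE_NAME = 'fx1';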

It's trickier to find a substitute for param_list, because parameters aren't in information_schema.routines at all. We'd find them by saying

SELECT ORDINAL_POSITION, PARAMETER_MODE, PARAMETER_NAME, DTD_IDENTIFIER
FROM information_schema.parameters
WHERE specific_name = 'fx1';

+------------------+----------------+----------------+----------------+
| ORDINAL_POSITION | PARAMETER_MODE | PARAMETER_NAME | DTD_IDENTIFIER |
+------------------+----------------+----------------+----------------+
|                0 | NULL           | NULL           | char(5)        |
|                1 | IN             | paramx1        | int(11)        |
|                2 | IN             | paramx2        | double         |
+------------------+----------------+----------------+----------------+

We don't need to do anything with parameter #0 (it's just a copy of what's in information_schema.routines.returns); we only need to merge parameter #1 and parameter #2 into the main query. Like this (with sql_mode='pipes_as_concat'), but skipping the details we've already seen:

SELECT routine_name, routine_body,
       (SELECT group_concat(parameter_name || ' ' || dtd_identifier)
          FROM information_schema.parameters p
         WHERE p.specific_name = outertable.routine_name
           AND ordinal_position > 0) AS param_list
FROM information_schema.routines outertable
WHERE routine_name = 'fx1';

Result:

+--------------+--------------+--------------------------------+
| ROUTINE_NAME | ROUTINE_BODY | param_list                     |
+--------------+--------------+--------------------------------+
| fx1          | SQL          | paramx1 int(11),paramx2 double |
+--------------+--------------+--------------------------------+

In other words, we can get param_list from information_schema.routines by adding a subquery that accesses information_schema.parameters. Notice the assumption that the list will be ordered; I'm depending on a quirk.

body

Let's try:

CREATE PROCEDURE px1() SELECT _latin1 0xe3;
SELECT body, body_utf8, _latin1 0xe3 FROM mysql.proc WHERE name = 'px1';

Result:

+---------------------+-------------+--------------+
| body                | body_utf8   | _latin1 0xe3 |
+---------------------+-------------+--------------+
| SELECT _latin1 0xe3 | SELECT 0xe3 | ã            |
+---------------------+-------------+--------------+

Now let's try:

CREATE PROCEDURE px2() SELECT 'abc''def';
SELECT body, body_utf8 FROM mysql.proc WHERE name = 'px2';

Result:

+-------------------+------------------+
| body              | body_utf8        |
+-------------------+------------------+
| SELECT 'abc''def' | SELECT 'abc'def' |
+-------------------+------------------+

So you can see that body and body_utf8 are different. In a sense, both are correct -- body is what you want if you are going to make a copy of the routine, body_utf8 is what you want if you want to see what the output would look like if you invoked the routine. So it's pretty useful that mysql.proc has both.

Unfortunately, information_schema.routines does not. It has no equivalent of body. It only has an equivalent of body_utf8.

Loyal readers may recall that I've talked before about the possible information losses when making everything UTF8 but this is even worse. Without an equivalent of mysql.proc.body, you cannot reliably make exact copies of routines even if they are in UTF8.

Privilege differences

Actually the change amounts to more than just the differences between the definitions of the tables. There's also the fact that PRIVILEGES are different -- you can GRANT on mysql.* tables, you cannot GRANT on information_schema.* tables. And what users can see differs depending on how granting is done.

Therefore, for all users who currently hold a SELECT privilege on mysql.proc, we are going to have to work around the problem that there is no way to grant the exact same privilege on information_schema.routines. In other words, if the old (MySQL 5.7) statement was

GRANT SELECT ON mysql.proc TO 'jean'@'localhost';

To be the same as that, you need a way to let jean see all the columns in all the routines, but not anything other than the routines. This is possible with the DEFINER clause in routines and views. For example, assuming 'root'@'localhost' is a powerful user:

CREATE DEFINER = 'root'@'localhost' SQL SECURITY DEFINER VIEW v_routines AS
  SELECT * FROM information_schema.routines;
GRANT SELECT ON v_routines TO 'jean'@'localhost';

SHOW

Quote from MySQL 5.7 manual describing SHOW CREATE PROCEDURE and SHOW CREATE FUNCTION:
"To use either statement, you must be the user named in the routine DEFINER clause or have SELECT access to the mysql.proc table."

Quote from MySQL 8.0 manual describing SHOW CREATE PROCEDURE and SHOW CREATE FUNCTION:
"To use either statement, you must have the global SELECT privilege."

Essentially, the privilege requirement in 5.7 is what's needed for looking at mysql.proc, but the privilege requirement in 8.0 is what's needed for looking at information_schema.routines.

But this time we can't work around by creating a view. If I'm interpreting "global SELECT privilege" correctly, the expectation is that if you want to say SHOW CREATE PROCEDURE, you need

GRANT SELECT ON *.* TO user_who_wants_to_show_create_procedure;

Either I'm misinterpreting, or MySQL is demanding that you grant a very broad privilege for a very narrow requirement.

This is too bad because, though SHOW statements are junk, this one will be necessary in MySQL 8.0. That's because it has the body value right: it does not turn SELECT 'abc''def' into SELECT 'abc'def' and so on. Thus, it is the only way to get the equivalent of MySQL 5.7's mysql.proc.body value. Using a connector, you can put this correct value into another table with something like this, leaving out details:

mysql_real_query("SHOW PROCEDURE p;") mysql_fetch_row() /* there's only going to be one row at most */ mysql_fetch_field() /* the third field is `Create procedure` */ mysql_real_query("UPDATE copy_of_routines") /* set body value */ Other ways to get lists of routines

Although MySQL 8.0 is missing mysql.proc, it isn't missing all the grant-related tables (yet). So you can still say

SELECT * FROM mysql.procs_priv;

Not only that, you can still use SHOW PROCEDURE STATUS -- and then put the results in a @variable! As far as I know this is undocumented, but it's been around for years and nobody has bothered to disallow it.
Try this:

SET @procedurelist = '';
SHOW PROCEDURE STATUS WHERE (@procedurelist := CONCAT(@procedurelist, `Name`, ','));
SELECT @procedurelist;

Result:

+----------------------------------------------------- ... | create_synonym_db,diagnostics,execute_prepared_stmt, ... +----------------------------------------------------- ...

Demonstrably, the @procedurelist variable now has a list of all procedures. Its only recommended use is to show unbelievers that with MySQL all things are possible.

Effect on ocelotgui debugger

As you might have guessed by now, we have a program that uses mysql.proc, namely the Ocelot open source GUI client for MySQL and MariaDB (ocelotgui). Well, one of its feature components is a debugger for MySQL stored routines, and you can see all the "mysql.proc" references in our source code for that module.

Our plan is: wait and see if MySQL makes the transition easier. But if that doesn't happen, within a few months we'll change ocelotgui to implement all the methods that I've described. So if you want to see how it's done in detail, with an actual working application, just watch the code related to mysql.proc until it changes. It's on github.

ConFoo Montreal 2018: Call for Papers is Now Open

The ConFoo team is pleased to announce the opening of the call for papers for its Montreal event.

To celebrate its 15th anniversary, ConFoo will be focusing its 2018 Montreal edition on “Experiment & Learn”. Indeed, as Montreal is becoming one of the world's emerging centers of advanced AI research, disrupting the global innovation scene, we are particularly interested in gathering and exchanging ideas with developers in the AI community.

If you live in Montreal, you can be a proud ambassador of this innovative mindset that pervades the developers’ ecosystem. If you come from elsewhere, you obviously have a passion for data and machine learning, model development, and the various industries where you can apply your knowledge (healthcare, finance, automotive…).

Don’t miss this unique opportunity to share your experience with passionate developers. Whether in artificial intelligence or other development areas, you can be part of a gratifying experience.

Talks will be 45 minutes, including 35 minutes for the main topics followed by 10 minutes for Q&A.

You have until September 20th to submit your ideas. Don’t wait until the last minute.

If you prefer to attend and learn from field experts, it’s the perfect time to register and save $320. Our blind bird rate is valid until October 13th.

How to store IP (internet protocol) address in MySQL?

An Internet Protocol (IP) address is a special kind of data. It does not correspond directly to any of MySQL's built-in data types. On many servers I have seen the IP address stored in CHAR(12), VARCHAR(15), TEXT and so on.

Dotted-quad notation is just a way of writing the address for better readability; it is not the original form of the raw data. IPv4 addresses are UNSIGNED INTEGERS, not strings.

So the question is: how can we store a dotted IP address in an integer column? MySQL provides built-in functions for this conversion. They are shown below:

For IPv4 addresses:

INET_ATON()

mysql> select inet_aton('127.0.0.1');
+------------------------+
| inet_aton('127.0.0.1') |
+------------------------+
|             2130706433 |
+------------------------+
1 row in set (0.11 sec)

INET_NTOA()

mysql> select inet_ntoa('2130706433');
+-------------------------+
| inet_ntoa('2130706433') |
+-------------------------+
| 127.0.0.1               |
+-------------------------+
1 row in set (0.02 sec)
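Putting this together, a table holding IPv4 addresses would use an INT UNSIGNED column. A small sketch; the user table here is hypothetical but matches the queries in the note further down:

CREATE TABLE user (
  id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  name      VARCHAR(64) NOT NULL,
  ipaddress INT UNSIGNED NOT NULL,   -- 4 bytes instead of up to 15 characters
  KEY idx_ipaddress (ipaddress)
);

INSERT INTO user (name, ipaddress) VALUES ('alice', INET_ATON('127.0.0.1'));
SELECT name, INET_NTOA(ipaddress) FROM user;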

For IPv6 addresses:
MySQL provides the following built-in functions to store and retrieve IPv6 addresses. IPv6 addresses are not stored as integers, since a numeric-format IPv6 address requires more bytes than an UNSIGNED BIGINT can hold. Instead, the functions below work with VARBINARY(16) values. Let us see an example.
mysql> select hex(inet6_aton('127.0.0.1'));
+------------------------------+
| hex(inet6_aton('127.0.0.1')) |
+------------------------------+
| 7F000001                     |
+------------------------------+
1 row in set (0.00 sec)

mysql> select hex(inet6_aton('2001:0db8:85a3:0000:0000:8a2e:0370:7334'));
+-------------------------------------------------------------+
| hex(inet6_aton('2001:0db8:85a3:0000:0000:8a2e:0370:7334'))  |
+-------------------------------------------------------------+
| 20010DB885A3000000008A2E03707334                            |
+-------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> select inet6_ntoa(unhex('20010DB885A3000000008A2E03707334'));
+--------------------------------------------------------+
| inet6_ntoa(unhex('20010DB885A3000000008A2E03707334'))  |
+--------------------------------------------------------+
| 2001:db8:85a3::8a2e:370:7334                           |
+--------------------------------------------------------+
1 row in set (0.00 sec)

Note:
Assume you are writing a query to find the users who connected from the internet protocol address 127.0.0.1. You might write the following query:
SELECT name FROM user WHERE inet_ntoa(ipaddress) = '127.0.0.1';
Be aware that this query will not use the index created on the ipaddress column, since we are applying a function to the indexed column during execution; it also converts the integer into an IP address row by row. The workaround:
SET @ip = inet_aton('127.0.0.1');
SELECT name FROM user WHERE ipaddress = @ip;

(or)

SELECT name FROM user WHERE ipaddress = inet_aton('127.0.0.1');


Now if you check the query execution plan, you will see that the query uses the index on the ipaddress column. I hope this post helps you. Please share your views in the comments section, and if you have other, better methods, please share those as well.

Galera Cluster: All the Severalnines Resources

Galera Cluster is a true multi-master cluster solution for MySQL and MariaDB, based on synchronous replication. Galera Cluster is easy to use and provides high availability, as well as scalability for certain workloads.

ClusterControl provides advanced deployment, management, monitoring, and scaling functionality to get your Galera clusters up-and-running using proven methodologies.

Here are just some of the great resources we’ve developed for Galera Cluster over the last few years...

Tutorials Galera Cluster for MySQL

Galera allows applications to read and write from any MySQL server. Galera enables synchronous replication for InnoDB, creating a true multi-master cluster of MySQL servers, and allows for synchronous replication between data centers. Our tutorial covers MySQL Galera concepts and explains how to deploy and manage a Galera cluster.

Read the Tutorial

Deploying a Galera Cluster for MySQL on Amazon VPC

This tutorial shows you how to deploy a multi-master synchronous Galera Cluster for MySQL with Amazon's Virtual Private Cloud (Amazon VPC) service.

Read the Tutorial

Training: Galera Cluster For System Administrators, DBAs And DevOps

The course is designed for system administrators & database administrators looking to gain more in-depth expertise in the automation and management of Galera Clusters.

Book Your Seat

On-Demand Webinars MySQL Tutorial - Backup Tips for MySQL, MariaDB & Galera Cluster

In this webinar, Krzysztof Książek, Senior Support Engineer at Severalnines, discusses backup strategies and best practices for MySQL, MariaDB and Galera clusters; including a live demo on how to do this with ClusterControl.

Watch the replay

9 DevOps Tips for Going in Production with Galera Cluster for MySQL / MariaDB

In this webinar replay, we guide you through 9 key tips to consider before taking Galera Cluster for MySQL / MariaDB into production.

Watch the replay

Deep Dive Into How To Monitor MySQL or MariaDB Galera Cluster / Percona XtraDB Cluster

Our colleague Krzysztof Książek provided a deep-dive session on what to monitor in Galera Cluster for MySQL & MariaDB. Krzysztof is a MySQL DBA with experience in managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.

Watch the replay

Become a MySQL DBA - webinar series: Schema Changes for MySQL Replication & Galera Cluster

In this webinar, we discuss how to implement schema changes in the least impacting way to your operations and ensure availability of your database. We also cover some real-life examples and discuss how to handle them.

Watch the replay

Migrating to MySQL, MariaDB Galera and/or Percona XtraDB Cluster

In this webinar, we walk you through what you need to know in order to migrate from standalone or a master-slave MySQL / MariaDB setup to Galera Cluster.

Watch the replay

Introducing Galera 3.0

In this webinar you'll learn all about the new Galera Cluster capabilities in version 3.0.

Watch the replay

Top Blogs MySQL on Docker: Running Galera Cluster on Kubernetes

In our previous posts, we showed how one can run Galera Cluster on Docker Swarm, and discussed some of the limitations with regards to production environments. Kubernetes is widely used as an orchestration tool, and we’ll see whether we can leverage it to achieve production-grade Galera Cluster on Docker.

Read More

ClusterControl for Galera Cluster for MySQL

Galera Cluster is widely supported by ClusterControl. With over four thousand deployments and more than sixteen thousand configurations, you can be assured that ClusterControl is more than capable of helping you manage your Galera setup.

Read More

How Galera Cluster Enables High Availability for High Traffic Websites

This post gives an insight into how Galera can help to build HA websites.

Read More

How to Set Up Asynchronous Replication from Galera Cluster to Standalone MySQL server with GTID

Hybrid replication, i.e. combining Galera and asynchronous MySQL replication in the same setup, became much easier since GTID got introduced in MySQL 5.6. In this blog post, we will show you how to replicate a Galera Cluster to a MySQL server with GTID, and how to failover the replication in case the master node fails.

Read More

Full Restore of a MySQL or MariaDB Galera Cluster from Backup

Performing regular backups of your database cluster is imperative for high availability and disaster recovery. This blog post provides a series of best practices on how to fully restore a MySQL or MariaDB Galera Cluster from backup.

Read More

How to Bootstrap MySQL or MariaDB Galera Cluster

Unlike standard MySQL server and MySQL Cluster, the way to start a MySQL or MariaDB Galera Cluster is a bit different. Galera requires you to start a node in a cluster as a reference point before the remaining nodes are able to join and form the cluster. This process is known as cluster bootstrap. Bootstrapping is an initial step to introduce a database node as the primary component, before others see it as a reference point to sync up data.

Read More

Schema changes in Galera cluster for MySQL and MariaDB - how to avoid RSU locks

This post shows you how to avoid locking existing queries when performing rolling schema upgrades in Galera Cluster for MySQL and MariaDB.

Read More

Deploy an asynchronous slave to Galera Cluster for MySQL - The Easy Way

Due to its synchronous nature, Galera performance can be limited by the slowest node in the cluster. So running heavy reporting queries or making frequent backups on one node, or putting a node across a slow WAN link to a remote data center, might indirectly affect cluster performance. Combining Galera and asynchronous MySQL replication in the same setup, aka Hybrid Replication, can help.

Read More

Top Videos ClusterControl for Galera Cluster - All Inclusive Database Management System

Watch the Video

Galera Cluster - ClusterControl Product Demonstration

Watch the Video

ClusterControl for Galera

ClusterControl makes it easy for those new to Galera to use the technology and deploy their first clusters. It centralizes the database management into a single interface. ClusterControl automation ensures DBAs and SysAdmins make critical changes to the cluster efficiently with minimal risks.

ClusterControl delivers on an array of features to help manage and monitor your open source database environments:

  • Deploy Database Clusters
  • Add Node, Load Balancer (HAProxy, ProxySQL) or Replication Slave
  • Backup Management
  • Configuration Management
  • Full stack monitoring (DB/LB/Host)
  • Query Monitoring
  • Enable SSL Encryption for Galera Replication
  • Node Management
  • Developer Studio with Advisors

Learn more about how ClusterControl can help you drive high availability with Galera Cluster here.

We hope that these resources prove useful!

Happy Clustering!

Tags: galera, mariadb cluster, galera cluster, MySQL, percona xtradb cluster

How ProxySQL deals with schemas (and schemaname)

I think that very often we are so focused on analyzing internals, or specific behaviours/situations/anomalies, that we tend to forget the simple things.

It happened that last week a couple of customers raised the same question: "How does ProxySQL manage the default schema, or the ones declared inside a FROM/JOIN?"
I was a bit surprised, because I had taken that for granted, and my first thought was 'well, read the documentation' -- but then I realized we do not have a clear section in the documentation about this.

Given that, and because I realized I had not done a full and extensive test of how the SCHEMA is actually managed, I decided to do a simple set of tests and write down a few lines.


This blog answers that very simple question: "How does ProxySQL manage the default schema, or the ones declared inside a FROM/JOIN?"
The blog is split in two parts: part 1 is a simple declaration and summary of what happens; part 2 has all the details and tests, in case you want to see them.


Schemaname and ProxySQL


In ProxySQL we can specify the schema in two different places, with different scopes.

  1. In the mysql_users table, as a property of the USER, where it represents the DEFAULT schema.
  2. In mysql_query_rules, as a filter for which queries can be captured. The filter applies only to the explicitly set default SCHEMA (i.e., set with the -D mysql command-line option or with USE); ProxySQL will NOT analyze the FROM clause of the SQL. Given that limitation, it is not 100% safe to trust SCHEMANAME as a filter unless you are absolutely sure the code contains no commands, such as USE, that change the default schema.

On the other hand, if I want to filter by a schema name in general, it is safer and more flexible to use regular expressions and query rules, as I will show later.


For the scope of this article I want to answer these simple sub-questions:

  1. How does ProxySQL differ from MySQL in managing an explicit default schema declaration? Does it respect -D or USE?
  2. How does the proxy respect/follow security (grants) against schemas?
  3. How does the schemaname filter act in the query rules?
  4. How can I transparently redirect using the schema name?

To test the above I have created:

Two servers:
  • Master
  • Slave

Two schemas:
  • world (tables City, Country)
  • myworld (tables CityM, CountryM)

Three users:
  • uallworld: can access all the schemas (including test).
  • uworld: can access world in write/read on the Master and read on the slave; can access myworld in read on the slave.
  • umyworld: can access myworld in write/read on the Master and read on the slave; can access world in read on the slave.

Queries used during the tests:

select database();
update world.City set Population=10500000 where ID=1024;
update world.Country set LifeExpectancy=62.5 where Code='IND';
update myworld.CityM set Population=10500001 where ID=1024;
update myworld.CountryM set LifeExpectancy=0 where Code='IND';
Select * from world.City order by Population DESC limit 5;
Select * from myworld.CityM order by Population DESC limit 5;
Select City.*, Country.name, Country.LifeExpectancy from world.City as City join world.Country as Country on City.CountryCode=Country.code order by Population DESC limit 5;
Select City.*, Country.name, Country.LifeExpectancy from myworld.CityM as City join myworld.CountryM as Country on City.CountryCode=Country.code order by Population DESC limit 5;
Select City.*, Country.name, Country.LifeExpectancy from world.City as City join myworld.CountryM as Country on City.CountryCode=Country.code order by Population DESC limit 5;


To set up the environment, see the instructions in Annex 1.

Short story


  1. How does ProxySQL differ from MySQL in managing an explicit default schema declaration? Does it respect -D or USE?
    MySQL and ProxySQL behave the same when a default schema is passed, setting it as the default.
    MySQL:
    mysql -uuallworld -ptest -h192.168.1.107 -P 3306 -D test

    ProxySQL:
    mysql -uuallworld -ptest -h127.0.0.1 -P 6033 -D test

    If a default schema is set in ProxySQL, the schema coming from the command line or from the connection string (as in Java: "connUrl=jdbc:mysql://192.168.1.50:6033/test") will override the ProxySQL default.
    If a default schema is not passed during the connection, MySQL and ProxySQL differ in how they behave:
           MySQL will set the current schema to NULL. Note that MySQL accepts a NULL schema when you connect, but once you set it with USE you cannot set it back to NULL.
           ProxySQL will set it to the one declared as default in the mysql_users table. If no schema is declared as default, ProxySQL will elect information_schema as the default. In short, ProxySQL cannot have a default schema set to NULL.


  2. How does the proxy respect/follow security (grants) against schemas?
    MySQL:
    mysql -uuworld -ptest -h192.168.1.107 -P 3306 -D test
    ERROR 1044 (42000): Access denied for user 'uworld'@'%' to database 'test'

    ProxySQL:
    mysql -uuworld -ptest -h127.0.0.1 -P 6033 -D test
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 483902 Server version: 5.5.30 (ProxySQL)

    On connection, the behaviour is different between MySQL and ProxySQL.
    Why? Because while you connect directly to MySQL, when you connect to ProxySQL you do not actually touch the final server.

    ProxySQL is NOT a forward proxy but a reverse proxy, so its role is to act as an intermediary, contacted by any client on behalf of its associated servers. The backend connection that will serve your client is established at the moment you actually send a query, which is then routed to the relevant host group.

    As such, when you connect, you only open a connection to ProxySQL. Issuing a query will use a connection to the backend, and if the user does not have the right grants an error will be returned.
    ProxySQL cannot know this until you submit the query, at which point it can decide where the query should go (which HG to point to).
    mysql> select database();
    ERROR 1044 (42000): Access denied for user 'uworld'@'%' to database 'test'

    Aside from this, all the GRANTS defined in MySQL are transparent to, and followed by, ProxySQL.

  3. How does the schemaname filter act in the query rules?

    In MySQL we can easily change the default schema with USE, and this action is fully supported by ProxySQL.
    But it may have side effects when using "schemaname" as a filter in the query rules.

    If you define a rule that includes the default schemaname and the default schema is then changed with USE, the rule will not apply, and unpredictable results may happen.
    To prevent that, ProxySQL has another option in mysql_users, "schema_locked", which prevents the schema from being changed with USE.

    This feature is present but not fully implemented; after a brief conversation with Rene (https://github.com/sysown/proxysql/issues/1133), I hope it soon will be.

    Given that, when designing query rules that use the default schema, you must take into consideration the possibility that the application or a user changes the default schema and invalidates that rule.


  4. How can I transparently redirect using the schema name?

    This is not really a schema feature; it is more one of the things that are quite easy to set up in ProxySQL while being close to impossible in plain MySQL.
    When connecting directly to MySQL, there is no way to "play" with GRANTS and schemas such that a user transparently performs one action on one schema/server pair and a different action on another schema/server pair.
    With ProxySQL, filtering by schemaname is quite trivial.
    For instance, assume we have three users. One is the admin of the platform, which includes two (or more) separate schemas; each other user can write to one schema (this could also be done per table); and there is a final slave with reporting information, where every user that needs to read from a schema other than its own can read across schemas, while all non-cross-schema SELECTs must still go to the Master.
    This is not so uncommon; with a few variants it is exactly what one of the customers I spoke with last week needs to do.
    In MySQL this is impossible, while in ProxySQL it is just a matter of 3 rules:
        insert into mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) values(10,'uworld',10,1,3,'^SELECT.*FOR UPDATE',1);
        insert into mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply,FlagOUT,FlagIN) values(11,'uworld',10,1,3,'^SELECT ',0,50,0);
        insert into mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply,FlagOUT,FlagIN) values(12,'uworld',11,1,3,'\smyworld.',1,50,50);
    Simply applying the rules above allows the application to transparently access the data from different servers without errors, following the GRANTS given on the MySQL side. This is for ONE user (uworld), but it can eventually be extended to any user, and the rule chain is minimal (just two rules per query), so the overhead is small even with high traffic.



Summarizing

ProxySQL follows the MySQL model for schema access in most respects. There are a couple of differences, though:

     ProxySQL requires a default schema to be set when connecting, implicitly or explicitly.

     ProxySQL will not return an error at connection time if a user is not authorized to access the given schema. The error is raised at the first query, the moment ProxySQL actually establishes the backend connection.

Finally, using ProxySQL allows administrators to play with GRANTS and HG/server pairs to provide transparent access to data in a more granular way, choosing an HG where the user has read (or other specific) access against one where the user is not allowed at all.


Annex 1

Create environment for test

create schema myworld;
create table myworld.CityM like City;
create table myworld.CountryM like Country;
create table myworld.CountryLanguageM like CountryLanguage;
insert into myworld.CityM select * from City;
insert into myworld.CountryM select * from Country;
insert into myworld.CountryLanguageM select * from CountryLanguage;

delete from mysql_users where username like '%world';
insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) values ('uworld','test',1,10,'world',1);
insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) values ('umyworld','test',1,10,'myworld',1);
insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) values ('uallworld','test',1,10,'test',1);

LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;

delete from mysql_servers where hostgroup_id in (10,11,20,21);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,comment) VALUES ('192.168.1.107',10,3306,100,'master');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,comment) VALUES ('192.168.1.109',11,3307,100,'slave');

LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;

delete from mysql_replication_hostgroups;
INSERT INTO mysql_replication_hostgroups VALUES (10,11,'world-myworld replication hgroup');
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;

-- DO NOT RUN at the beginning: this is for test 3!
delete from mysql_query_rules where rule_id in (10,11,12,13,14,15);
insert into mysql_query_rules (rule_id,username,schemaname,destination_hostgroup,active,retries,match_digest,apply) values(10,'uworld','world',10,1,3,'^SELECT.*FOR UPDATE',1);
insert into mysql_query_rules (rule_id,username,schemaname,destination_hostgroup,active,retries,match_digest,apply) values(11,'uworld','world',11,1,3,'^SELECT ',1);
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;

-- DO NOT RUN at the beginning: this is for test 4!
delete from mysql_query_rules where rule_id in (10,11,12,13,14,15);

Let's see what we need and how to do it:
1) user(s) uworld & umyworld need to go to their default schema on the Master for writes.
2) user(s) uworld & umyworld should go to their default schema on the Master for direct reads.
3) user(s) uworld & umyworld should go to the slave for reads when the other schema is used.

To do this we will need the following rules:
insert into mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) values(10,'uworld',10,1,3,'^SELECT.*FOR UPDATE',1);
insert into mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply,FlagOUT,FlagIN) values(11,'uworld',10,1,3,'^SELECT ',0,50,0);
insert into mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply,FlagOUT,FlagIN) values(12,'uworld',11,1,3,'myworld.',1,50,50);

insert into mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) values(13,'umyworld',10,1,3,'^SELECT.*FOR UPDATE',1);
insert into mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply,FlagOUT,FlagIN) values(14,'umyworld',10,1,3,'^SELECT ',0,50,0);
insert into mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply,FlagOUT,FlagIN) values(15,'umyworld',11,1,3,'\sworld.',1,50,50);

LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;

GRANTS
------
grant all on *.* to uallworld@'%' identified by 'test';
grant all on world.* to uworld@'%' identified by 'test';
grant all on myworld.* to umyworld@'%' identified by 'test';

On the slave:
REVOKE ALL ON *.* FROM 'uworld'@'%';
REVOKE ALL ON *.* FROM 'umyworld'@'%';
grant select on myworld.* to uworld@'%' identified by 'test';
grant select on world.* to umyworld@'%' identified by 'test';

To monitor what is happening:
watch -n 1 'mysql -h 127.0.0.1 -P 6032 -uadmin -padmin -t -e "select * from stats_mysql_connection_pool where hostgroup < 30 order by hostgroup,srv_host desc;" -e "select srv_host,command,avg(time_ms), count(ThreadID) from stats_mysql_processlist group by srv_host,command;" -e "select * from stats_mysql_users;"; mysql -h 127.0.0.1 -P 6032 -uadmin -padmin -t -e "select * from stats_mysql_global "|egrep -i "(mirror|memory|stmt)"'

select active,hits,destination_hostgroup, mysql_query_rules.rule_id, match_digest, match_pattern, replace_pattern, cache_ttl, apply,flagIn,flagOUT FROM mysql_query_rules NATURAL JOIN stats.stats_mysql_query_rules where destination_hostgroup < 30 ORDER BY mysql_query_rules.rule_id;
select * from stats_mysql_query_digest;
select * from stats_mysql_query_digest_reset;

Using MySQL Connector/Python X (mysqlx module)

This post is about simple usage of the mysqlx module, i.e. the X Plugin/X DevAPI, with the latest Connector/Python DMR.
The Connector/Python version used is 8.0.4.

Installing:

wget https://dev.mysql.com/get/Downloads/Connector-Python/mysql-connector-python-8.0.4-0.1.dmr.el7.x86_64.rpm
sudo yum install mysql-connector-python-8.0.4-0.1.dmr.el7.x86_64.rpm
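
Note that the mysqlx module talks to the server over the X Protocol (port 33060 by default), so the X Plugin must be enabled on the server side. If it is not already loaded (whether it is depends on your server version and packaging), it can be installed with:

mysql> INSTALL PLUGIN mysqlx SONAME 'mysqlx.so';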

Sample Python code:

# Connecting to MySQL and working with a Session
import mysqlx

# Connect to a dedicated MySQL server
session = mysqlx.get_session({
    'host': 'localhost',
    'port': 33060,
    'user': 'bakux',
    'password': 'Baku12345',
    'ssl-mode': mysqlx.SSLMode.DISABLED
})

schema = session.get_schema('generated_columns_test')

# Create 'my_collection' in schema
schema.create_collection('my_collection')

# Get 'my_collection' from schema
collection = schema.get_collection('my_collection')

assert(True == collection.exists_in_database())

# You can also add multiple documents at once
collection.add(
    {'_id': '2', 'name': 'Sakila', 'age': 15},
    {'_id': '3', 'name': 'Jack', 'age': 15},
    {'_id': '4', 'name': 'Clare', 'age': 37}
).execute()

collection.remove('_id = 1').execute()

assert(3 == collection.count())

The code is quite clean: basically, we have created a collection and inserted some documents.
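
Reading the documents back is just as compact. A minimal sketch, assuming the session and collection objects created above:

# Fetch the documents with age 15, using a bound parameter
result = collection.find('age = :age').bind('age', 15).execute()
for doc in result.fetch_all():
    print(doc['name'], doc['age'])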

From the MySQL client side, the structure of this "table" (i.e. the collection) is:

CREATE TABLE `my_collection` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  PRIMARY KEY (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

The collection has a stored generated column, _id, which is extracted from the doc column.

select * from my_collection;
+-------------------------------------------+-----+
| doc                                       | _id |
+-------------------------------------------+-----+
| {"_id": "2", "age": 15, "name": "Sakila"} | 2   |
| {"_id": "3", "age": 15, "name": "Jack"}   | 3   |
| {"_id": "4", "age": 37, "name": "Clare"}  | 4   |
+-------------------------------------------+-----+
3 rows in set (0.00 sec)

This default behavior restricted my tests with other storage engines, because it is impossible to alter the engine of this default "collection":
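
For example, an attempt along these lines (MyISAM is just an illustrative target engine) fails with the error below:

mysql> ALTER TABLE my_collection ENGINE=MyISAM;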

ERROR 3106 (HY000): 'Specified storage engine' is not supported for generated columns.

By the way, while testing I found and reported some bugs:

https://bugs.mysql.com/bug.php?id=87472
https://bugs.mysql.com/bug.php?id=87475
https://bugs.mysql.com/bug.php?id=87492
https://bugs.mysql.com/bug.php?id=87493


Fun with Bugs #55 - On Some Public Bugs Fixed in MySQL 8.0.2

I do not care much about MySQL 8.0.x at the moment, as it's far from GA and still work in progress. It is not yet used by the customers whom I have to support. But I know about many interesting changes and improvements there that, eventually, are going to influence all main forks and kinds of MySQL. So it would not be wise, even for me, to ignore MySQL 8.0.x entirely.

For this post I decided to briefly check which community-reported bugs were fixed in the recent 8.0.2 release, based on the release notes. For me it's a measure of community interest in MySQL 8.0.x, and of Oracle's interest in continuing to work with the MySQL Community. I ended up with the following, short enough, list of bug fixes in the categories I usually care about (InnoDB, partitioning, replication and optimizer):
  • The very first InnoDB bug mentioned in the release notes, Bug #85043, is private. I fail to see any valid reason for a bug in a version that is still under development and not declared GA to remain private after the fix is released, unless it also affects GA versions, and that is the case here: the bug is fixed in 5.7.19 as well, as you can see in my previous post.
  • Another bug that is related to InnoDB and optimizer, is Bug #81031. It was also fixed in MySQL 5.6.37 and 5.7.19.
  • Bug #84038 - "Errors when restarting MySQL after FLUSH TABLES FOR EXPORT, RENAME and DROP", was also fixed in MySQL 5.7.19. I am actually surprised that at this stage we still have the older InnoDB internal data dictionary tables in MySQL 8.0.x.
  • Group replication related Bug #85667, Bug #85047, Bug #84728  and Bug #84733 were also listed as fixed in MySQL 5.7.19.
  • Same situation with normal async replication bugs: Bug #83184, Bug #82283, Bug #81232, Bug #77406 etc. It's expected to see fixes applied to the oldest version affected and then fixes merged to newer versions.
  • The first really unique fix in 8.0.2 that I found was Bug #85639 - "XA transactions are 'unsafe' for RPL using SBR". It was reported by João Gramacho (who probably works for Oracle) originally for MySQL 5.7 and is going to be fixed in MySQL 5.7.20 also.
  • Replication-related Bug #85739 is still private. Release notes say:
    "Issuing SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery' following a restart of a server using group replication led to an unplanned shutdown."
  • Yet more private replication bugs fixed in 8.0.2: Bug #85405, Bug #85084, Bug #84646, Bug #84471, Bug #82467 and Bug #80368. I do not know who reported them or which versions are affected (I suspect 5.7.x also), nor why they remain private after being fixed in 8.0.2.
  • The first 8.0.x specific public bug report I've found in the release notes was reported by Andrey Hristov and verified by Umesh Shastry. It is Bug #85937 - "Unchecked read after allocated buffer".
  • Bug #86120 - "Upgrading a MySQL instance with a table name of >64 chars results in breakage", was reported by Daniël van Eeden and verified by Umesh Shastry. I expect a lot of problem reports when users start to upgrade to 8.0... See also Bug #84889 - "MYSQL 8.0.1 - MYSQLD ERRORLOG UPGRADE ERRORS AT SERVER START LIVE UPGRADE", by Susan Koerner.
  • Bug #85994 - "Out-of-bounds read in mysqld.cc fix_paths", was reported by Laurynas Biveinis and verified by Umesh Shastry. Percona seems to care a lot about improving MySQL 8.0 (as well as other Oracle MySQL GA versions). See also Bug #85678 by Laurynas and Bug #85059 by Roel Van de Paar.
  • Jon Olav Hauglid seems to care about the new data dictionary code quality, so he had reported related public bugs: Bug #85811, Bug #85800, and Bug #83473.
  • Bug #85704 - "mysql 8.0.x crashes when old-style trigger misses the "created" line in .TRG", was reported by Shane Bester. Workaround was also suggested by Jesper Krogh. Shane also reported this nice regression Bug #83019 - "queries in "show processlist" oscillate with constant times higher each day", that is now fixed in 8.0.2.
  • Bug #85614 - "alter table fails when default character set changes to utf8mb4", was reported by Tor Didriksen. He had also reported Bug #85224 - "Illegal mix of collations for time/varchar".
  • Bug #85518 - "Distinct operations on temp tables allocate too little memory for sort keys", was reported by Steinar Gunderson. He had also reported Bug #85487 - "num_tmp_files in filesort optimizer trace is nonsensical".
  • Bug #85179 - "Assert in sql/field.cc:... virtual String* Field_varstring::val_str", was reported by Matthias Leich.
I skipped several bugs that were also fixed in older versions; many of them were already discussed in my posts. I also skipped all build/compilation/packaging bugs for now.

To summarize: while the total number of public bug reports fixed in MySQL 8.0.2 is notable, many of these bugs were reported by a few Oracle engineers who are still brave enough to report bugs in public. From the Community, it seems that mostly Percona and Booking.com engineers care to check MySQL 8.0.x at this early stage and report bugs. I am especially concerned by the number of private bug reports mentioned in the release notes of 8.0.2...

Migrating/importing NDB to Cluster Manager w/ version upgrade.

I've had some questions from people using MySQL Cluster GPL who want to move to MySQL Cluster Carrier Grade Edition, i.e., they want to use MySQL Cluster Manager (MCM) to make their lives much easier, in particular for upgrades (as well as for ease of config changes and backup history).

All I want to do here is share with you my personal experience of migrating what's considered a 'wild' NDB Cluster to an MCM-managed cluster. It's just as simple to follow the manual chapter Importing a Cluster into MySQL Cluster Manager, but this way at least you can see how I did it, and it might help someone.

[ If you’re not migrating but just looking for further information on NDB Cluster, and came across this post, please please PLEASE look at the MySQL Cluster: Getting Started page, in particular the Evaluation Guide. ]

So, what’s the scenario:

  • NDB 7.3 (migrating to 7.4)
  • Centos 6.5
  • 4 vm’s / 4 servers: 2x datanodes (dn1 & dn2), 2x sqlnodes (sn1 & sn2) & 1x mgmt node (sn1).

Technical objective:

  • To upgrade NDB to 7.4 (or follow the procedure to go from/to any ‘consecutive’ versions).

Personal objective:

  • Share that this process can be done, how it’s done, and share the issues that I came across in my non-production environment.

Let’s take for granted that your wild cluster is installed and working with a similar configuration to:

  • Installed as o.s. root user (and all files are owned:grouped as such…).
  • mysql o.s. user exists.
  • tarball install / binaries are all found at /usr/local
  • Datadirs are at /opt/mysql/738: mgm_data (for management node logs), ndbd_data (for datanode info) & data (for sqlnodes).
  • config.ini & my.cnf are found at $BASEDIR/conf (created this dir myself ‘cos I’m like that).
  • All other things are default, i.e. sqlnode / datanode / mgmtnode ports, sockets are located in datadirs, etc.
  • There are some changes in the config files, as would be in a prod env.

And the cluster setup looks something like this

# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @10.0.0.12  (mysql-5.6.22 ndb-7.3.8, Nodegroup: 0, *)
id=4    @10.0.0.13  (mysql-5.6.22 ndb-7.3.8, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.0.10  (mysql-5.6.22 ndb-7.3.8)

[mysqld(API)]   5 node(s)
id=10   @10.0.0.10  (mysql-5.6.22 ndb-7.3.8)
id=11   @10.0.0.11  (mysql-5.6.22 ndb-7.3.8)
id=12 (not connected, accepting connect from any host)
id=13 (not connected, accepting connect from any host)
id=14 (not connected, accepting connect from any host)

Getting ready:

Remember, all I’m doing here is sharing my experience of following Importing a Cluster into MySQL Cluster Manager so hopefully it will help someone see it in a different env, and for a specific use (upgrade, go CGE, etc.).

Preparing wild cluster:

In order to import the wild cluster into MCM, we need to 'shape' or 'tame' it, that is, adjust certain configuration not normally found nor done in a cluster environment, so that the import process can succeed.
On both of the sqlnodes we need an mcmd user, otherwise it's going to be impossible for MCM to manage them:

mysql> CREATE USER 'mcmd'@'localhost' IDENTIFIED BY 'super';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'localhost' WITH GRANT OPTION;

Every node of the wild cluster has to be started with its node ID explicitly specified with the --ndb-nodeid option on the command line, which, as the process list shows, is not yet the case:

# ps -ef | grep ndb
root  3025     1  0 Aug16 ?  00:00:25 ndbmtd -c 10.0.0.10 --initial
root  3026  3025  8 Aug16 ?  01:24:30 ndbmtd -c 10.0.0.10 --initial

The management node also has to be started without caching the config. Changing and restarting each process (both sqlnodes & dnodes) can be done at this point, making both changes in the same restart:

# ndb_mgmd --configdir=/usr/local/mysql-cluster-gpl-7.3.8-linux-glibc2.5-x86_64/conf \
  -f /usr/local/mysql-cluster-gpl-7.3.8-linux-glibc2.5-x86_64/conf/config.ini \
  --config-cache=false --ndb-nodeid=1
MySQL Cluster Management Server mysql-5.6.22 ndb-7.3.8
2017-08-17 08:35:57 [MgmtSrvr] INFO -- Skipping check of config directory since config cache is disabled.

# ps -ef | grep ndb
root  3560     1  0 08:42 ?  00:00:00 ndbmtd --ndb-nodeid=3 -c 10.0.0.10
root  3561  3560  8 08:42 ?  00:00:48 ndbmtd --ndb-nodeid=3 -c 10.0.0.10
root  3694     1  0 08:38 ?  00:00:00 ndbmtd --ndb-nodeid=4 -c 10.0.0.10
root  3695  3694  8 08:38 ?  00:00:16 ndbmtd --ndb-nodeid=4 -c 10.0.0.10

# ps -ef | grep mysqld
mysql 3755  2975  7 08:49 pts/0  00:00:00 mysqld --defaults-file=/usr/local/mysql/conf/my.cnf --user=mysql --ndb-nodeid=10
mysql 3694  2984  2 08:50 pts/0  00:00:01 mysqld --defaults-file=/usr/local/mysql/conf/my.cnf --user=mysql --ndb-nodeid=11

Ok, all well and good up to here.

Just a word of warning: the MCM daemon, mcmd, which runs on all servers of a cluster platform, cannot be run as the OS user 'root', so it is normally run as 'mysql'. This means that all the other processes of our wild cluster should be run as 'mysql' too; otherwise the mcmd daemon can't bring those processes under control (stop/start, etc.). This also impacts file permissions (data files, pid files, logs, directories), so maybe it's a good time to sort that out now, as we're talking about a production environment. SOOOOOOO, when we stop the management node to restart it, let's make that change: first the file & directory perms, and then starting the process itself. As you probably won't have separate binaries for each process, and the management node(s) will/should be sharing binaries with the sqlnodes, changing ownership of the files/dirs shouldn't be too much of a problem.

eg. On sn1 (for the management node restart only):

# cd /opt/mysql/738
# chown -R mysql:mysql .
# ls -lrt
total 16
drwxr-xr-x.  7 mysql mysql   4096 ago 17 11:40 data
drwxr-xr-x.  2 mysql mysql   4096 ago 17 13:01 mgm_data
# cd /usr/local/mysql
# chown -R mysql:mysql .
# ls -lrt
total 180
-rw-r--r--.  1 mysql mysql   2496 ene  9  2015 README
-rw-r--r--.  1 mysql mysql  17987 ene  9  2015 COPYING
-rw-r--r--.  1 mysql mysql 101821 ene  9  2015 INSTALL-BINARY
drwxr-xr-x.  4 mysql mysql   4096 ago 16 15:25 include
drwxr-xr-x.  2 mysql mysql   4096 ago 16 15:26 bin
drwxr-xr-x.  3 mysql mysql   4096 ago 16 15:26 lib
drwxr-xr-x. 32 mysql mysql   4096 ago 16 15:26 share
drwxr-xr-x.  2 mysql mysql   4096 ago 16 15:26 support-files
drwxr-xr-x.  2 mysql mysql   4096 ago 16 15:26 docs
drwxr-xr-x.  3 mysql mysql   4096 ago 16 15:26 data
drwxr-xr-x.  4 mysql mysql   4096 ago 16 15:26 man
drwxr-xr-x.  4 mysql mysql   4096 ago 16 15:26 sql-bench
drwxr-xr-x.  2 mysql mysql   4096 ago 16 15:26 scripts
drwxr-xr-x. 10 mysql mysql   4096 ago 16 15:26 mysql-test
-rw-r--r--.  1 mysql mysql    943 ago 16 16:04 my.cnf
-rw-r--r--.  1 mysql mysql    943 ago 16 16:04 my-new.cnf
drwxr-xr-x.  2 mysql mysql   4096 ago 16 16:19 conf
# pwd
/usr/local/mysql

Now it's time to kill the dnode angel process, in preparation for MCM taking control of the processes (otherwise, when MCM stops a process, the angel process tries to restart it out of MCM's control, and that's when we hit problems):

# ps -ef | grep ndb
root 3560    1 0 08:42 ?  00:00:02 ndbmtd --ndb-nodeid=3 -c 10.0.0.10
root 3561 3560 8 08:42 ?  00:08:32 ndbmtd --ndb-nodeid=3 -c 10.0.0.10
# kill -9 3560
# ps -ef | grep ndb
root 3561    1 8 08:42 ?  00:08:38 ndbmtd --ndb-nodeid=3 -c 10.0.0.10
# ps aux | grep ndb
root 3561 8.6 63.9 1007296 474120 ? SLl 08:42 8:38 ndbmtd --ndb-nodeid=3 -c 10.0.0.10

Do this on both dnodes.

Now we have to adjust the .pid file within the dnode's datadir, as it holds the angel process ID:

# cd /opt/mysql/738/ndbd_data/
# ls -lrt
-rw-r--r--. 1 mysql mysql 4 Aug 17 13:20 ndb_3.pid
# sed -i 's/3560/3561/' ndb_3.pid
# more ndb_3.pid
3561

Again, to be done on both dnodes.

Now for the sqlnodes: here we have to change the name of the pid file itself, not its contents:

# cd /opt/mysql/738/data/
# ls -lrt *.pid
-rw-rw----. 1 mysql mysql 5 ago 17 08:49 sn1.pid
# more *.pid
3755
# ps -ef | grep mysqld | grep -v grep
mysql 3755 2975 1 08:49 pts/0 00:02:27 mysqld --defaults-file=/usr/local/mysql/conf/my.cnf --user=mysql --ndb-nodeid=10

Time to copy, not replace, the .pid file so that it follows the cluster-ready naming convention (ndb_<nodeid>.pid):

# cp sn1.pid ndb_10.pid

On the other sqlnode:

# cd /opt/mysql/738/data/
# cp sn2.pid ndb_11.pid

Now time to create the MCM-managed Cluster to import into.

First up, we'll need an MCM datadir to store the datadirs, logs, etc. (if you want to change this later, it's much easier from within MCM, using the "set" command, so just do it then):

# mkdir /opt/mysql/748/mcm_data
# cd /opt/mysql/748/mcm_data
# chown -R mysql:mysql .

As mcmd needs to run as mysql, change the permissions of the binaries, and also set the manager-directory of your choice:

# cd /usr/local/mcm-1.4.0-cluster-7.4.8-linux-glibc2.5-x86-64bit/mcm1.4.0/etc
# chgrp -R mysql .
# vi mcmd.ini
  ...
  manager-directory = /opt/mysql/748/mcm_data

Let’s make it easier in our env to execute all this:

# su - mysql
# vi .bash_profile
  ...
  export PATH=$PATH:/usr/local/mcm-1.4.0-cluster-7.4.8-linux-glibc2.5-x86-64bit/mcm1.4.0/bin

And let’s start MySQL Cluster Manager, i.e. mcmd, as the o.s. user ‘mysql’:

# mcmd --defaults-file=/usr/local/mcm-1.4.0-cluster-7.4.8-linux-glibc2.5-x86-64bit/mcm1.4.0/etc/mcmd.ini --daemon

Ok. All working fine.

Let’s create the cluster to import into.

Ok, so I used the OS root user to create everything. I suppose I'm used to it, but feel free to do it however you see fit. It won't make any difference to mcmd, as that's running as mysql, so carry on:

# sudo -i
# export PATH=$PATH:/usr/local/mcm-1.4.0-cluster-7.4.8-linux-glibc2.5-x86-64bit/mcm1.4.0/bin
# mcm

MCM needs a site with all the hosts that make up the cluster. By the way, if you forget a command, or want to see what else is available, try "list commands;":

mcm> create site --hosts=10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13 mysite;

Add the path to the cluster binaries that we’re using now:

mcm> add package --basedir=/usr/local/mysql-cluster-gpl-7.3.8-linux-glibc2.5-x86_64 cluster738;

A little test here, thinking: "if MCM is so clever, maybe it can detect that we're on 7.3.8, let us create the cluster to import into with 7.4.8, and we'll have upgraded and imported in one fell swoop!". Alas, nice idea, but after creating the cluster with the 748 package for import, and adding the processes/nodes to the cluster, the dryrun config check errors out:

mcm> add package --basedir=/usr/local/mcm-1.4.0-cluster-7.4.8-linux-glibc2.5-x86-64bit/cluster cluster748;
mcm> import config --dryrun mycluster;
ERROR 5307 (00MGR): Imported process version 7.3.8 does not match configured process mysqld 11 version 7.4.8 for cluster mycluster

So, back to the 7.3.8 binaries / package:

mcm> add package --basedir=/usr/local/mysql-cluster-gpl-7.3.8-linux-glibc2.5-x86_64 cluster738;
mcm> create cluster --import --package=cluster738 --processhosts=ndb_mgmd:1@10.0.0.10,ndbmtd:3@10.0.0.12,ndbmtd:4@10.0.0.13,mysqld:10@10.0.0.10,mysqld:11@10.0.0.11,ndbapi:12@*,ndbapi:13@*,ndbapi:14@* mycluster;

Now, this seems simple, right? Ok, well it is, but you have to match the processhosts to those that appear in the "ndb_mgm -e show" output, i.e. all the ndbapi/mysqld API entries that appear there. So if you have 8 rows returned from that, you'll need 8 entries in the --processhosts option. It will complain otherwise.

mcm> show status -r mycluster;
+--------+----------+-----------+--------+-----------+------------+
| NodeId | Process  | Host      | Status | Nodegroup | Package    |
+--------+----------+-----------+--------+-----------+------------+
| 1      | ndb_mgmd | 10.0.0.10 | import |           | cluster738 |
| 3      | ndbmtd   | 10.0.0.12 | import | n/a       | cluster738 |
| 4      | ndbmtd   | 10.0.0.13 | import | n/a       | cluster738 |
| 10     | mysqld   | 10.0.0.10 | import |           | cluster738 |
| 11     | mysqld   | 10.0.0.11 | import |           | cluster738 |
| 12     | ndbapi   | *         | import |           |            |
| 13     | ndbapi   | *         | import |           |            |
| 14     | ndbapi   | *         | import |           |            |
+--------+----------+-----------+--------+-----------+------------+
8 rows in set (0,05 sec)

Let's do a test now that we've got the 7.3.8 binaries assigned to the site and a shell cluster created:

mcm> import config --dryrun mycluster;
ERROR 5321 (00MGR): No permissions for pid 3700 running on sn1

This means that some processes are being executed/run by someone who isn't the mcmd user, e.g. root.
Now I need to go to each process and restart it as mysql (killing angel processes, etc.).
Also remember that mcmd can't be run by root; we fixed that at the beginning of this post (DIDN'T WE?), so I hope you don't hit that one. With everything running as mysql, the dryrun now passes:

+----------------------------------------------------------------------------+
| Command result                                                             |
+----------------------------------------------------------------------------+
| Import checks passed. Please check log for settings that will be applied.  |
+----------------------------------------------------------------------------+
1 row in set (5.55 sec)

Now that all the checks have passed, it leads me to think: what about all the personalized config that I have in the config.ini & my.cnf files? Well, we could run:

mcm> set DataMemory:ndbmtd=20M, IndexMemory:ndbmtd=10M, DiskPageBufferMemory:ndbmtd=4M, StringMemory:ndbmtd=5, MaxNoOfConcurrentOperations:ndbmtd=2K, MaxNoOfConcurrentTransactions:ndbmtd=2K, SharedGlobalMemory:ndbmtd=500K, MaxParallelScansPerFragment:ndbmtd=16, MaxNoOfAttributes:ndbmtd=100, MaxNoOfTables:ndbmtd=20, MaxNoOfOrderedIndexes:ndbmtd=20, HeartbeatIntervalDbDb:ndbmtd=500, HeartbeatIntervalDbApi:ndbmtd=500, TransactionInactiveTimeout:ndbmtd=500, LockPagesInMainMemory:ndbmtd=1, ODirect:ndbmtd=1, MaxNoOfExecutionThreads:ndbmtd=4, RedoBuffer:ndbmtd=32M mycluster;
+-----------------------------------+
| Command result                    |
+-----------------------------------+
| Cluster reconfigured successfully |
+-----------------------------------+
1 row in set (0.60 sec)

But there is no need, because MCM does all that for you. As the reference manual mentions, we just need to go to the MCM manager directory, and it's all in there:

# cd /opt/mysql/748/mcm_data/clusters/mycluster/tmp
# ls -lrt
-rw-rw-rw-. 1 mysql mysql 3284 Aug 17 13:43 import_config.a6905b23_225_3.mcm

mcm> import config mycluster;
+--------------------------------------------------------------------------------------------+
| Command result                                                                              |
+--------------------------------------------------------------------------------------------+
| Configuration imported successfully. Please manually verify the settings before proceeding |
+--------------------------------------------------------------------------------------------+
1 row in set (5.58 sec)

Now to import:

mcm> import cluster mycluster;
+-------------------------------+
| Command result                |
+-------------------------------+
| Cluster imported successfully |
+-------------------------------+
1 row in set (3.04 sec)

Let’s make sure the Status has changed from import:

mcm> show status -r mycluster;
+--------+----------+-----------+---------+-----------+------------+
| NodeId | Process  | Host      | Status  | Nodegroup | Package    |
+--------+----------+-----------+---------+-----------+------------+
| 1      | ndb_mgmd | 10.0.0.10 | running |           | cluster738 |
| 3      | ndbmtd   | 10.0.0.12 | running | 0         | cluster738 |
| 4      | ndbmtd   | 10.0.0.13 | running | 0         | cluster738 |
| 10     | mysqld   | 10.0.0.10 | running |           | cluster738 |
| 11     | mysqld   | 10.0.0.11 | running |           | cluster738 |
| 12     | ndbapi   | *         | added   |           |            |
| 13     | ndbapi   | *         | added   |           |            |
| 14     | ndbapi   | *         | added   |           |            |
+--------+----------+-----------+---------+-----------+------------+
8 rows in set (0.06 sec)

I know you're probably eager to see if the MCM 1.4 autotune works for 7.3 NDB clusters. Well, it doesn't, I'm afraid:

mcm> autotune --dryrun --writeload=low realtime mycluster;
ERROR 5402 (00MGR): Autotuning is not supported for cluster version 7.3.8

Upgrade time:

mcm> list packages mysite;
+------------+---------------------------------------------------------------------+-----------------------------------------+
| Package    | Path                                                                | Hosts                                   |
+------------+---------------------------------------------------------------------+-----------------------------------------+
| cluster738 | /usr/local/mysql-cluster-gpl-7.3.8-linux-glibc2.5-x86_64            | 10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13 |
+------------+---------------------------------------------------------------------+-----------------------------------------+
mcm> add package --basedir=/usr/local/mcm-1.4.0-cluster-7.4.8-linux-glibc2.5-x86-64bit/cluster cluster748;
mcm> list packages mysite;
+------------+---------------------------------------------------------------------+-----------------------------------------+
| Package    | Path                                                                | Hosts                                   |
+------------+---------------------------------------------------------------------+-----------------------------------------+
| cluster738 | /usr/local/mysql-cluster-gpl-7.3.8-linux-glibc2.5-x86_64            | 10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13 |
| cluster748 | /usr/local/mcm-1.4.0-cluster-7.4.8-linux-glibc2.5-x86-64bit/cluster | 10.0.0.10,10.0.0.11,10.0.0.12,10.0.0.13 |
+------------+---------------------------------------------------------------------+-----------------------------------------+
mcm> upgrade cluster --package=cluster748 mycluster;

In another window:

mcm> show status -r mycluster;
+--------+----------+-----------+---------+-----------+------------+
| NodeId | Process  | Host      | Status  | Nodegroup | Package    |
+--------+----------+-----------+---------+-----------+------------+
| 1      | ndb_mgmd | 10.0.0.10 | running |           | cluster748 |
| 3      | ndbmtd   | 10.0.0.12 | stopped | 0         | cluster738 |
| 4      | ndbmtd   | 10.0.0.13 | running | 0         | cluster738 |
| 10     | mysqld   | 10.0.0.10 | running |           | cluster738 |
| 11     | mysqld   | 10.0.0.11 | running |           | cluster738 |
| 12     | ndbapi   | *         | added   |           |            |
| 13     | ndbapi   | *         | added   |           |            |
| 14     | ndbapi   | *         | added   |           |            |
+--------+----------+-----------+---------+-----------+------------+
8 rows in set (0.05 sec)

Ok, all going well:

mcm> show status -r mycluster;
+--------+----------+-----------+----------+-----------+------------+
| NodeId | Process  | Host      | Status   | Nodegroup | Package    |
+--------+----------+-----------+----------+-----------+------------+
| 1      | ndb_mgmd | 10.0.0.10 | running  |           | cluster748 |
| 3      | ndbmtd   | 10.0.0.12 | running  | 0         | cluster748 |
| 4      | ndbmtd   | 10.0.0.13 | starting | 0         | cluster748 |
| 10     | mysqld   | 10.0.0.10 | running  |           | cluster738 |
| 11     | mysqld   | 10.0.0.11 | running  |           | cluster738 |
| 12     | ndbapi   | *         | added    |           |            |
| 13     | ndbapi   | *         | added    |           |            |
| 14     | ndbapi   | *         | added    |           |            |
+--------+----------+-----------+----------+-----------+------------+
8 rows in set (0.08 sec)

But, life is never as good as it is in fairy tales:

ERROR 7006 (00MGR): Process error: Node 10 :
17-08-17 17:18:13 4449 [Note] NDB Util: Starting...
2017-08-17 17:18:13 4449 [Note] NDB Util: Wait for server start completed
2017-08-17 17:18:13 4449 [ERROR] Aborting
2017-08-17 17:18:13 4449 [Note] Binlog end
2017-08-17 17:18:13 4449 [Note] NDB Util: Stop
2017-08-17 17:18:13 4449 [Note] NDB Util: Wakeup
2017-08-17 17:18:13 4449 [Note] NDB Index Stat: Starting...
2017-08-17 17:18:13 4449 [Note] NDB Index Stat: Wait for server start completed

mcm> show status -r mycluster;
+--------+----------+-----------+---------+-----------+------------+
| NodeId | Process  | Host      | Status  | Nodegroup | Package    |
+--------+----------+-----------+---------+-----------+------------+
| 1      | ndb_mgmd | 10.0.0.10 | running |           | cluster748 |
| 3      | ndbmtd   | 10.0.0.12 | running | 0         | cluster748 |
| 4      | ndbmtd   | 10.0.0.13 | running | 0         | cluster748 |
| 10     | mysqld   | 10.0.0.10 | failed  |           | cluster748 |
| 11     | mysqld   | 10.0.0.11 | running |           | cluster738 |
| 12     | ndbapi   | *         | added   |           |            |
| 13     | ndbapi   | *         | added   |           |            |
| 14     | ndbapi   | *         | added   |           |            |
+--------+----------+-----------+---------+-----------+------------+
8 rows in set (0.05 sec)

On sn1, the error log mysqld_738.err reads:

2017-08-17 17:25:20 4518 [Note] NDB Util: Wait for server start completed
2017-08-17 17:25:20 4518 [ERROR] Aborting

The mcmd.log:

2017-08-17 17:25:20 4518 [Note] NDB Util: Stop
2017-08-17 17:25:20 4518 [Note] NDB Util: Wakeup
2017-08-17 17:25:20 4518 [Note] NDB Index Stat: Starting...
2017-08-17 17:25:20 4518 [Note] NDB Index Stat: Wait for server start completed
2017-08-17 17:25:24.452: (message) [T0x1b1a050 CMGR ]: Got new message mgr_cluster_process_status {a6905b23 396 0} 10 failed
2017-08-17 17:25:24.457: (message) [T0x1b1a050 CMGR ]: Got new message x_trans {a6905b23 397 0} abort_trans pc=19
2017-08-17 17:25:24.459: (message) [T0x1b1a050 CMGR ]: Got new message mgr_process_operationstatus {0 0 0} 10 failed
2017-08-17 17:25:24.461: (message) [T0x1b1a050 CMGR ]: req_id 80 Operation finished with failure for configversion {a6905b23 385 3}
2017-08-17 17:25:24.466: (warning) [T0x1b1a050 CMGR ]: Operation failed : 7006 Process error: Node 10 : 17-08-17 17:25:20 4518 [Note] NDB Util: Starting... 2017-08-17 17:25:20 4518 [Note] NDB Util: Wait for server start completed 2017-08-17 17:25:20 4518 [ERROR] Aborting

Reviewing the my.cnf, the following settings needed to be changed, as they referenced the old binaries.

But most importantly, StopOnError=0 is required. That was my gotcha!

set lc_messages_dir:mysqld:10=/usr/local/mcm-1.4.0-cluster-7.4.8-linux-glibc2.5-x86-64bit/cluster/share mycluster;
set lc_messages_dir:mysqld:11=/usr/local/mcm-1.4.0-cluster-7.4.8-linux-glibc2.5-x86-64bit/cluster/share mycluster;
set StopOnError:ndbmtd=0 mycluster;

This last command restarts the cluster, without upgrading it, leaving us with:

mcm> show status -r mycluster;
+--------+----------+-----------+---------+-----------+------------+
| NodeId | Process  | Host      | Status  | Nodegroup | Package    |
+--------+----------+-----------+---------+-----------+------------+
| 1      | ndb_mgmd | 10.0.0.10 | running |           | cluster748 |
| 3      | ndbmtd   | 10.0.0.12 | running | 0         | cluster748 |
| 4      | ndbmtd   | 10.0.0.13 | running | 0         | cluster748 |
| 10     | mysqld   | 10.0.0.10 | running |           | cluster748 |
| 11     | mysqld   | 10.0.0.11 | running |           | cluster748 |
| 12     | ndbapi   | *         | added   |           |            |
| 13     | ndbapi   | *         | added   |           |            |
| 14     | ndbapi   | *         | added   |           |            |
+--------+----------+-----------+---------+-----------+------------+
8 rows in set (0.05 sec)

It looks like it's upgraded, but we know it hasn't been able to run the full checks and upgrade process, so let's do it properly (and remember for the future that we need to review our config.ini and params):

mcm> upgrade cluster --package=cluster748 mycluster;
+-------------------------------+
| Command result                |
+-------------------------------+
| Cluster upgraded successfully |
+-------------------------------+
1 row in set (1 min 53.72 sec)

Whilst MCM upgrades, it normally restarts each process in its correct order, one by one, on its own, without any need for human intervention.

However, as the upgrade process had previously been run and had failed at the sqlnode step, that step still needed to be carried out; hence, when MCM stopped and started each sqlnode, it did so twice, ensuring that the configuration changes we made are reflected in the MCM config.

mcm> show status -r mycluster;
+--------+----------+-----------+---------+-----------+------------+
| NodeId | Process  | Host      | Status  | Nodegroup | Package    |
+--------+----------+-----------+---------+-----------+------------+
| 1      | ndb_mgmd | 10.0.0.10 | running |           | cluster748 |
| 3      | ndbmtd   | 10.0.0.12 | running | 0         | cluster748 |
| 4      | ndbmtd   | 10.0.0.13 | running | 0         | cluster748 |
| 10     | mysqld   | 10.0.0.10 | running |           | cluster748 |
| 11     | mysqld   | 10.0.0.11 | running |           | cluster748 |
| 12     | ndbapi   | *         | added   |           |            |
| 13     | ndbapi   | *         | added   |           |            |
| 14     | ndbapi   | *         | added   |           |            |
+--------+----------+-----------+---------+-----------+------------+
8 rows in set (0.01 sec)

And we're done: imported from the wild into an MCM-managed environment, and upgraded with MCM. So much simpler.

Happy mcm’ing!


MySQL Connector/NET 6.10.3 rc has been released

Dear MySQL users,

MySQL Connector/Net 6.10.3 rc is the fourth release to support
Scaffold-DbContext, which enables the creation of corresponding model
classes from an existing database that are compatible with Entity
Framework (EF) Core 1.1.

To download MySQL Connector/Net 6.10.3 rc, see the “Development
Releases” tab at http://dev.mysql.com/downloads/connector/net/

Changes in MySQL Connector/Net 6.10.3 (2017-08-18, Release Candidate)

Known limitation of this release:

   The use of the SSL protocol is restricted to TCP connections.
   Connections using Unix sockets, named pipes, and shared memory do not
   support SSL mode.

Functionality Added or Changed

   * The following methods are available for use with EF Core in
     asynchronous command and connection operations:
       + Microsoft.EntityFrameworkCore.DbContext.AddAsync
       + Microsoft.EntityFrameworkCore.DbContext.AddRangeAsync
       + Microsoft.EntityFrameworkCore.DbContext.FindAsync
       + Microsoft.EntityFrameworkCore.DbContext.SaveChangesAsync
       + Microsoft.EntityFrameworkCore.Infrastructure.DatabaseFacade.EnsureDeletedAsync
       + Microsoft.EntityFrameworkCore.Infrastructure.DatabaseFacade.EnsureCreatedAsync
       + Microsoft.EntityFrameworkCore.DbContext.ToListAsync

   * Support for connections using Unix sockets was extended to include
     MySQL servers deployed on the Linux host operating system in .NET
     Core scenarios. To specify a Unix socket connection, set the value
     of the Server connection-string option to the path of the socket
     file and the Protocol option to unix. For example:

       "server=/path/to/socket;protocol=unix;user=root;password=mypass"

   * The AutoEnlist and IncludeSecurityAsserts connection-string options
     are not appropriate for use by applications that target .NET Core
     and now return an error when used.

   * EF Core: Support for explicit loading was added. Explicit loading
     is an object-relational mapper (O/RM) pattern introduced in EF Core
     1.1.0, which enables .NET developers to explicitly load related
     data from the database at a later time.

   * EF Core: Support for scaffolding a DbContext from multiple
     databases was added. With multiple databases (or schemas of tables)
     specified, the resulting entity classes are created within a single
     context. EF Core CLI usage example:

       dotnet ef dbcontext scaffold "connection-string" MySql.Data.EntityFrameworkCore --schema world --schema sakila

     Package Manager Console (Visual Studio) usage example:

       Scaffold-DbContext "connection-string" MySql.Data.EntityFrameworkCore -Schemas world,sakila

   * The following connection-string options are not currently supported
     for use by applications that target .NET Core and now return an
     error when used:
       + SharedMemoryName
       + IntegratedSecurity
       + PipeName
       + Logging
       + UseUsageAdvisor
       + UsePerformanceMonitor
       + InteractiveSession
       + Replication

Bugs Fixed

   * EF Core: Some methods in the DbContext class were not supported for
     use with asynchronous operations. (Bug #26448321, Bug #84814)

Nuget packages are available at:

https://www.nuget.org/packages/MySql.Data/6.10.3-rc
https://www.nuget.org/packages/MySql.Web/6.10.3-rc
https://www.nuget.org/packages/MySql.Data.Entity/6.10.3-rc
https://www.nuget.org/packages/MySql.Data.EntityFrameworkCore/6.10.3-rc
https://www.nuget.org/packages/MySql.Data.EntityFrameworkCore.Design/6.10.3-rc

Enjoy and thanks for the support!

On behalf of the MySQL Release Team,
Nawaz Nazeer Ahamed

This Week in Data with Colin Charles: Percona Live Europe!

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Has a week passed already? Welcome back to the second column. A lot of time has been spent neck deep in getting speakers accepted and scheduled for Percona Live Open Source Database Conference Europe 2017 in Dublin, as well as organizing the conference sponsors.

Percona Live Europe Dublin

At the time of writing, we are six weeks away from the conference, so a little over a month! Have you registered yet?

We have 12 tutorials that cover a wide range of topics: ProxySQL (from the author Rene Cannao), Orchestrator (from the author Shlomi Noach), practical Couchbase (to name a few). If we did a technology word cloud, the coverage includes MongoDB, Docker, Elastic, Percona Monitoring and Management (PMM), Percona XtraDB Cluster 5.7, MySQL InnoDB Cluster and Group Replication.

In addition to that, if you're a MySQL beginner (or thinking of a career change), there is a six-hour boot camp titled MySQL in a Nutshell (Part 1 and Part 2)! Come prepared with your laptop, and leave a MySQL DBA!

Sessions are scheduled, and most of the content is already online: check out day 1, and day 2. We have 104 sessions scheduled, so there’s plenty to choose from.

Remember that you have till 7:00 a.m. UTC-1, August 16th, 2017 to book the group rate at the event venue for €250/night. Use code PERCON.

Releases
  • orchestrator/raft: Pre-release 3.0 is available. I'm a huge fan of Orchestrator, and now you can set up high availability for orchestrator via the Raft consensus protocol.
  • MariaDB 10.0.32 is out, and it comes with a new Percona XtraDB, Percona TokuDB and a new InnoDB. You’ll want this release if you’re using TokuDB, as it merges from TokuDB 5.6.36-82.1 (which fixes the two issues problem).
  • If you encountered the TokuDB problems above, you’ll want to look at MariaDB 10.1.26. One surprise hidden in the release notes: MariaDB Backup is now a stable/GA release. Have you used it yet?

I look forward to feedback/tips via e-mail at colin.charles@percona.com or I’m @bytebot on Twitter.

MySQL Connector/Python 2.1.7 GA has been released

Dear MySQL users,

MySQL Connector/Python 2.1.7 GA is the fifth GA version of the 2.1 release series of the pure Python database driver for MySQL. It can be used in production environments.

MySQL Connector/Python version 2.1.7 GA is compatible with MySQL Server versions 5.5 and greater. Python 2.6 and greater, as well as Python 3.4 and greater, are supported. Python 2.4, 2.5, 3.1, 3.2 and 3.3 are not supported.

MySQL Connector/Python 2.1.7 is available for download from:

http://dev.mysql.com/downloads/connector/python/#downloads

MySQL Connector/Python 2.1.7 (Commercial) will be available for download on the My Oracle Support (MOS) website. This release will be available on eDelivery (OSDC) in next month's upload cycle.

The ChangeLog file included in the distribution contains a brief summary of changes in MySQL Connector/Python 2.1.7. For a more complete list of changes, see below or online at:

http://dev.mysql.com/doc/relnotes/connector-python/en/

Enjoy!

Changes in MySQL Connector/Python 2.1.7 (2017-08-18, General Availability)

Functionality Added or Changed

   * Connector/Python is now compatible with Django 1.11. Thanks to
     Brian Tyndall for the patch for some of the work. (Bug #25965922,
     Bug #86105, Bug #26257499, Bug #86652)

   * Connector/Python now attempts to connect using TLS by default if
     the server supports secure connections. (Bug #21947091)

Bugs Fixed

   * Prepared statements did not work with a MySQL 8.0 server. (Bug #26376334)

   * With a connection character set of utf8mb4, multiple-row insert
     operations failed with an error of LookupError: unknown encoding:
     utf8mb4. (Bug #24659561, Bug #82948)

   * Creating a Connection instance with use_pure=True could lead to the
     underlying socket not being closed if the user name or password
     were invalid. Thanks to Vilnis Termanis for the patch.
     (Bug #24342757, Bug #82324)

   * For cursors created with named_tuple=True, MySQLCursorNamedTuple
     objects could leak memory. Thanks to Piotr Jurkiewicz for the patch
     on which this fix was based. (Bug #22880163, Bug #80621)

   * The C Extension leaked memory if used to execute INSERT statements
     that inserted Unicode strings. (Bug #22825962, Bug #79887)

   * The escape_string() method leaked memory. (Bug #22810810, Bug #79903)

   * With Python 2.x, for a call to encode('utf8') on a bytestring that
     was serialized from unicode, Python attempted to decode the string
     using the 'ascii' codec and encode it back using 'utf8'. The result
     was an encoding failure for bytestrings that contain non-ASCII
     characters. (Bug #22564149, Bug #79993)
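
To see the new TLS-by-default behavior in practice, a quick check like the following works (a minimal sketch; host, user, password and database are placeholders):

import mysql.connector

# With 2.1.7, TLS is negotiated automatically when the server supports it.
cnx = mysql.connector.connect(host='127.0.0.1', user='app',
                              password='secret', database='test')
cur = cnx.cursor()
cur.execute("SHOW STATUS LIKE 'Ssl_cipher'")
print(cur.fetchone())  # a non-empty cipher value means TLS is in use
cur.close()
cnx.close()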

Documentation
——————–
Online: http://dev.mysql.com/doc/connector-python/en/index.html
The source distribution includes the manual in various formats under
the docs/ folder.

Reporting Bugs
——————–
We welcome and appreciate your feedback and bug reports:
http://bugs.mysql.com/

On Behalf of the MySQL/ORACLE RE Team,
Gipson Pulla

Log Buffer #518: A Carnival of the Vanities for DBAs

This Log Buffer Edition covers posts from various blogs on the technologies and news of Oracle, SQL Server and MySQL.

Oracle:

Storage Server: datasets, snapshots and performance

A Different Cause for enq: TM – contention

ODA X6-2M – How to create your own ACFS file system

Oracle New Version Numbering

KBHS-00600: Internal Error, Arguments [1] [kbhshtCreateDataBucket] Error During Backup To Oracle Cloud

SQL Server:

Using Database Master Keys in SQL Server

SQL Server Management Studio Tips

Sqlcmd is not just a script execution tool. It gives us access to internal variables and commands that we can use to automate code execution.

Additional SERVERPROPERTY properties in SQL Server

Simple SQL: Random Thoughts

MySQL:

The danger of no Primary Key when replicating in RBR (and a partial protection with MariaDB 10.1)

What’s New With MySQL Replication in MySQL 8.0

Docker Secrets and MySQL Password Management

More Details about InnoDB Compression Levels (innodb_compression_level)

Extending the Power of MariaDB ColumnStore with User Defined Functions

MySQL Community team @Conferences in Aug-Sep 2017

It's our pleasure to announce that the MySQL Community team is supporting multiple shows around the world. Please find below a list of the conferences & events where you can find us in August & September:

NA&LAD
  • UbuCon LA, Lima, Peru, Aug 18-19, 2017
    • The MySQL Community team is supporting this small Ubuntu conference in LAD as a Platinum sponsor.
  • Pacific Northwest PHP, Seattle, US, Sep 9-11, 2017
    • You can find our team at the MySQL Community booth in the expo area.
  • Madison PHP, Madison, US, Sep 22-23, 2017
    • The MySQL Community team is supporting this PHP conference as a Community partner.
EMEA
  • FrOSCon, Sankt Augustin, Germany, Aug 19-20, 2017
    • The MySQL Community team is, as in previous years, a Silver sponsor of the FrOSCon conference. Our colleague Carsten Thalheimer, Technical Sales Consultant, will give a talk on MySQL 5.7 - InnoDB Cluster [HA built in], scheduled for Saturday, Aug 19 @ 14:00.
    • Do not miss the chance to come to the MySQL booth in the expo area and talk to us!
  • WebExpo, Prague, Czech Republic, Sep 22-23, 2017
    • The MySQL Community team is supporting this technology conference as a Conference Partner.
  • SwanseaCon, Swansea, UK, Sep 25-26, 2017
    • You can find us at the MySQL booth in the expo area, and there is also a MySQL talk scheduled: David Stokes, MySQL Community Manager, will be talking about "Making MySQL Agile-ish". His talk is scheduled for Sep 26 @ 11:45-12:30pm.
APAC
  • Open Developer Conference, Tokyo, Japan, Aug 19-20, 2017
    • The MySQL Community team, with the help of the local MySQL team, is staffing and supporting this conference. You can find us at the MySQL booth in the expo area, and there are MySQL talks in the agenda: our colleague Ryusuke Kajiyama will be talking about the MySQL development team and model, and about the "New way of developing MySQL app with Document Store". Check the agenda for more details.
  • Open-Source Developer Forum, Beijing, China, Aug 24-25, 2017
    • The MySQL Community team, in cooperation with the local MySQL User Group (ACMUG), is organizing an open source Developer Forum for more than 300 MySQL (and not just MySQL) users.
  • db tech showcase, Tokyo, Japan, Sep 5-7, 2017
    • As is tradition, we are again part of the db tech showcase. You can find our talks in the schedule as follows:
      • Mikiya Okuno, MySQL Support Engineer on "What's new MySQL Cluster 7.5 and beyond"  
      • Ryusuke Kajiyama, MySQL Sales Consulting Senior Manager on "New Unicode Collations in MySQL 8.0"  
      • Takanori Sejima, Senior Lead Engineer: Talk topic will be determined later on.
      • Yoku0825, Oracle ACE MySQL: Talk topic will be determined later on.
  • Open Source Conference 2017 Tokyo/Fall, Tokyo, Japan, Sep 9-10, 2017
    • The MySQL Community team is supporting this event as a Gold sponsor.
    • You can find our team at the MySQL booth in the expo area, and you can also attend the MySQL talks. Our colleagues Daisuke Inagaki, MySQL Sales Consultant, and Ryusuke Kajiyama, MySQL Sales Consulting Senior Manager, will be available at the MySQL booth, and Ryusuke will be talking about what's new in MySQL 8.0 (window functions, Japanese collation, ...), MySQL InnoDB Cluster, MySQL Cloud Services, etc.
  • JCConf, Taipei, Taiwan, Sep 29, 2017
    • It is our pleasure to announce that the MySQL Community team, with the help of the local MySQL teams, is going to attend and support JCConf again this year. You will be able to find us at the MySQL booth, and there will be MySQL talks as well. For more details, please check the agenda.

Many more conferences & events will come up soon.

 
