Planet MySQL

MySQL Cluster 7.6 includes new Cluster Configurator

After dealing with Windows performance, I switched to the MySQL Cluster Configurator (MCC for short) project. This was quite a change for me, having to deal with a Python back end and a JavaScript front end, languages I was not very familiar with.

For the history of the project, please refer to Andrew's blog post; here I will concentrate on the changes made in the new version.

There are many exciting new features in MySQL Cluster 7.6.4 DMR, including the new MCC. To download MySQL Cluster 7.6.4 DMR, please go to the Development Releases tab. To see what's new in 7.6.4 DMR, please follow this link.

MySQL Cluster Configurator in short
With a single command you launch the web-based wizard, which then steps you through configuring, deploying the configuration and starting the cluster. To keep things even simpler, it will automatically detect the resources on your target machines and use these results, together with the type of workload you specify, to determine values for the key configuration parameters.
The software is part of the MySQL Cluster package (NDB). To run it on Windows, just double-click setup.bat – note that if you installed from the MSI and didn’t change the install directory then this will be located somewhere like C:\Program Files...\MySQL\MySQL Cluster 7.6. On Linux, just run ndb_setup from the bin folder.
On Windows, we provide Python and all the necessary libraries, while on Linux you should have Python 2.7 (latest) installed as well as Paramiko 2 or newer. Paramiko will install Cryptography and the other required libraries.
If you launch the installer from a desktop environment shell then the first page of the wizard will automatically open in your web browser, if not then just browse to the URL that is displayed on the command line.
Recap: Start the Python web server from a shell using setup.bat (Windows) or bin/ndb_setup (*nix); the JavaScript front end will be opened for you in the default browser. If it isn't, copy the link from the console into your favorite browser. Make sure you have administrator rights on all the hosts you will use in the cluster. Also, make sure MySQL Cluster is installed on all the boxes making up the cluster.
There is an extensive help file in the /hlp/html directory; please read it if you're new to the product.

Who is this software for
Typically, MCC would be used by developers working on Cluster software and by people looking for a quick evaluation of MySQL Cluster on various hardware configurations, including cloud instances.

New version highlights
  • Multiple reusable configurations. Take your configuration with you to work on different boxes.
  • Much safer configuration files that allow saving passwords/passphrases, enabling full restoration of an NDB Cluster from the configuration file.
  • Many new options to configure both for NDB Cluster processes as well as for mysqld. Multiple choice options have proper drop-down lists associated.
  • Many new members in Cluster and Host objects. GUI changes to present them.
  • Nodes can have an external IP address (to allow access from MCC, for example) as well as an internal IP address (for faster and safer communication inside a cloud deployment or other VPN environment).
  • Extended ways of authentication such as using different username, private key on per host basis, using keys with passphrases and so on.
  • Better detection of running cluster processes and option to stop them even if startup fails.

Language choice
JavaScript was the obvious choice since it's common to have the GUI run in a browser. Due to its limitations (nobody likes a browser sniffing around the file system), we have a back-end web server in Python handling various requests related to files, remote hosts and so on. The front and back end communicate via POST messages, so it is really important that you protect this communication. For a start, we provide self-signed certificates for securing it. If you are testing things in a sandbox, you may switch from https to http to speed things up.
Recap: The Python back end handles encryption/decryption, files, connections to local/remote hosts and so on, while the JavaScript front end provides the GUI in your browser for convenient presentation of configuration options.

Changes
Changes in configuration files and authentication methods:
The first change tackled is the configuration. Versions shipped with previous Cluster releases had two major drawbacks: they kept the configuration in cookies, and they could handle just one Cluster configuration, which was locked to the box where you created it. Keeping the Cluster configuration in cookies is particularly problematic because cookies are size-limited and very insecure.
To remedy this situation, we opted for an external configuration file saved in the current user's HOME directory (this is done through the Python web server that runs MCC); one configuration per file. The file itself is AES-encrypted using a passphrase provided in the front end and passed down to the Python web server via a POST message. The passphrase is then kept in memory for the duration of the session. With the provided passphrase and file name, the Python server sends the decrypted configuration back to the front end. The configuration is kept in the global.windowname variable for the duration of the session.

This has solved several issues:
  • Configuration now stores all your passwords, passphrases for keys and such allowing for quick and full recreation of Cluster.
  • Extending the configuration size allows for per-host credentials, i.e. each host in the Cluster can have its own way of authentication and its own set of credentials.
Essentially, when looking for a quick-and-dirty deployment over hosts that use the same credentials, it is enough to provide them on Page 2, Cluster Configuration. When working with hosts that have their own separate credentials, you can define them at the Host level (Page 3, Add Hosts). We prefer adding hosts on Page 3 via the Add Host button over defining them in the Page 2 "Host list" box.
Credentials can be:
  • Username/password.
  • Username/Key name.
  • Username/Key name/Passphrase.
  • Key name/Passphrase.
  • Key name.
  • Nothing.
If a path is not provided along with the key name, ~/.ssh is assumed. If using keys and no key name is provided, ~/.ssh/id_rsa is assumed. So, for authentication using a standard private key without a passphrase stored in the default place, you just need to check the "Key based SSH" (Page 2, Cluster level) or "Key-based auth" (Page 3, Add/Edit Host) checkbox and nothing more. If there are absolutely no credentials at the host level, the program behaves as in old versions, meaning whatever was provided at the Cluster level is used.

WARNING: Although asynchronous, the call to save configuration changes does take some time to complete. The saving of the configuration takes place after pressing the "Save&Next" button:

No changes are saved if you close the tab or browser abruptly! After making extensive changes and pressing Save&Next, I like to allow some time for the configuration to be saved. If you open the debug console, you will see the notification.
If your Cluster is all set up and ready to go and you just want to take one more look at the various configuration options, you can use the breadcrumb buttons, as they do not trigger the save method:

Since all credentials are saved, you can skip looking into loaded configuration altogether and go directly to "Deploy configuration" page.
Gotcha: In order to make configurations portable, we had to limit the usage of "localhost". If you include localhost in your Cluster, you will not be able to add any more remote hosts.
Gotcha: The list of available configurations is provided to the front end when the welcome page loads, so if you add more configurations to your HOME using external tools they will not be shown (unless you reload the page or restart the entire program). However, if you choose "New Configuration" and provide the name of an existing one, the existing configuration will be loaded.
Gotcha: "New Configuration" requires you to provide a passphrase and a confirmation. Reading an existing configuration just requires the passphrase. Please keep your passphrase(s) safe, as there is no way to reverse-engineer the contents of a configuration file without it! Each file/configuration can have its own passphrase.
Recap: There can now be any number of portable, encrypted configurations, which you can find in your HOME on the box running MCC. Each host can have its own way of authentication and its own set of credentials. We did our best to guess which auth method and credentials to use based on the input provided.

Changes in Cluster object (Page 2):
The Cluster object has two new attributes:
  • Install MySQL Cluster: Option for installing Cluster on hosts
    NONE: No installation of Cluster will take place (default).
    BOTH: Both DOCKER and installation from REPO will be considered, depending on the OS and available images. Should both installation mechanisms be available on a certain host, we will prefer REPO over DOCKER.
    REPO: For every host in Cluster, the attempt will be made to install Cluster SW from the repository URL.
    DOCKER: For every host in Cluster, the attempt will be made to install Cluster SW DOCKER image.
  • Open FW ports: Check this if you need the tool to try to open the necessary ports for Cluster processes on every host.
Gotcha: In this version "Install MySQL Cluster" is not functional, so you need Cluster installed on all hosts beforehand.

Changes in Host object and GUI (Page 3):
The Host object has undergone a major rework. Host name is now used for the external host IP address, i.e. the IP address at which the MCC web server instance can reach that particular host. Host Internal IP refers to that host's IP address inside the VPN. If there is no VPN in play, both IP addresses are the same. We strongly encourage using IP addresses here to avoid potential problems with resolving host names.
Authentication, as described above, can be a) using keys or b) ordinary username/password. If you check Key-based auth when adding/editing a host and provide nothing, the ~/.ssh/id_rsa key will be used without a passphrase. You can also define an alternate user (to the one starting MCC), which comes in handy when logging in to domains. Each key can have a passphrase, and you can provide the path to and name of the specific key to use for that host.
If you check Configure installation, you will be presented with additional fields relating to the repository and Docker image. We do our best to provide you with default values based on the OS running on the particular host you're adding/editing.

We have also added Show/Hide extended host info toggle button switching between single and double line host info representation.

Changes to Define parameters (Page 5):
We are constantly adding more configurable parameters to keep up with MySQL Server and Cluster evolution, but this is a moving target, so we ask for patience if a certain parameter you'd like to configure is missing.
In addition to many new configurable parameters, we have extended the GUI so that, for appropriate options, you get a drop-down list of allowed values.

Changes to Deploy configuration (Page 6, last):
The Start and Stop Cluster buttons now behave more intelligently in terms of determining whether the Cluster or any of its processes is running and enabling/disabling the appropriate buttons.
Previously, if the Cluster was stuck in any of the startup phases, you had to terminate MCC and kill all the processes on all of the hosts manually. Now, if you think something is wrong, you can close the progress window and it will give control back to MCC, enabling the Stop Cluster button, which you can then press to stop the stray processes properly. It will also provide you with a list of log files which you can then check for problems.
Stopping mysqld process(es) might not work if you changed the credentials from command line.


These changes are available in version 7.6.4DMR and up. We encourage you to try MCC and MySQL Cluster in your environment!

My MySQL Linux Tuning Checklist


Things I look for when optimising or debugging a Linux OS:
  • I/O scheduler (noop or deadline)
  • Linux Kernel > 3.18 (multi queuing)
  • IRQbalance > 1.0.8
  • File System: noatime, nobarrier
    • ext4: data=ordered
    • xfs: 64k
    • logfiles in different partition (if possible)
  • Swappiness
  • Jemalloc (if needed)
  • Transparent hugepages
  • Ulimit (open files)
  • Security
    • IPtables
    • PAM security

Making 30x performance improvements on Yelp’s MySQLStreamer

Introduction
MySQLStreamer is an important application in Yelp’s Data Pipeline infrastructure. It’s responsible for streaming high-volume, business-critical data from our MySQL clusters into our Kafka-powered Data Pipeline. When we rolled out the first test version of MySQLStreamer, the system operated at under 100 messages/sec. But for it to keep up with our production traffic, the system needed to process upwards of thousands of messages/sec (MySQL databases at Yelp on average receive hundreds of millions of data manipulation requests per day, and tens of thousands of queries per second). In order to make that happen, we used a variety...

Oracle Enterprise Manager for MySQL Database 13.2.1.0.0 has been released

The MySQL development team is pleased to announce that the fifth release of Oracle Enterprise Manager for MySQL Database is now Generally Available (GA).

Oracle Enterprise Manager for MySQL Database is the official MySQL plug-in that provides comprehensive performance, availability, and configuration information for Oracle's integrated enterprise IT management product line, Oracle Enterprise Manager (13c or later).

This is a maintenance release that includes a few enhancements and fixes a number of bugs. You can find more information on the contents of this release in the change log.

Use Self-Update to deploy Oracle Enterprise Manager for MySQL Database 13.2.1.0.0

Or use My Oracle Support to download and install manually. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet.

You will also find binaries on the Oracle Software Delivery Cloud.  Type "MySQL Enterprise Monitor" in the search box, or enter a license name to find Oracle Enterprise Manager for MySQL Database along with other MySQL products: "MySQL Enterprise Edition" or "MySQL Cluster Carrier Grade Edition".  Then select your platform.

Please open a ticket on My Oracle Support to report problems, request features, or give us general feedback about how this release meets your needs.

Thanks and Happy Monitoring!

- The MySQL Enterprise Tools Development Team


Preparing your Community Connector for MySQL 8 – part 1 – SHA256

As some of you are by now aware we have shipped MySQL version 8.0.4 and with it delivered a change to the default authentication plugin that is used by the server when new users are created and is announced by the server to the client.  This was done to further tighten the security of MySQL.  Please refer to this article for a good explanation of this new authentication plugin and why it is important.

If you are an application user or application developer and you want to use MySQL 8.0.3 or 8.0.4 and make use of this new authentication plugin, then you need to make sure that the connector you use supports it.  As this is a new plugin, support for it in community connectors is still being developed.  I would encourage you to reach out to the communities that create the connector you use and let them know you need this support.

This article (and the one coming after it) is for the connector developers.  There are two types of connectors out there.  The first type uses the libmysqlclient C library to implement the protocol.  The second type implements the MySQL client server protocol natively.  This article gives you important information for using libmysqlclient in your connector.  A followup article will include relevant information for native implementations.

Which Version Should You Use?

Even though your application may currently link against the libmysqlclient library that comes with MySQL 5.7, that library will only work for authenticating users who use other plugins such as mysql_native_password or sha256_password.  For any situation where new users are created with the caching_sha2_password plugin, an updated connector that supports this plugin is required.  MySQL 8.0.4-rc contains libmysqlclient version 21, which fully understands this new plugin and can work with any user account.  We are exploring the possibility of backporting support for caching_sha2_password to previous versions of libmysqlclient.
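If you need to check which authentication plugin an existing account uses, or keep a particular account on the previous default so an older connector can still log in, plain SQL is enough (the account names below are only illustrative):

SELECT user, host, plugin FROM mysql.user WHERE user = 'app_user';

CREATE USER 'legacy_user'@'%' IDENTIFIED WITH mysql_native_password BY 'secret';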

It’s important to understand that the libmysqlclient library that comes with MySQL 8 is backward compatible and can connect to previous versions of the server so there is no significant need to support building against 5.7 and 8.0 versions at the same time.

How Do I Use It?

Exactly the way you have been using it.  The only change to the API related to this new authentication plugin is a new connection option to retrieve the server public key.  Here are the two scenarios and how they might be coded.

Using an SSL connection (same as for 5.7)
In this scenario the user's credentials are passed to the server in plain text; however, they are passed over the SSL connection and are therefore transmitted securely.

MYSQL mysql;
enum mysql_ssl_mode ssl_mode = SSL_MODE_REQUIRED;

mysql_init(&mysql);
/* mysql_options() expects a pointer to the option value */
mysql_options(&mysql, MYSQL_OPT_SSL_MODE, &ssl_mode);
if (!mysql_real_connect(&mysql, "host", "user", "passwd", "database", 0, NULL, 0))
{
  fprintf(stderr, "Failed to connect to database: Error: %s\n", mysql_error(&mysql));
}

Not using an SSL connection
In this scenario we are not using an SSL connection, so it is vital that the user's credentials not be passed "in the clear".  To facilitate this, the server's public key is retrieved and used for an RSA key exchange.

MYSQL mysql;
enum mysql_ssl_mode ssl_mode = SSL_MODE_DISABLED;
bool get_server_public_key = true;

mysql_init(&mysql);
mysql_options(&mysql, MYSQL_OPT_SSL_MODE, &ssl_mode);
/* Fetch the server's RSA public key so the password is never sent in the clear */
mysql_options(&mysql, MYSQL_OPT_GET_SERVER_PUBLIC_KEY, &get_server_public_key);
if (!mysql_real_connect(&mysql, "host", "user", "passwd", "database", 0, NULL, 0))
{
  fprintf(stderr, "Failed to connect to database: Error: %s\n", mysql_error(&mysql));
}

For more information on creating encrypted connections to the server, please see this page.

Conclusion

Connectors based on libmysqlclient can be updated very easily by simply updating the version of libmysqlclient they link against.

This Week in Data with Colin Charles 26: Percona Live Schedule is Near Completion, FOSDEM Underway and a Percona Toolkit Use Case

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Percona Live Santa Clara 2018 update: tutorials have been picked, and the schedule/press release should be announced by next week. We (the committee) have rated over 300 talks, and easily 70% of the schedule should go live next week as well.

There’s a lot happening for FOSDEM this week — so expect a longer report of some sort next week.

A friend, Yanwei Zhou, DBA at Qunar in China, gave an excellent presentation in Chinese on how they use Percona Toolkit. Check it out:

Are you on Twitter? Hope you’re following the @planetmysql account.

Releases Link List Upcoming appearances Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

MySQL Connector/Python 8.0.6-rc has been released

Dear MySQL users,

MySQL Connector/Python 8.0.6-rc is the first release candidate version
of the MySQL Connector Python 8.0 series. This series adds support for
the new X DevAPI. The X DevAPI enables application developers to write
code that combines the strengths of the relational and document models
using a modern, NoSQL-like syntax that does not assume previous
experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Python, and its
usage, see http://dev.mysql.com/doc/dev/connector-python.

Please note that the X DevAPI requires MySQL Server version 5.7.12 or
higher with the X Plugin enabled. For general documentation about how to
get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/5.7/en/document-store.html.

To download MySQL Connector/Python 8.0.6-rc, see the “Development
Releases” tab at http://dev.mysql.com/downloads/connector/python/

Enjoy!

Changes in MySQL Connector/Python 8.0.6 (2018-02-01, Release
Candidate)

Functionality Added or Changed

* A new bdist_wheel distutils command was added to build a
Connector/Python wheel package.
A new --static option was added that enables static
linking for the C extension variant.

* X DevAPI: In the process of refining the definition of
the X DevAPI to cover the most relevant usage scenarios,
the following API components have been removed from the X
DevAPI implementation for Connector/Python:

+ API components that support session configurations.
The mysqlx.config namespace and all members of the
namespace.

+ The create_table, drop_table, create_view,
drop_view, and alter_view methods from the Schema
class.

* A Pylint test was added for the mysqlx module.

* A new Modify.patch() method was added to the X DevAPI as
a way to change several document attributes in one
operation; otherwise known as a JSON Merge Patch via RFC
7386.

* The create_index() method was added to the Collection
API.

* The transaction API was extended to allow setting
savepoints. The following methods have been added to the
Session object:

+ set_savepoint([name]): executes the SAVEPOINT name
SQL statement to generate a savepoint. If a name is
not provided (or None), one is generated.
The SAVEPOINT statement sets a named transaction
savepoint with a name of identifier. If the current
transaction has a savepoint with the same name, the
old savepoint is deleted and a new one is set.

+ release_savepoint(name): executes the RELEASE name
SQL statement to release a savepoint.
The RELEASE SAVEPOINT statement removes the named
savepoint from the set of savepoints of the current
transaction. No commit or rollback occurs. It
returns an error if the savepoint does not exist.

+ rollback_to(name): executes the ROLLBACK TO name SQL
statement to rollback a savepoint.
The ROLLBACK TO identifier command reverts the state
of the transaction back to what was when executed
the command SAVEPOINT identifier.
Names passed to these functions are checked to make sure
that the name is not null, an empty string, or a string
consisting only of whitespace, even though such names are
allowed by the server. For more information, see
SAVEPOINT, ROLLBACK TO SAVEPOINT, and RELEASE SAVEPOINT
Syntax
(http://dev.mysql.com/doc/refman/5.7/en/savepoint.html).
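As a reference, here is a minimal SQL sketch of the statements
these methods execute (the table and savepoint names are only
illustrative):

START TRANSACTION;
INSERT INTO t VALUES (1);
SAVEPOINT sp1;                 -- set_savepoint('sp1')
INSERT INTO t VALUES (2);
ROLLBACK TO SAVEPOINT sp1;     -- rollback_to('sp1') undoes the second insert
RELEASE SAVEPOINT sp1;         -- release_savepoint('sp1')
COMMIT;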

Bugs Fixed

* On Enterprise Linux 7, SSL connections could fail due to
the Python 2.7.9 or higher requirement. Since EL7
backported the SSL module from Python 3 (PEP466) into its
default Python 2.7.5, SSL connections are now enabled on
EL7. (Bug #27368032)

* MySQL Server 8.0 utf8mb4 collations were missing from
Connector/Python. (Bug #27277964)

* The LICENSE and README files were missing from the C
extension ( “cext”) builds. (Bug #26912787)

* Python 3.6 is now officially supported and tested.

On Behalf of Oracle/MySQL Release Engineering Team
Prashant Tekriwal

More P_S Instrumentation for Group Replication

In order to improve observability and manageability of MySQL Group Replication (GR), we enhanced its instrumentation in MySQL 8.0.4. With these goals in mind, we have extended the usage of Performance Schema (P_S) from this plugin to the included Group Communication System (GCS) module as well as its submodule eXtended Communications (XCom) which are responsible for all communication involved in the dissemination of transactions to other MySQL instances.…
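For reference, Group Replication already exposes membership and transaction statistics through Performance Schema tables such as the following; the 8.0.4 work described above extends this instrumentation down into GCS/XCom:

SELECT * FROM performance_schema.replication_group_members;
SELECT * FROM performance_schema.replication_group_member_stats;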

Going to Oracle MySQL Cloud via Enterprise Backup.

First of all, if you want a guide on how to deploy MySQL in Oracle Cloud, then the best place to start is here.

So, what are we going to do here?

  • Create an Oracle Storage container to keep our backups safe.
  • Run a MySQL Enterprise Backup from on-premise 5.7 MySQL (GPL or Enterprise) streaming to our Oracle Storage container.
  • Create an Oracle Cloud MySQL instance initiated from the previously backed up instance.

What data do we have:

  • An e-mail address that you signed up to Oracle MySQL Cloud Service with.
  • An IdentityDomain.
  • You might also have a Data Centre assigned, as this can be chosen when you sign up.

And now, let’s go through how to get our data into the cloud and into a running MySQL instance:

To make things simpler, let’s use curl to create the Oracle Storage container (all this can be done via the Storage console page if needed):

We need to get authenticated first, for security reasons (the Auth token lasts 30 minutes so expect to revisit this part if we take long between steps):

$ curl -v -X GET \
    -H 'X-Storage-User: Storage-mysqltest:keith.hollman@oracle.com' \
    -H 'X-Storage-Pass: password' \
    https://mysqltest.storage.oraclecloud.com/auth/v1.0
..
< HTTP/1.1 200 OK
..
< X-Auth-Token: AUTH_tk16d300f6b511956328c6a66dc2921267
..

Create a container called “5718_upload”:

curl -v -X PUT \
    -H "X-Auth-Token: AUTH_tk16d300f6b511956328c6a66dc2921267" \
    https://mysqltest.storage.oraclecloud.com/v1/Storage-mysqltest/5718_upload
..
< HTTP/1.1 201 Created
..

List contents:

$ curl -v -s -X GET \
    -H 'X-Auth-Token: AUTH_tk16d300f6b511956328c6a66dc2921267' \
    https://mysqltest.storage.oraclecloud.com/v1/Storage-mysqltest/5718_upload
..
< HTTP/1.1 204 No Content
..

 

Run an online backup (MEB version 4) of our 5.7.18 instance to cloud storage:

mysqlbackup --user=root --socket=/opt/mysql/5718/mysql_5718.sock \
  --cloud-service=openstack --cloud-container=5718_upload \
  --cloud-ca-info=/etc/ssl/certs/ca-certificates.crt \
  --cloud-object=5718_full_meb_backup.mbi \
  --cloud-user-id=Storage-mysqltest:keith.hollman@oracle.com \
  --cloud-password=password \
  --cloud-tempauth-url=https://mysqltest.storage.oraclecloud.com \
  --backup-dir=/tmp/5718backup/cloud \
  --compress=true --backup-image=- backup-to-image

(depending on your distro, the cloud-ca-info path might be different)

List contents again:

curl -v -s -X GET \
  -H 'X-Auth-Token: AUTH_tk16d300f6b511956328c6a66dc2921267' \
  https://mysqltest.storage.oraclecloud.com/v1/Storage-mysqltest/5718_upload
..
5718_full_meb_backup.mbi
5718_full_meb_backup.mbi_part_1

 

Now to create the cloud instance:

http://cloud.oracle.com -> “Sign In”

My account is what’s known as an Oracle Cloud Infrastructure Classic (OCI Classic) account. But it all depends on when you signed up, what other cloud services you may have, etc.
All the following info you will get in an e-mail when you sign up for Oracle Cloud:

select account type -> “Traditional Cloud Account”
select Data Center -> “US Commercial 2 (us2)”
click on the “My Services” button.

Now to enter some specific info:

Enter your Identity Domain -> “mysqltest”
click on “Go”

User Name -> “keith.hollman@oracle.com”
Password -> “password”
click on “Sign in”

Now you’ll see the Dashboard with all your cloud resources available.

There are some helpful rectangles / buttons “Guided Journey“, “Create Instance“, “Account Management” & “Customize Dashboard“.
Below that, you’ll see a summary of your “Cloud Services“.
Then, beneath that, you’ll see the services themselves, i.e. “MySQL“, “Compute Classic“, etc.
If you click on the “MySQL” wording, it will take you to an overview of all your information. This can be useful, eg. for the Public IP Address needed later on for ssh.
So, either on that screen, we can click on “Open Service Console“, or on the previous screen, click on the “Action Menu” which is the little button of 4 horizontal lines, and then choose the same: “Open Service Console“.

Now, we’ll see the different options available for our “Oracle MySQL Cloud Service“.
First you’ll see the “Instances“, “Activity“, “SSH Access” & “IP Reservations“. I’ll let you look into each one of those at your own pace, as what we want to do now is go direct to the “Create Instance” button on the right.

Create New Instance” page.                              Instance – Details – Confirm

Here we have to specify the Instance Name and Region. The others should be automatically selected or are optional:
Instance Name -> “MySQL57”
Description
Notification Email
Region -> “uscom-east-1” (chosen from the options in the dropdown)
IP Network -> “No Preference” (this option appears once we’ve chosen the Region.)

click on “Next”

“Service Details”                              Instance – Details – Confirm

Here we have to specify how to create our instance from the backup we previously did.
Configuration

First, remember how large the original environment was and this will help us to choose the “Compute Shape” that we need.

Then, we need a “SSH Public Key”.

Edit -> (a window opens up, “SSH Public Key for VM Access”. Here we have different options; I’m choosing “Create New Key”.)
click on “Enter”
Once the key pair is created we can now download the resulting zip file with the private & public keys. KEEP THESE SAFE. We will be needing these.
click on “Download”
save them somewhere safe.
click on “Done”
This closes the small pop up window, and now we can carry on sizing the environment according to our needs.

MySQL Configuration
Usable Database Storage (GB) -> Set the size of the storage needed for the database instance.
Password -> (We need to provide a good password, eg. “Oracle20!8” but it doesn’t matter as we’ll be using the existing passwords for our instance)

Backup and Recovery Configuration
Cloud Storage Container -> (this should be automatically filled out with something like “https://mysqltest.storage.oraclecloud.com/v1/Storage-mysqltest/MySQLCS”. We will want to change this to: “https://mysqltest.storage.oraclecloud.com/v1/Storage-mysqltest/5718_upload”.)
Username -> “keith.hollman@oracle.com”
Password -> “password”
Create Cloud Storage Container -> UNCHECK THIS! We’ve already created ours.

Initialize Data From Backup
Create Instance from Existing Backup -> “Yes”
Backup File Cloud Storage Path -> “https://mysqltest.storage.oraclecloud.com/v1/Storage-mysqltest/5718_upload/5718_full_meb_backup.mbi”
Username -> “keith.hollman@oracle.com”
Password -> “password”

click on “Next”

                             Instance – Details – Confirm

Here we get a summary of what we’ve entered.
It has all been validated; if there’s an error somewhere, e.g. the password isn’t strong enough, the backup path is wrong, or a username doesn’t exist, then you’ll be told so.

click on “Create”

You might get a window saying “Create Service Request” with just one option saying “OK”.
This is ok.

If you try to click on “Create” again, it will fail saying that the service already exists.
You can click on “Cancel” and it will take us back to the Summary page.
We can see that for the instance we’ve just created it will say “Status: Creating service …”
Then click on the name of the instance you gave, in my case “MySQL57”.
On the next page we’ll see the “Instance Overview” page.
Now click on the small arrow in the box under “In-Progress Operation Messages” to view the creation log.

Now, we have to get the IP address from the instance overview.

Connecting & confirming the instance has been restored

Let’s see what’s in the cloud instance:

$ ssh -i /path/2/private/key/previously/downloaded opc@123.456.78.90

Once connected:

$ sudo su - oracle
$ mysql -uroot -p
(use the password from your backed-up instance)

And now for our data:

mysql> show databases;
mysql> select user, host from mysql.user;

It should be self-explanatory, me thinks.

You could also connect in with MySQL Workbench. Here are the instructions on how to connect and remotely administer your cloud instance.

Hope this has helped!

 

Speaking at FOSDEM

I will be speaking at FOSDEM the coming Sunday (February 4) on Histogram support in MySQL 8.0. If you are at FOSDEM, and want to learn about how you can use histograms to improve your query execution plans, visit the MySQL and Friends devroom at 11:10.
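As background, histograms in MySQL 8.0 are built with ANALYZE TABLE; for example (the table and column names here are only illustrative):

ANALYZE TABLE orders UPDATE HISTOGRAM ON status, amount WITH 64 BUCKETS;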

Also, please, checkout the entire program for the MySQL devroom.  It is full of interesting talks.

MyRocks Engine: Things to Know Before You Start

Percona recently released Percona Server with MyRocks as GA. You can see how Facebook explains the wins they see in production with MyRocks. Now, if you use Percona repositories, you can simply install the MyRocks plugin and enable it with ps-admin --enable-rocksdb.

There are some major and minor differences when comparing it to typical InnoDB deployments, and I want to highlight them here. The first important difference is that MyRocks (based on RocksDB) uses a log-structured merge-tree (LSM tree) data structure, not a B+ tree like InnoDB.

You can learn more about the LSM engine in my article for DZone. The summary is that an LSM data structure is good for write-intensive workloads, at the expense that reads might slow down (both point reads and especially range reads) and full table scans might be too heavy for the engine. This is important to keep in mind when designing applications for MyRocks. MyRocks is not an enhanced InnoDB, nor a one-size-fits-all replacement for InnoDB. It has its own pros and cons, just like InnoDB. You need to decide which engine to use based on your application's data access patterns.

What other differences should you be aware of?
  • Let’s look at the directory layout. Right now, all tables and all databases are stored in a hidden .rocksdb directory inside mysqldir. The name and location can be changed, but still all tables from all databases are stored in just a series of .sst files. There is no per-table / per-database separation.
  • By default in Percona Server for MySQL, MyRocks will use LZ4 compression for all tables. You can change compression settings by changing the rocksdb_default_cf_options server variable. By default it is set to compression=kLZ4Compression;bottommost_compression=kLZ4Compression. We chose LZ4 compression as it provides an acceptable compression level with very little CPU overhead. Other possible compression methods are Zlib and ZSTD, or no compression at all. You can learn more about compression ratio vs. speed in Peter's and my post. To compare the data size of a MyRocks table loaded with traffic statistic data from my homebrew router, I've used the following table created for the pmacct collector:
    CREATE TABLE `acct_v9` (
      `tag` int(4) unsigned NOT NULL,
      `class_id` char(16) NOT NULL,
      `class` varchar(255) DEFAULT NULL,
      `mac_src` char(17) NOT NULL,
      `mac_dst` char(17) NOT NULL,
      `vlan` int(2) unsigned NOT NULL,
      `as_src` int(4) unsigned NOT NULL,
      `as_dst` int(4) unsigned NOT NULL,
      `ip_src` char(15) NOT NULL,
      `ip_dst` char(15) NOT NULL,
      `port_src` int(2) unsigned NOT NULL,
      `port_dst` int(2) unsigned NOT NULL,
      `tcp_flags` int(4) unsigned NOT NULL,
      `ip_proto` char(6) NOT NULL,
      `tos` int(4) unsigned NOT NULL,
      `packets` int(10) unsigned NOT NULL,
      `bytes` bigint(20) unsigned NOT NULL,
      `flows` int(10) unsigned NOT NULL,
      `stamp_inserted` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
      `id` int(11) NOT NULL AUTO_INCREMENT,
      PRIMARY KEY (`id`)
    ) ENGINE=ROCKSDB AUTO_INCREMENT=20127562
    As you can see, there are about 20mln records in this table. MyRocks (with default LZ4 compression) uses 828MB. InnoDB (uncompressed) uses 3760MB.
  • You can find very verbose information about your RocksDB instance in the LOG file located in .rocksdb directory. Check this file for more diagnostics. You can also try the SHOW ENGINE ROCKSDB STATUS command, but it is even more cryptic than SHOW ENGINE INNODB STATUS. It takes time to parse and to understand it.
  • Keep in mind that at this time MyRocks supports only READ-COMMITTED and SERIALIZABLE isolation levels. There is no REPEATABLE-READ isolation level and no gap locking like in InnoDB. In theory, RocksDB should support SNAPSHOT isolation level. However, there is no notion of SNAPSHOT isolation in MySQL so we have not implemented the special syntax to support this level. Please let us know if you would be interested in this.
  • For bulk loads, you may face problems trying to load large amounts of data into MyRocks (and unfortunately this might be the very first operation when you start playing with MyRocks, as you try LOAD DATA, INSERT INTO myrocks_table SELECT * FROM innodb_table or ALTER TABLE innodb_table ENGINE=ROCKSDB). If your table is big enough and you do not have enough memory, RocksDB crashes. As a workaround, you should set rocksdb_bulk_load=1 for the session where you load the data (see the short sketch after this list). See more on this page: https://github.com/facebook/mysql-5.6/wiki/data-loading.
  • The block cache in MyRocks is somewhat similar to innodb_buffer_pool_size; however, for MyRocks it is mainly beneficial for reads. You may want to tune the rocksdb_block_cache_size setting. Also keep in mind that MyRocks uses buffered reads by default, in which case the OS cache contains cached compressed data and the RocksDB block cache contains uncompressed data. You may keep this setup to have two levels of cache, or you can disable buffering by forcing the block cache to use direct reads with rocksdb_use_direct_reads=ON.
  • The nature of LSM trees requires that when a level becomes full, there is a merge process that pushes compacted data to the next level. This process can be quite intensive and affect user queries. It is possible to tune it to be less intensive.
  • Right now there is no hot backup software like Percona XtraBackup to perform a hot backup of MyRocks tables (we are looking into this). At this time you can use mysqldump for logical backups, or use filesystem-level snapshots like LVM or ZFS.
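As mentioned above, a minimal sketch of the bulk-load workaround looks like this (the table names are only illustrative):

SET SESSION rocksdb_bulk_load = 1;
INSERT INTO myrocks_table SELECT * FROM innodb_table;
SET SESSION rocksdb_bulk_load = 0;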

You can find more MyRocks specifics and limitations in our docs at https://www.percona.com/doc/percona-server/LATEST/myrocks/limitations.html.

We are looking for feedback on your MyRocks experience!

Percona Monitoring and Management 1.7.0 (PMM) Is Now Available

Percona announces the release of Percona Monitoring and Management (PMM) 1.7.0. PMM is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

This release features improved support for external services, which enables a PMM Server to store and display metrics for any available Prometheus exporter. For example, you could deploy the postgres_exporter and use PMM’s external services feature to store PostgreSQL metrics in PMM. Immediately, you’ll see these new metrics in the Advanced Data Exploration dashboard. Then you could leverage many of the pre-developed PostgreSQL dashboards available on Grafana.com, and with a minimal amount of edits have a working PostgreSQL dashboard in PMM! Watch for an upcoming blog post to demonstrate a walk-through of this unlocked functionality.

New Percona Monitoring and Management 1.7.0 Features
  • PMM-1949: New dashboard: MySQL Amazon Aurora Metrics.

Improvements
  • PMM-1712: Improve external exporters to let you easily add data monitoring from an arbitrary Prometheus exporter you have running on your host.
  • PMM-1510: Rename swap in and swap out labels to be more specific and help clearly see the direction of data flow for Swap In and Swap Out. The new labels are Swap In (Reads) and Swap Out (Writes) accordingly.
  • PMM-1966: Remove Grafana from a list of exporters on the dashboard to eliminate confusion with existing Grafana in the list of exporters on the current version of the dashboard.
  • PMM-1974: Add the mongodb_up in the Exporter Status dashboard. The new graph is added to maintain consistency of information about exporters. This is done based on new metrics implemented in PMM-1586.
Bug fixes
  • PMM-1967: Inconsistent formulas in Prometheus dashboards.
  • PMM-1986: Signing out with HTTP auth enabled leaves the browser signed in.

Aurora Hash Join Optimization (with a Gentle Reminder on Lab Features)

The Aurora hash join feature for relational databases has been around for a while now. But unlike the MySQL Block Nested Loop algorithm, an Aurora hash join only caters to a specific number of use cases. When implemented with the optimizer properly, it can provide great benefits with certain workloads. Below we’ll see a brief example of a quick win.

This new feature is available in Aurora lab mode version 1.16. Because this is a lab feature, it’s important to make sure to test your queries before upgrading, especially if you are looking to scale up to the new R4 instances before the Superbowl to avoid hitting the same problem I discuss below.

When lab mode is enabled and hash_join is ON, you can verify the optimizer feature from the optimizer_switch variable:

mysql> SELECT @@aurora_version, @@aurora_lab_mode, @@optimizer_switch \G
*************************** 1. row ***************************
  @@aurora_version: 1.16
 @@aurora_lab_mode: 1
@@optimizer_switch: index_merge=on,...,hash_join=on,hash_join_cost_based=on

Hash joins work well when joining large result sets because – unlike block nested loop in the same query – the optimizer scans the larger table and matches it against the hashed smaller table instead of the other way around. Consider the tables and query below:

+----------+----------+
| tbl      | rows     |
+----------+----------+
| branches |    55143 |
| users    |   103949 |
| history  | 27168887 |
+----------+----------+

EXPLAIN SELECT SQL_NO_CACHE COUNT(*)
FROM branches b
INNER JOIN users u ON (b.u_id = u.u_id)
INNER JOIN history h ON (u.u_id = h.u_id);

With hash joins enabled, we can see from the Extra column in the EXPLAIN output how it builds the join conditions:

mysql> EXPLAIN
    -> SELECT SQL_NO_CACHE COUNT(*)
    -> FROM branches b
    -> INNER JOIN users u ON (b.u_id = u.u_id)
    -> INNER JOIN history h ON (u.u_id = h.u_id);
+----+-------------+-------+-------+---------------+---------+---------+------+----------+----------------------------------------------------------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref  | rows     | Extra                                                      |
+----+-------------+-------+-------+---------------+---------+---------+------+----------+----------------------------------------------------------+
|  1 | SIMPLE      | u     | index | PRIMARY       | PRIMARY | 4       | NULL |   103342 | Using index                                                |
|  1 | SIMPLE      | h     | ALL   | NULL          | NULL    | NULL    | NULL | 24619023 | Using join buffer (Hash Join Outer table h)                |
|  1 | SIMPLE      | b     | index | user_id       | user_id | 4       | NULL |    54129 | Using index; Using join buffer (Hash Join Inner table b)   |
+----+-------------+-------+-------+---------------+---------+---------+------+----------+----------------------------------------------------------+

Without hash joins, it’s a straightforward Cartesian (almost) product of all three tables:

mysql> SET optimizer_switch='hash_join=off';
Query OK, 0 rows affected (0.02 sec)

mysql> EXPLAIN
    -> SELECT SQL_NO_CACHE COUNT(*)
    -> FROM branches b
    -> INNER JOIN users u ON (b.u_id = u.u_id)
    -> INNER JOIN history h ON (u.u_id = h.u_id);
+----+-------------+-------+--------+---------------+---------+---------+----------------+----------+-------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref            | rows     | Extra       |
+----+-------------+-------+--------+---------------+---------+---------+----------------+----------+-------------+
|  1 | SIMPLE      | h     | ALL    | NULL          | NULL    | NULL    | NULL           | 24619023 | NULL        |
|  1 | SIMPLE      | u     | eq_ref | PRIMARY       | PRIMARY | 4       | percona.h.u_id |        1 | Using index |
|  1 | SIMPLE      | b     | ref    | user_id       | user_id | 4       | percona.h.u_id |        7 | Using index |
+----+-------------+-------+--------+---------------+---------+---------+----------------+----------+-------------+

Now, the execution times without hash joins enabled:

mysql> SELECT SQL_NO_CACHE COUNT(*)
    -> FROM branches b
    -> INNER JOIN users u ON (b.u_id = u.u_id)
    -> INNER JOIN history h ON (u.u_id = h.u_id);
+-----------+
| COUNT(*)  |
+-----------+
| 128815553 |
+-----------+
1 row in set (1 min 6.95 sec)

mysql> SET optimizer_switch='hash_join=off';
Query OK, 0 rows affected (0.01 sec)

mysql> SELECT SQL_NO_CACHE COUNT(*)
    -> FROM branches b
    -> INNER JOIN users u ON (b.u_id = u.u_id)
    -> INNER JOIN history h ON (u.u_id = h.u_id);
+-----------+
| COUNT(*)  |
+-----------+
| 128815553 |
+-----------+
1 row in set (2 min 28.27 sec)

Clearly with this optimization enabled, we have more than a 50% gain from the example query.

Now while this type of query might be rare, most of us know we need to avoid really large JOINs as they are not scalable. But at some point, we find some that take advantage of the feature. Here is an excerpt from an actual production query I’ve recently worked on. It shows the good execution plan versus the one using hash joins.

This particular EXPLAIN output only differs in the row where without a hash join, it uses an index, and the query executes normally. With the hash join enabled, the optimizer thought it was better to use it instead:

...
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: t
         type: eq_ref
possible_keys: PRIMARY,r_type_id_ix,r_id_r_type_id_dt_ix
          key: PRIMARY
      key_len: 4
          ref: db.x.p_id
         rows: 1
        Extra: Using where
...
...
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: t
         type: index
possible_keys: PRIMARY,r_type_id_ix,r_id_r_type_id_dt_ix
          key: r_id_r_type_id_dt_ix
      key_len: 18
          ref: NULL
         rows: 715568233
        Extra: Using where; Using index; Using join buffer (Hash Join Inner table t)
...

Needless to say, it caused problems. Unfortunately, a bug exists in Aurora 1.16 where hash joins cannot be turned off selectively (the feature is enabled by default) from the parameter group. If you try, you get the error “Error saving: Invalid parameter value: hash_join=off for: optimizer_switch”. The only way to disable the feature is to turn off lab_mode, which requires an instance restart. An alternative is to simply issue SET optimizer_switch='hash_join=off'; from the application, especially if you rely on some of the other lab mode features in Aurora.

To summarize, the new hash join feature is a great addition. But as it’s a lab feature, be careful when upgrading!

Choosing the best indexes for MySQL query optimization

Many of our users, developers and database administrators, keep asking our team about EverSQL’s indexing recommendations algorithm.

So, we decided to write about it.

This tutorial won’t detail all the internals of the algorithm, but rather tries to lay down the most important aspects of indexing, in simple terms.
Also, and most importantly, we’ll present practical examples for properly indexing your tables and queries by relying on a set of rules, rather than on guessing.

Our focus in this tutorial is on MySQL, MariaDB and PerconaDB databases. This information may be relevant for other database vendors as well, but in some cases may not.

Which indexes should I create for my SQL query?

As a general rule of thumb, you can follow these steps for building compound indexes, when trying to optimize your SQL query:

  1. Start by listing all tables used in your query. Create a separate list for each of the subqueries in your query.
    So if you have one SELECT query with 2 SELECT subqueries inside, you should have 3 lists, one for each of them, containing the tables referenced in them.
    At the end of this process, you will potentially add a list of columns for each of these tables, in each of the query lists.
  2. The left-most columns in any of your indexes should match the columns in the query’s equality comparisons (e.g., age = 25).
    You can add several columns, as long as all of them are compared to a constant with an equality operator.
  3. Then, you should choose a single column which will be the ‘range column’. MySQL only supports one range column in each index.
    Therefore, you should look at all the comparisons with a range operator (<>, >, <, IN(), BETWEEN, LIKE) and choose the one that will filter the most rows.
    Add that column as the next column in your index for that table.
    You can get some more information here about the reasons for adding the equality columns before range columns (the slide is written by a team member of MySQL’s optimizer team).
  4. If no range columns exist in the query, you can add the columns from the GROUP BY clause.
  5. If no range columns exist in the query and no GROUP BY clause, you can add the columns from the ORDER BY clause.
  6. In some cases, it makes sense to also create a separate index that holds the ORDER BY clause’s columns, as MySQL sometimes chooses to use it. Please note though that for this to happen, the index should contain all columns from the ORDER BY clause, and they should all be specified in the ORDER BY clause in the same order (ASC / DESC). This doesn’t guarantee that the database’s optimizer will pick this index rather than the WHERE compound index, but it’s worth a try.
  7. At the end, add relevant columns from the SELECT clause, which might allow the MySQL to use the index as a covering index. A covering index is an index that contains all columns in both filtering and selection clauses of the query. Such an index allows the database to run the query solely by using the index, without having the need to access the table. In many cases this approach is significantly faster.

Let’s look at an example to clarify:

SELECT id, first_name, last_name, age from employees where first_name = 'John' AND last_name = 'Brack' and age > 25 ORDER BY age ASC;

For this query, we’ll start by adding the columns first_name and last_name, which are compared with an equality operator. Then, we’ll add the age column, which is compared with a range condition. There is no need to have the ORDER BY clause indexed here, as the age column is already in the index. Last but not least, we’ll add id from the SELECT clause to the index to make it a covering index.

So to index this query properly, you should add the index:
employees (first_name, last_name, age, id).
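Written out as an actual statement (the index name is just illustrative):

ALTER TABLE employees ADD INDEX ix_name_age_covering (first_name, last_name, age, id);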

The above is a very simplified pseudo-algorithm that will allow you to build simple indexes for rather simple SQL queries.

If you’re looking for a way to automate this process, while also adding the benefit of a proprietary indexing algorithm and query optimization, you can try out EverSQL Query Optimizer which does all the heavy lifting for you.

What not to do when indexing (or writing SQL queries)?

We gathered some of the most common mistakes we see programmers and database administrators do when writing queries and indexing their tables.

Indexing each and every column in the table separately

In most cases, MySQL won’t be able to use more than one index for each table in the query.

Therefore, when creating a separate index for each column in the table, the database is bound to perform only one of the search operations using an index, and the rest of them will be significantly slower, as the database can’t use an index to execute them.

We recommend using compound indexes (explained later in this article) rather than single-column indexes.

The OR operator in filtering conditions

Consider this query:
SELECT a, b FROM tbl WHERE a = 3 OR b = 8.

In many cases, MySQL won’t be able to use an index to apply an OR condition, and as a result, this query is not index-able.

Therefore, we recommend avoiding such OR conditions and considering splitting the query into two parts, combined with UNION DISTINCT (or, even better, UNION ALL, in case you know there won’t be any duplicate results).
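A sketch of that rewrite for the query above; each part can now use its own single-column index:

SELECT a, b FROM tbl WHERE a = 3
UNION DISTINCT
SELECT a, b FROM tbl WHERE b = 8;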

The order of columns in an index is important

Let’s say I hand you my contacts phone book which is ordered by the contact’s first name and ask you to count how many people are there named “John” in the book. You’ll grab the book in both hands and say “no problem”. You will navigate to the page that holds all names starting with John, and start counting from there.

Now, let’s say I change the assignment and hand you a phone book that is ordered by the contact’s last name, but ask you to still count all contacts with the first name “John”. How would you approach that? Well, the database scratches its head in this situation as well.

Now let’s look at an SQL query to demonstrate the same behavior with the MySQL optimizer:
SELECT first_name, last_name FROM contacts WHERE first_name = 'John';

Having the index contacts (first_name, last_name) is ideal here, because the index starts with our filtering condition and ends with another column in the SELECT clause.

But, having the reverse index contacts (last_name, first_name) is rather useless, as the database can’t use the index for filtering, as the column we need is second in the index and not first.
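Written out as statements (the index names are just illustrative):

ALTER TABLE contacts ADD INDEX ix_first_last (first_name, last_name);  -- usable: the filtered column comes first
ALTER TABLE contacts ADD INDEX ix_last_first (last_name, first_name);  -- not usable for filtering on first_name alone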

The conclusion from this example is that the order of columns in an index is rather important.

Adding redundant indexes

Indexes are magnificent when trying to optimize your SQL queries and they can improve performance significantly.

But, they come with a downside as well. Each index you create should be kept updated and in sync when changes occur in your databases. So for each INSERT / UPDATE / DELETE in your databases, all relevant indexes should be updated. This update can take some time, especially with large tables / indexes.

Therefore, do not create indexes unless you know you’ll need them.

Also, we highly recommend analyzing your database once in a while, searching for any redundant indexes that can be removed.
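On MySQL 5.7 and later, the sys schema can help with that kind of review, for example:

SELECT * FROM sys.schema_redundant_indexes;
SELECT * FROM sys.schema_unused_indexes;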

MySQL Connector/Java 8.0.9-rc has been released

Dear MySQL users,

MySQL Connector/Java 8.0.9-rc is the first Release Candidate
of the 8.0 branch of MySQL Connector/J, providing an insight into
upcoming features. It is suitable for use with MySQL Server versions
5.5, 5.6, 5.7, and 8.0. It supports the Java Database Connectivity
(JDBC) 4.2 API.

This release includes the following new features and changes, also
described in more detail on

https://dev.mysql.com/doc/relnotes/connector-j/8.0/en/news-8-0-9.html

MySQL Connectors and other MySQL client tools and applications now
synchronize the first digit of their version number with the (highest)
MySQL server version they support.
This change makes it easy and intuitive to decide which client version
to use for which server version.

As always, we recommend that you check the “CHANGES” file in the download
archive to be aware of changes in behavior that might affect your application.

To download MySQL Connector/Java 8.0.9-rc, see the “Development
Releases” tab at http://dev.mysql.com/downloads/connector/j/

Enjoy!

Changes in MySQL Connector/J 8.0.9 (2018-01-30, Release
Candidate)

Functionality Added or Changed

* X DevAPI: In the process of refining the definition of
the X DevAPI to cover the most relevant usage scenarios,
the following API components have been removed from the X
DevAPI implementation for Connector/J:

+ Components that support DDLs for views, including
the createView(), dropView(), and modifyView()
methods.

+ Components that support DDLs for tables, including
the createTable(), dropTable(), and modifyTable()
methods.

+ Components that support session configurations,
including the SessionConfig object, the
PersistenceHandler interface, the PasswordHandler
interface, and the SessionConfigManager class.

* X DevAPI: Added the setSavepoint(), rollbackTo(), and
releaseSavepoint() methods to the Session interface to
support the SAVEPOINT
(http://dev.mysql.com/doc/refman/8.0/en/savepoint.html),
ROLLBACK TO SAVEPOINT
(http://dev.mysql.com/doc/refman/8.0/en/savepoint.html),
and RELEASE SAVEPOINT
(http://dev.mysql.com/doc/refman/8.0/en/savepoint.html)
statements. See MySQL Connector/J X DevAPI Reference
(http://dev.mysql.com/doc/dev/connector-j) for more
details.

* X DevAPI: A new patch() function has been added to the
ModifyStatement interface. The function accepts a
JSON-like object describing document changes and applies
them to documents matched by the modify() filter. See
MySQL Connector/J X DevAPI Reference
(http://dev.mysql.com/doc/dev/connector-j) for more
details.

* X DevAPI: The createIndex() method for the Collection
interface now has a new syntax. See MySQL Connector/J X
DevAPI Reference
(http://dev.mysql.com/doc/dev/connector-j) for more
details.

* X DevAPI: Added the following methods for single-document
operations in the X DevAPI:

+ replaceOne()

+ addOrReplaceOne()

+ getOne()

+ removeOne()
See MySQL Connector/J X DevAPI Reference
(http://dev.mysql.com/doc/dev/connector-j) for more
details.

* X DevAPI: Setters and getters methods have been added for
the configuration properties with the MysqlDataSource,
MysqlXADataSource, and MysqlConnectionPoolDataSource
classes.

* X DevAPI: The connection property enabledTLSProtocols can
now be used to select the allowed TLS versions for an X
Protocol connection to the server.
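
For example (an illustrative fragment; the property is passed in the URI query
string, with placeholder connection details):

    // Restrict the X Protocol connection to TLSv1.2.
    Session session = new SessionFactory().getSession(
            "mysqlx://user:password@localhost:33060/test?enabledTLSProtocols=TLSv1.2");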

* Connector/J now supports the new caching_sha2_password
authentication plugin, which is the default
authentication plugin for MySQL 8.0.4 and later (see
Caching SHA-2 Pluggable Authentication
(http://dev.mysql.com/doc/refman/8.0/en/caching-sha2-pluggable-authentication.html)
for details).
Note
To authenticate accounts with the caching_sha2_password
plugin, either a secure connection to the server using
SSL
(http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-using-ssl.html)
or an unencrypted connection that supports password exchange
using an RSA key pair (enabled by setting one or both of the
connection properties allowPublicKeyRetrieval and
serverRSAPublicKeyFile) must be used.
Because earlier versions of Connector/J 8.0 do not
support the caching_sha2_password authentication plugin
and therefore will not be able to connect to accounts
that authenticate with the new plugin (which might
include the root account created by default during a new
installation of a MySQL 8.0 Server), it is highly
recommended that you upgrade now to Connector/J 8.0.9, to
help ensure that your applications continue to work
smoothly with the latest MySQL 8.0 Server.
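
As a sketch of what this means for a JDBC connection (the URL, account and
properties below are illustrative fragments, not complete programs):

    import java.sql.Connection;
    import java.sql.DriverManager;

    // Option 1: connect over SSL/TLS.
    Connection c1 = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/test?useSSL=true", "user", "password");

    // Option 2: unencrypted connection with RSA-based password exchange;
    // allowPublicKeyRetrieval fetches the server's public key, or point
    // serverRSAPublicKeyFile at a local copy of it instead.
    Connection c2 = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/test?useSSL=false&allowPublicKeyRetrieval=true",
            "user", "password");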

* Connector/J now takes advantage of the MySQL Server 8.0
data dictionary by making the connection property
useInformationSchema true by default; this makes
Connector/J, by default, access the data dictionary more
efficiently by querying tables in INFORMATION_SCHEMA.
See INFORMATION_SCHEMA and Data Dictionary Integration
(http://dev.mysql.com/doc/refman/8.0/en/data-dictionary-information-schema.html)
for details. Users can still set useInformationSchema to false,
but for MySQL 8.0.3 and later, some data dictionary queries might
then fail, due to deprecations of older data dictionary features.

* In the past, query texts were always passed as strings to
QueryInterceptor methods, even if the texts were not
actually used by them. Now, only suppliers for the texts
are passed, and the texts are only extracted by get()
calls on the suppliers.

Bugs Fixed

* The connection property nullNamePatternMatchesAll, when
set to false (which was the default value), caused some
DatabaseMetaData methods to throw an error when a null
search string was used with them. The behavior was not
compliant with the JDBC specification, which requires
that a search criterion be ignored when a null search
string is used for it. The connection property has now
been removed from Connector/J 8.0. (Bug #26846249, Bug
#87826)

* Trying to print the query in a PreparedStatement using
the toString() method after it has been closed resulted
in an exception (No operations allowed after statement
closed) being thrown. (Bug #26748909)

* When working with MySQL Server 8.0, an update or delete
statement for a CONCUR_UPDATABLE ResultSet failed when
the ResultSet’s primary keys included a boolean column
and the character set used was not latin1. (Bug
#26266731)

* Connector/J failed to recognize a server greeting error
it received during a handshake with the server and parsed
the error message as a normal greeting packet, causing an
ArrayIndexOutOfBoundsException to be thrown. (Bug
#24924097)

On Behalf of the MySQL/Oracle Release Engineering Team,
Hery Ramilison

Partial LCP in MySQL Cluster 7.6.4

Today MySQL Cluster 7.6.4 DMR is out. This new version contains some very interesting
new developments in the area of checkpointing.

When I developed the original NDB algorithms in the late 90s, the computer I had access to
had 2 CPUs and 1 GByte of memory. At the time we were aiming at 10,000 updates per second.

So with average row sizes of 100 bytes this meant that we changed 1 MByte per second.
The local checkpoint algorithm was designed to be executed about once every 5 minutes.

This meant that at high load most of the database would have changed between checkpoints,
so the checkpointing algorithm simply writes the entire database. A lot of updates are
thereby merged together in each checkpoint.

This algorithm has been used now in NDB for 20 years and it still works fine.

Now hardware is developing in a number of ways.

1) Memory is getting larger and larger. Today it is not uncommon to find machines
with TBytes of memory.

2) The ratio between available disk space and memory size is decreasing.

In the past it was not uncommon to have 100 times as much disk space as memory space.
With SSDs this factor has decreased significantly. Particularly in servers where
NDB resides, this factor has decreased to around 2-3 in many cases.

In addition, the development of persistent memory is ongoing; this is likely to cause
memory to grow by a jump of another 4x or so. This means that even tens of TBytes
in a machine are likely to become common.

When starting the development of the new recovery algorithm in NDB two years ago, the
requirement was thus to implement a new recovery algorithm that can handle
main memory sizes of at least 16 TBytes with disk sizes that are
only about 2x the memory size.

These requirements lead to the following conclusions:
1) We need to implement some form of incremental checkpoint algorithm.
2) We can only maintain one copy of the data on disk.
3) We must have the ability to use REDO logs that are much smaller than the memory size.

Currently in NDB a checkpoint is not completed until all data has been written. This
means that we must have enough disk space for checkpoints to handle at least 2x the
memory size.

During massive inserts (e.g. importing data), it is necessary to have a very large REDO log
to ensure that we don't run out of REDO log during import of a database.

These requirements are OK if there is sufficient disk space, but we wanted to make sure
that we don't rely on large amounts of disk space in the future.

In addition we wanted to reengineer the LCP algorithm to take decisions locally and not
rely on all nodes participating in each decision about LCPs. This means that we can now
perform checkpoints locally in a node during restart without affecting LCPs in other
nodes. This is particularly interesting for initial node restarts where a new node will
take a very long time to execute an LCP whereas the live nodes can perform LCPs in
a very short time.

There were two main possible implementations for incremental checkpoints.
1) Use a standard page cache implementation also for main memory.
This would mean that we store two pages on disk for each page in main memory
and write each page such that we always keep the old page until the LCP
is completed.

2) A partial LCP where a part of the rows are fully checkpointed and the rest only
checkpoints the changed rows.

I analysed the two algorithms and concluded that the standard page cache
algorithm writes far too much with small rows.

When the memory size is TBytes in size, the likelihood of one page receiving more
than one write within a checkpoint is small, thus each row change will lead to one
full page written in the LCP.

With a row size of 100 bytes and a page size of 32 kBytes this would lead to
a waste of more than 300x of the disk bandwidth.

In addition it would still require 2x the disk space.

So the choice was taken to go with the partial LCP variant. Interestingly, the
analysis of the standard page cache algorithm points to a problem for all
disk-based DBMSs: the growth to larger page caches means that more
and more disk bandwidth is spent on writing checkpoints.

So here is a short description of how the partial LCP algorithm works.

1) For each table partition we keep one or two LCP control files. Such a file
is normally 4 kBytes in size (it can be 8 kBytes in some cases).
The control file is used at recovery to know which checkpoint files to use,
and it is also used at the next LCP to know which checkpoint files to write.

2) We keep track of the number of rows in a table partition and we keep
track of the number of row changes since the last LCP was started on the
table partition. These two numbers are used to decide on how many parts
of the table partition to fully checkpoint.

If the number of changes is 0, we only write a new control file.
If there are changes we will write at least 1 part and at most a full
local checkpoint that writes all 2048 parts.

3) The table partition is divided into parts based on the row id. We use the
page part of the row id to decide which part a row belongs to.

4) The number of parts to write is decided by a set of mathematical formulas.
As it turns out there is an interesting relation here:
if we write fewer parts fully, the work at recovery increases and the
total size of the checkpoints increases, but at the same time the amount
written to disk per checkpoint decreases.
With more parts written per checkpoint we increase the amount
written to disk per checkpoint, but we decrease the checkpoint size
and the amount of work at recovery.

We decided to make the choice here configurable in a new configuration
parameter called RecoveryWork. This can be set between 25 and 100 and
defaults to 50.

At 50 the checkpoint size will be around 1.6 times the data size. The
amount of checkpoint writes to do will be around 3 times the size of
the changed rows.

Setting it to 25 means that the checkpoint size will be around 1.35 times
the data size. The checkpoint writes will be around 4 times the size of
the changed rows.

Setting it to 100 means that the checkpoint size will be around 2.1 times
the data size and the checkpoint writes will be around 2 times the size
of the changed rows.

Thus there is an exponential dependency on the amount of checkpoint
writes required to achieve the minimum restart time.

We have selected a balanced approach as the default setting.

It is also possible to set EnablePartialLcp to 0. In this case we always
write full checkpoints if any row changed, which means that the checkpoint
size will be equal to the data size. In this case it isn't possible to use a
small REDO log, since the REDO log must cover all changes made while a full
LCP completes, and that time is determined by the checkpoint write speed.
In this setup the REDO log should be 2x the data size to ensure that we
survive even a large import of a database.

The above figures are calculated under the assumption that the amount written
per LCP is very small compared to the data size.
The code contains extensive comments in Backup.cpp that explain this in
even more detail.

There is an additional 12.5% in the checkpoint size because we only delete
whole files and a full checkpoint writes 8 files. In the worst case we might
have to keep a file in which only 1 part is still relevant while the rest is
no longer needed, so 1/8 of the space could be wasted. Normally this would
not be the case, but we want to ensure that the disk space stays within
limits at all times, even in the worst case.
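
To make the trade-off concrete, here is a minimal config.ini sketch (the values
shown are the defaults discussed above):

    [ndbd default]
    # Partial LCP is on by default; set EnablePartialLcp=0 to fall back to full checkpoints.
    EnablePartialLcp=1
    # 25 gives the smallest checkpoint size on disk, 100 the least checkpoint write work.
    RecoveryWork=50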




MySQL Cluster 7.6.4 is out

MySQL Cluster 7.6.4 DMR is out.

This new version contains a number of goodies.

1) The local checkpoint algorithm has been rewritten
The new checkpointing is designed to scale to DataMemory sizes of at least 16 TBytes
Checkpoints will be much faster, which decreases recovery times significantly
Table fragments that are not updated no longer need any new checkpoints written
The checkpoint size on disk is significantly decreased

2) MySQL Cluster Configurator (MCC, Auto Installer)
MCC is significantly improved. Particularly for setups where you
have external computers either on-premise or in the cloud.

3) New cloud feature
In the cloud with availability domains/zones it is possible to see a
10x difference between the latency inside an AD and the latency between
ADs. To run MySQL Cluster in a cloud region with synchronous
replication between ADs, one can now configure nodes with a
LocationDomainId. This LocationDomainId is used to
ensure that the transaction coordinator is placed in the same AD and
that we always prefer reading data from our own AD if possible.
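
A minimal config.ini sketch of the idea (host names and node ids are placeholders;
LocationDomainId is set per node):

    [ndbd]
    NodeId=1
    HostName=ndbd-a.example.com
    LocationDomainId=1

    [ndbd]
    NodeId=2
    HostName=ndbd-b.example.com
    LocationDomainId=2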

4) New ODirectSyncFlag
When using ODirect there are a number of file systems that ensure that writes
are also synched to disk. If you know that you are working in such an
environment, setting ODirectSyncFlag can improve disk write speeds by
around 2-3x. This is particularly interesting when using hard drives.

5) Change default behaviour of restart configuration
We changed the BuildIndexThreads default from 0 to 128 to improve the speed of
index rebuilds. We added a new configuration setting to specify which
CPUs can be used for index rebuilds.

We increased batch sizes (and made them configurable) to improve
performance of unique index creation and online add node and some
other algorithms.

We changed the default algorithm for initial node restart to rebuild indexes
in a special phase.

All these changes can lead to a very significant reduction in restore times.

6) Many improvements to our parallel query implementation (pushdown join,
SPJ). The improvement depends on the query, but for our special benchmark
query we have cut execution latency almost in half.

7) Parallel UNDO log applier for disk columns
The phase where we apply the UNDO log is now fully parallelised over all
LDM threads. For a scenario with 4 LDM threads we've seen a speed up of
5x for the UNDO log applier phase (this is only used to UNDO changes in
pages containing disk columns during a restart).

8) Bug fixes
We have continued our quality improvements to ensure that each new version
is even more stable compared to the previous one.

MySQL Connector/ODBC 5.3.10 has been released

MySQL Connector/ODBC 5.3.10, a new version of the ODBC driver for the MySQL database management system, has been released.

The available downloads include both a Unicode driver and an ANSI driver based on the same modern codebase. Please select the driver type you need based on the type of your application – Unicode or ANSI. Server-side prepared statements are enabled by default. It is suitable for use with any MySQL version from 5.5.

This is the fifth release of the MySQL ODBC driver conforming to the ODBC 3.8 specification. It contains implementations of key 3.8 features, including self-identification as an ODBC 3.8 driver, streaming of output parameters (supported for binary types only), and support of the SQL_ATTR_RESET_CONNECTION connection attribute (for the Unicode driver only).

Also, Connector/ODBC 5.3 introduces a GTK+-based setup library providing a GUI DSN setup dialog on some Unix-based systems, currently included in the Debian 7/8/9, EL6/OL6, EL7/OL7 (64-bit only), Fedora 24/25/26, FreeBSD 10/11, SLES 12, Ubuntu 14/16/17 packages. Other new features in the 5.3 driver are FileDSN and Bookmarks support.

The release is now available in source and binary form for a number of platforms from our download pages at

http://dev.mysql.com/downloads/connector/odbc/5.3.html

For information on installing, please see the documentation at

http://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation.html

Functionality Added or Changed

Bugs Fixed

  • Fixed an OpenRecordSet memory leak due to get_session_variable() not freeing a result for errors. (Bug #27155880, Bug #88143)
  • Calling MySQLDriverConnect with the pcbConnStrOut argument set to NULL caused an unexpected failure. (Bug #27101767, Bug #88371)
  • Connector/ODBC now compiles on MySQL 5.5. Thanks to Vadim Zeitlin for the patch. (Bug #26633971, Bug #87413)

Enjoy and thanks for the support!

MySQL Connector/Node.js 8.0.9 RC has been released

MySQL Connector/Node.js is a new Node.js driver for use with the X DevAPI. This release, v8.0.9, is the third development release of the MySQL Connector/Node.js 8.0 series.

The X DevAPI enables application developers to write code that combines the strengths of the relational and document models using a modern, NoSQL-like syntax that does not assume previous experience writing traditional SQL.

MySQL Connector/Node.js can be downloaded through npm (see https://www.npmjs.com/package/@mysql/xdevapi for details) or from https://dev.mysql.com/downloads/connector/nodejs/.

To learn more about how to write applications using the X DevAPI, see http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information about how the X DevAPI is implemented in MySQL Connector/Node.js, and its usage, see http://dev.mysql.com/doc/dev/connector-nodejs/.

Note

Please note that the X DevAPI requires MySQL Server version 5.7.12 or higher with the X Plugin enabled. For general documentation about how to get started using MySQL as a document store, see http://dev.mysql.com/doc/refman/5.7/en/document-store.html.

Functionality Added or Changed

  • Improved the UUID generation algorithm to implement the design improvements suggested in RFC 4122. Previously, the chance of duplicated values within a small time frame was too high. (Bug #26120588)
  • X DevAPI: In the process of refining the definition of the X DevAPI to cover the most relevant usage scenarios, the following API components have been removed from the X DevAPI implementation for Connector/Node.js:
    • API components that support session configurations, such as the SessionConfig and SessionConfigManager classes.
    • The mysqlx.config namespace and all methods of the namespace, save(), get(), list(), delete(), and more.
    • The createTable(), foreignKey(), dropTable(), createView(), dropView(), and alterView() methods from the Schema class.
  • The following methods were added:
    • Session.setSavePoint: accepts a name or generates one of the form connector-nodejs-{uuid}, and returns a Promise.
    • Session.releaseSavePoint: releases a specific savepoint.
    • Session.rollbackTo: rolls back to a specified savepoint.
  • The createIndex() method was added to the Collection API.

Bugs Fixed

  • The expression parser used by the CRUD API was replaced with a new implementation written in pure JavaScript. This fixes several grammar related bugs. (Bug #26729768, Bug #26636956, Bug #25036336, Bug #23148246)
  • The CollectionFind.fields() method was updated to support flexible parameters to follow standard X DevAPI conventions. (Bug #22084545)

Enjoy and thanks for the support!

Percona Server for MySQL 5.5.59-38.11 is Now Available

Percona announces the release of Percona Server for MySQL 5.5.59-38.11 on January 30, 2018. Based on MySQL 5.5.59, including all the bug fixes in it, Percona Server for MySQL 5.5.59-38.11 is now the current stable release in the 5.5 series.

Percona Server for MySQL is open-source and free. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:
  • With innodb_large_prefix set to 1, the Blackhole storage engine was incompatible with InnoDB table definitions, so adding new indexes would cause replication errors on the slave. Bug fixed #1126 (upstream #53588).
  • A GCC 7 warning fix introduced a regression in Percona Server 5.5.58-38.10 that led to a wrong SQL query being built to access the remote server when the Federated storage engine was used. Bug fixed #1134.
  • Percona Server 5.5 embedded server builds were broken. Bug fixed #2893.
  • Percona Server now uses Travis CI for additional tests. Bug fixed #3777.

Other bugs fixed: #257 and #2415.

This release contains fixes for the following CVE issues: CVE-2018-2562, CVE-2018-2622, CVE-2018-2640, CVE-2018-2665, CVE-2018-2668.

Find the release notes for Percona Server for MySQL 5.5.59-38.11 in our online documentation. Report bugs on the Jira bug tracker.
