Planet MySQL

Apply Integrity Constraint to Your MySQL Database With the Help of dbForge Studio!

Our database tools team is pleased to announce the release of our dbForge MySQL products with CHECK constraint support. We look forward to seeing our customers benefit from it in their everyday work. The fact that MySQL did not support CHECK constraint syntax caused many inconveniences for developers and DBAs. That deviation […]
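For context, a CHECK constraint declares a condition that every row must satisfy. A minimal sketch in standard SQL (table and column names are illustrative; MySQL itself only began enforcing such constraints in 8.0.16):

CREATE TABLE orders (
    id       INT PRIMARY KEY,
    quantity INT,
    price    DECIMAL(10,2),
    -- Reject any row that violates the business rule:
    CONSTRAINT chk_order CHECK (quantity > 0 AND price >= 0)
);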

Jul 30: Where is the MySQL team this week?!

This week you can find our MySQL & MySQL Community experts at the following shows:

  • Mid-Atlantic Developer Conference, Baltimore, US, August 1-2, 2019
    • Find us at the MySQL booth and come listen to David Stokes' sessions:
      • "MySQL & GIS", scheduled for Aug 1 @ 11:30 am-12:30 pm
      • "MySQL without the SQL - Oh My!", scheduled for Aug 2 @ 10:15-11:15 am
  • OpenSource Conference Kyoto, Japan, August 2-3, 2019
    • Come talk to us at our Gold booth in the expo area, and listen to the MySQL talk given by Yuki Yamazaki:
      • "MySQL Development Trends - Introducing the Latest Information on Oracle/MySQL Cloud Service", scheduled for Aug 3 @ 15:15-16:00.

Setting up an InnoDB Sandbox Cluster in SIX steps

I had not used the new InnoDB clone feature that is now part of MySQL 8.0.17, so I wanted to see how it worked. Setting up a sandbox cluster with the new shell is easy and quick. How easy and quick? Well, it takes six commands and just a few minutes.

The Steps
1. mysqlsh> dba.deploySandboxInstance(3310)
A new MySQL sandbox instance will be created on this host in
C:\Users\dstokes\MySQL\mysql-sandboxes\3310

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance: ******
Deploying new MySQL instance...

Instance localhost:3310 successfully deployed and started.
Use shell.connect('root@localhost:3310'); to connect to the instance.

 MySQL  localhost:33060+ ssl  so  JS > dba.deploySandboxInstance(3320)
A new MySQL sandbox instance will be created on this host in
C:\Users\dstokes\MySQL\mysql-sandboxes\3320

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance: ******
Deploying new MySQL instance...

Instance localhost:3320 successfully deployed and started.
Use shell.connect('root@localhost:3320'); to connect to the instance.

 2.  mysqlsh> dba.deploySandboxInstance(3330)
A new MySQL sandbox instance will be created on this host in
C:\Users\dstokes\MySQL\mysql-sandboxes\3330

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance: ******
Deploying new MySQL instance...

Instance localhost:3330 successfully deployed and started.
Use shell.connect('root@localhost:3330'); to connect to the instance.


3. mysqlsh> \c root@localhost:3310
Creating a session to 'root@localhost:3310'
Please provide the password for 'root@localhost:3310': ******
Save password for 'root@localhost:3310'? [Y]es/[N]o/Ne[v]er (default No): y
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 12
Server version: 8.0.17 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.

4. mysqlsh > var cluster = dba.createCluster('DaveTest')
A new InnoDB cluster will be created on instance 'localhost:3310'.

Validating instance at localhost:3310...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3310

Instance configuration is suitable.
Creating InnoDB cluster 'DaveTest' on 'localhost:3310'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

 5. mysqlsh > cluster.addInstance('root@localhost:3320')
Please provide the password for 'root@localhost:3320': ******
Save password for 'root@localhost:3320'? [Y]es/[N]o/Ne[v]er (default No): y

NOTE: The target instance 'localhost:3320' has not been pre-provisioned (GTID set is
empty). The Shell is unable to decide whether incremental distributed state
recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through
automatic clone provisioning, which will completely overwrite the state of
'localhost:3320' with a physical snapshot from an existing cluster member. To
use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental distributed state recovery may be safely used if you are sure
all updates ever executed in the cluster were done with GTIDs enabled, there
are no purged transactions and the new instance contains the same GTID set as
the cluster or a subset of it. To use this method by default, set the
'recoveryMethod' option to 'incremental'.


Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): c
Validating instance at localhost:3320...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320

Instance configuration is suitable.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: localhost:3320 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ============================================================    0%  In Progress
    REDO COPY  ============================================================    0%  Not Started

NOTE: localhost:3320 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3320 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 58.51 MB transferred in about 1 second (~inf TB/s)

State recovery already finished for 'localhost:3320'

The instance 'localhost:3320' was successfully added to the cluster.

 6. mysqlsh> cluster.addInstance('root@localhost:3330')
<output omitted but much like the above in step 5>

And the Sandbox Cluster is Ready! With those six steps we have a sandbox InnoDB Cluster we can use for testing.

mysqlsh > cluster.status()
{
    "clusterName": "DaveTest",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3310"
}


Now we can fire up MySQL Router and start using our sandbox cluster.
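As a rough sketch of that next step (the directory name is illustrative; see the MySQL Router manual for your platform), bootstrapping Router against the cluster looks like this:

shell> mysqlrouter --bootstrap root@localhost:3310 --directory myrouter
shell> myrouter/start.sh

Bootstrap writes a configuration pointing at the cluster metadata; by default the classic protocol is exposed read-write on port 6446 and read-only on port 6447 (on Windows, the generated start script is start.ps1).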

How to Setup a WordPress MySQL Database in the Cloud

WordPress is the largest website builder platform in the world, supporting over 34% of all websites on the internet today. MySQL is a free open source relational database management system that is leveraged across a majority of WordPress sites, and allows you to query your data such as posts, pages, images, user profiles, and more. As any WordPress developer knows, each installation requires a database in the backend, and MySQL is the database of choice for storing and retrieving your WordPress data.

In order for your WordPress website to be able to access, store, and retrieve the data in your MySQL database, it needs to be hosted online through a cloud computing service. ScaleGrid offers a convenient way to set up and configure MySQL hosting for your WordPress site. In this blog post, we'll show you how to create your MySQL deployment at ScaleGrid while setting up your new WordPress site, and how to migrate your existing WordPress deployment to a ScaleGrid MySQL deployment.

Advantages of Using ScaleGrid MySQL Hosting for Your WordPress

Managing a database is hard, as it needs continuous updating, tuning, and monitoring to ensure the performance of your website. Fortunately, there are fully managed and hosted solutions like ScaleGrid’s MySQL-as-a-Service plans which take care of all your MySQL maintenance so you can focus on developing your websites and applications. Here are some of the ways ScaleGrid can help you improve your production WordPress setup:

  1. Platform and configuration of choice

    Host MySQL on AWS, or MySQL on Azure with configurable instance sizes through the top two cloud providers in the world. We help you configure your MySQL deployment to optimize your performance based on the size of your databases.

  2. High Availability

    Make sure your website doesn’t go down with a server or datacenter crash by configuring a master-slave setup for high availability. Distribute your MySQL data across different datacenter regions, so if one goes down, you can automatically failover to the replica to stay online.

  3. Automated Backups

    Backups are copies of your data that are archived so you can access and restore them in case of a data loss event. This is a tedious process that needs to be performed regularly, but with ScaleGrid, you can automate a schedule to backup your systems as frequently as you like, and customize how long you want to keep your backups around.

  4. Monitoring & Alerts

    Keep an eye on your database performance with advanced monitoring tools and query analysis for your database workload.

  5. Scale On-Demand

    As your data size grows, you will need to scale up your MySQL deployments to ensure your performance does not suffer due to a lack in capacity. This is again a tedious process that usually involves some downtime, but at ScaleGrid, you can scale up to a larger instance size in one click with zero downtime for a master-slave deployment.

  6. Unlimited Databases

    Create and deploy an unlimited number of MySQL databases through one single cluster at ScaleGrid, and easily create new database users with custom role privileges in a few simple clicks.

How to Create a MySQL Deployment at ScaleGrid
  1. Sign up for a free 30-day trial on the ScaleGrid console.
  2. Create a new MySQL deployment in a few easy steps. Make sure to create it in a region that matches your WordPress deployment region. We support two different MySQL DBaaS plans on both AWS and Azure.

  3. Customize Your Plan

    Enter a name for your new MySQL cluster, and select the same cloud region you are using for your WordPress hosting site (or the closest available one) for the best deployment performance. Next, select the VM size, ranging from Micro at 10GB of storage up to X4XLarge at 700GB of storage, and then your MySQL version and storage engine.

  4. Replication

    Next, you can enable a master-slave configuration to ensure high availability for your MySQL database. This means that if one of your servers goes down, we will automatically fail over your primary server to a healthy and available region. By default, you can use either a 2+1 configuration (master-slave-quorum) or a 3-node setup (master-slave-slave) with either semisynchronous or asynchronous replication, and customize the regions to use for your secondary nodes.

  5. Advanced Security

    In this step, you have an option to enable SSL for your MySQL deployment to secure connections between WordPress and MySQL. We recommend enabling SSL for every production deployment. You can also enable disk encryption to further protect your deployment from unauthorized access.

  6. Firewall Rules

    In order for your WordPress server to be able to connect with your MySQL server, you must open up access by whitelisting its IP under your cluster's firewall rules. Find your WordPress server's IP address in your WordPress account under the My Sites tab > Network Admin > WP Engine > General Settings, then add it here to the IP whitelist. You can also learn more in our MySQL Firewalls documentation.

  7. Summary & Create

    Review your MySQL configurations on the last Summary page, and click Create.

How to Setup Your MySQL Database & Admin User for WordPress

While setting up your WordPress site, you will be asked to supply the hostname, database name and user credentials for your MySQL database. Here’s how you can set this up:

Create a New MySQL Database
  1. Go to your MySQL cluster list page, and select your newly created cluster.
  2. Select the ‘Databases’ tab, and click the green ‘New Database’ button.
  3. In the ‘Create a new database’ window that pops up, simply enter a name for your database, for example, ‘wordpress_database’, and click ‘Create’:
Create a New MySQL Database User
  1. Select the ‘Users‘ tab on your MySQL cluster details page, and click the ‘New User’ button.
  2. Select the new ‘wordpress_database’ that you just created from the dropdown field as the database for this user.
  3. In the ‘New User’ window that comes up, enter a name and password for the new user, for example, ‘wpuser’.
  4. Lastly, select ‘Read-Write’ as the ‘Role’ for this new user:
  5. Click ‘Create’ and your new user will be created! Make note of your database name, username, and password which you’ll need for your WordPress installation.
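For reference, the console steps above perform roughly the equivalent of the following SQL (a sketch using the example names; exactly which privileges ScaleGrid's Read-Write role grants is an assumption here):

CREATE DATABASE wordpress_database;
CREATE USER 'wpuser'@'%' IDENTIFIED BY 'your_password';
-- A Read-Write role maps approximately to full privileges on that one database:
GRANT ALL PRIVILEGES ON wordpress_database.* TO 'wpuser'@'%';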
Find Your MySQL Hostname
  1. Go to your MySQL Cluster Details page and select the ‘Overview’ tab.
  2. Find the Command Line Syntax section at the bottom of the page to see the command that can be used to connect to your MySQL deployment through the MySQL client:
  3. The server name following the ‘-h’ option is the hostname of your MySQL deployment, and in this particular case, it is ‘SG-help-1-master.devservers.scalegrid.io’.
Test Connectivity Between Your WordPress Site & MySQL Server

Before you begin configuring your WordPress installation, we recommend checking the connectivity between your WordPress server and MySQL deployment. In order to check this, you will need a MySQL client on your server. Follow the instructions here to download and install a MySQL client if you don't already have one installed. Execute the following from the command line of your WordPress server and make sure you are able to connect:

mysql -u wpuser -h SG-help-1-master.devservers.scalegrid.io -p<Your Password>

If you’ve enabled SSL for your MySQL server, make sure that you see the ‘Cipher in use’ output for the SSL field as shown below:

mysql> status;
--------------
mysql  Ver 14.14 Distrib 5.7.15, for Win64 (x86_64)

Connection id:          79854
Current database:
Current user:           wpuser@xxxx
SSL:                    Cipher in use is DHE-RSA-AES256-SHA
Using delimiter:        ;
Server version:         5.7.21-log MySQL Community Server (GPL)
Protocol version:       10
--------------

Configuring Your WordPress Installation to Use MySQL at ScaleGrid

During your WordPress installation, you will be asked to specify your MySQL database details. Now that you’ve created a new MySQL deployment, database, user, and found your hostname, you can enter these details here to connect:

Once you submit, your WordPress installation should go through and be able to connect with your MySQL deployment hosted on ScaleGrid.

Additional Step For SSL-Enabled MySQL Deployments

If you have enabled SSL for your MySQL deployment, then your WordPress installation requires an additional setting that cannot be specified through the UI above. In such cases, before installing WordPress, you will have to edit a file called wp-config-sample.php and save it as wp-config.php.

You can follow detailed instructions here on editing your wp-config.php file. The sample below shows the MySQL section in the wp-config.php file, and the last line in this section indicates that MySQL has been configured with SSL:
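A minimal sketch of that section, using the values from the steps above (the last line is the standard WordPress constant for enabling SSL on the MySQL connection):

// ** MySQL settings ** //
define( 'DB_NAME', 'wordpress_database' );
define( 'DB_USER', 'wpuser' );
define( 'DB_PASSWORD', 'your_password' );
define( 'DB_HOST', 'SG-help-1-master.devservers.scalegrid.io' );
// This last line tells WordPress to connect to MySQL over SSL:
define( 'MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL );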

Once you complete the installation, you will see that your WordPress site is up and running using ScaleGrid’s MySQL hosting as its database!

In the next part, I will talk about migrating from your existing database to ScaleGrid’s MySQL hosting for your existing WordPress site. Stay tuned!

MySQL Connector/J 5.1.48 GA has been released

Dear MySQL Users,

MySQL Connector/J 5.1.48, a maintenance release of the production 5.1 branch, has been released. Connector/J is the Type-IV pure-Java JDBC driver for MySQL.

MySQL Connector/J is available in source and binary form from the Connector/J download pages at http://dev.mysql.com/downloads/connector/j/5.1.html and mirror sites, as well as Maven repositories.
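For Maven users, the driver ships under its long-standing coordinates; a typical dependency entry for this release would be the following (shown as a convenience; verify against your repository):

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.48</version>
</dependency>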

MySQL Connector/J (Commercial) is available for download on the My Oracle Support (MOS) website. This release will shortly be available on eDelivery (OSDC).

As always, we recommend that you check the “CHANGES” file in the download archive to be aware of changes in behavior that might affect your application.

MySQL Connector/J 5.1.48 includes the following general bug fixes and improvements, also available in more detail on https://dev.mysql.com/doc/relnotes/connector-j/5.1/en/news-5-1-48.html

Changes in MySQL Connector/J 5.1.48 (2019-07-29, General Availability)

   Version 5.1.48 is a maintenance release of the production 5.1 branch. It is suitable for use with MySQL Server versions 5.6, 5.7, and 8.0. It supports the Java Database Connectivity (JDBC) 4.2 API.

Functionality Added or Changed

     * Important Change: For MySQL Server 8.0.14 and later, 5.7.25 and later, 5.6.43 and later, and 5.5.63 and later, minimal permissions on named pipes are granted to clients that use them to connect to the server. Connector/J, however, can only use named pipes when granted full access on them. As a workaround, the MySQL Server that Connector/J wants to connect to must be started with the system variable named_pipe_full_access_group; see the description of the system variable for more details, and the configuration sketch after this list. (Bug #28971500)

     * The collation utf8mb4_zh_0900_as_cs has been added to the CharsetMapping class. (Bug #29244101)

     * The list of MySQL Server reserved words, stored as the MYSQL_KEYWORDS string in the DatabaseMetaData class, has been updated. (Bug #28268619, Bug #91435)

     * A number of private parameters of ProfilerEvents (for example, hostname) had no getters for accessing them from outside of the class instance. Getter methods have now been added for all the parameters of the class. (Bug #20010454, Bug #74690)

     * Added support for the utf8mb4_0900_bin collation, which is now supported by MySQL Server 8.0.17.
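As referenced in the first item above, a server-side sketch of the named-pipe workaround on Windows could look like this in my.ini (the group value *everyone* is one of the documented accepted values; use a dedicated Windows group for anything beyond testing):

[mysqld]
# Enable connections over a named pipe:
named_pipe=ON
# Grant full access on the pipe to members of this Windows group,
# which Connector/J needs in order to use named-pipe connections:
named_pipe_full_access_group=*everyone*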

Bugs Fixed

     * When a CallableStatement was used to call a stored procedure or function that did not exist on the server, or whose parameter metadata the client had no rights to access, Connector/J tried to infer the parameter metadata from the SQL call string itself, but did so wrongly. This was because Connector/J did not distinguish between a stored procedure and a stored function in its inference; this fix makes Connector/J do so. (Bug #29907618)

     * ResultSetMetaData.getTableName() returned null when no applicable results could be returned for a column. However, the JDBC documentation specifies an empty string to be returned in that case. This fix makes the method behave as documented. The same correction has been made for getCatalogName() and getSchemaName(). (Bug #29452669, Bug #94585)

     * Because the SHOW PROCESSLIST statement (https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html) might sometimes cause the server to fail, Connector/J now avoids using the statement and instead queries the Performance Schema for the information it needs. (Bug #29329326)

     * Enabling logSlowQueries resulted in many unnecessary calls of LogUtils.findCallingClassAndMethod(). With this fix, LogUtils.findCallingClassAndMethod() is called only when profileSQL is true, and even in that case the number of calls is reduced to a minimum to avoid the excessive stack trace data the function used to generate. Thanks to Florian Agsteiner for contributing to the fix. (Bug #29277648, Bug #94101, Bug #17640628, Bug #70677)

     * For an SSL connection, after a client disconnected from a server by calling Connection.close(), the TCP connection remained in the TIME_WAIT state (https://dev.mysql.com/doc/refman/8.0/en/can-not-connect-to-server.html) on the server side. With this fix, the connection remains in the TIME_WAIT state on the client side instead, in most cases. (Bug #29054329, Bug #93590)

     * A memory leak occurred if Connector/J was loaded via the bootstrap class path instead of the main application class path. This was because AbandonedConnectionCleanupThread failed to initialize its internal thread in that case, so that references for closed connections were not cleaned up and their number kept growing. This fix repairs the cleanup process for closed connections and also makes the process thread-safe. (Bug #28747636, Bug #92508)

     * PreparedStatement.setTimestamp threw a NullPointerException if getParameterMetaData() was called before the statement was executed. This fix adds the missing null checks to getParameterMetaData() to avoid the exception. (Bug #28529781, Bug #92089)

     * Setting the connection property characterEncoding to an encoding that maps to the MySQL character set latin1 or utf8mb4 did not result in the corresponding default connection collation (latin1_swedish_ci or utf8mb4_0900_ai_ci, respectively) being used on the server. With this fix, the server default is used in that situation. (Bug #28207422)

     * Characters returned in a ResultSet were garbled when a server-side PreparedStatement was used and the query involved concatenation of a number and a string with multi-byte characters. This was due to an issue with the number-to-string conversion involved, which has been corrected by this fix. (Bug #27453692)

     * ResultSet.updateRow() failed with a MySQLSyntaxErrorException when the server was in NO_BACKSLASH_ESCAPES SQL mode and some but not all text columns were updated. This was because Connector/J sent the strings as hex literals in that case and added unnecessary quotes for escapes. This fix adds checks to avoid quotes being added unnecessarily. (Bug #22850444, Bug #80441)

     * PreparedStatement.executeUpdate() failed with a MySQLSyntaxErrorException when the server was in NO_BACKSLASH_ESCAPES SQL mode and BLOB data types were involved in the statement. This was because Connector/J did not escape single quotes in BLOB values properly in NO_BACKSLASH_ESCAPES mode; the problem has been corrected by this fix. (Bug #20913289)

     * When a procedure and a function with the same name existed in a database, Connector/J retrieved parameter information from both at the same time by default, causing various errors at statement execution. This fix leverages the JDBC 4 DatabaseMetaData methods to return parameter information only from the procedure or function of interest, so that statements execute without errors. (Bug #19531305, Bug #73774)

     * Calling ProfilerEvent.pack() resulted in an ArrayIndexOutOfBoundsException. This was due to a mishandling of data types, which has been corrected by this fix. (Bug #11750577, Bug #41172)

On Behalf of Oracle/MySQL Release Engineering Team,
Hery Ramilison

MySQL Connector/NET 6.10.9 has been released

Dear MySQL users,

MySQL Connector/NET 6.10.9 is the fifth GA release, with .NET Core now supporting various connection-string options and MySQL 8.0 server features.

MySQL Connector/NET 6.10.9 is the final release in the 6.10 series.

To download MySQL Connector/NET 6.10.9 GA, see the “Generally Available
(GA) Releases” tab at
http://dev.mysql.com/downloads/connector/net/

Changes in MySQL Connector/NET 6.10.9 (2019-07-29, General Availability)


     * Functionality Added or Changed

     * Bugs Fixed

Functionality Added or Changed


     * Connector/NET now supports IPv6 connections made using the classic MySQL protocol when the operating system on the server host also supports IPv6. (Bug #29682333)

     * Support was added for .NET Core 2.2, which is a
       cross-platform version of .NET for building applications
       that run on Linux, macOS and Windows (see Connector/NET
       Versions
       (https://dev.mysql.com/doc/connector-net/en/connector-net-versions.html)).

Bugs Fixed


     * The InvariantCulture property was missing from some data
       types, which created issues during platform migration
       operations. Thanks to Effy Teva for the patch. (Bug
       #29262195, Bug #94045)

     * Connector/NET connections executed SHOW VARIABLES
       unnecessarily. (Bug #28928543, Bug #93202)

     * An exception was generated when the MySqlDbType
       enumeration was given an explicit value and then passed
       as a parameter to the MySqlCommand.Prepare method. (Bug
       #28834253, Bug #92912)

     * The CreateCommandBuilder and CreateDataAdapter methods
       were added to MySqlClientFactory class. Thanks to Cédric
       Luthi for the patch. (Bug #28560189, Bug #92206)

     * Validation was added to ensure that when a column is of
       type TIME and the value is 00:00:00, it takes the value
       instead of setting NULL. (Bug #28383726, Bug #91752)

     * Connector/NET returned the wrong time zone when the
       TIMESTAMP column was read from a MySQL table. (Bug
       #28156187)

     * Entity Framework Core: A syntax error was generated
       during an operation attempting to rename a table that was
       previously migrated from code. Now, the primary key
       constraint for an existing table can be dropped without
       errors when the follow-on migration operation is
       performed. (Bug #28107555, Bug #90958)

     * The implementation of DbProviderFactory prevented an
       application from connecting to MySQL in a generic way.
       Now, invoking the CreateDataAdapter method returns a
       valid adapter instead of returning the null value. (Bug
       #27292344, Bug #88660)

     * Connector/NET access to MySQL stopped working after the
       computer hosting the server was started and continued to
       operate uninterrupted for a defined period of time. (Bug
       #26930306, Bug #75604)

On Behalf of MySQL Release Engineering Team,
Surabhi Bhat

In a proxy-ed world, where do connections come from?

Overview The Skinny

Database Proxies provide a single entry point into MySQL for the calling client applications.

Proxies are wonderful tools to handle various situations like a master role switch to another node for maintenance, or for transparency with read and write connections.

However, when the time comes to perform the switch action, all of the calling clients have been funneled through the proxy, so identifying the calling host from the database itself becomes difficult.

The Problem What is going on?

Let’s illustrate how not knowing the source of a client connection can be an issue for the database administrator…

In the following diagram, three client applications connect to a Tungsten Cluster via the Connector proxy:

One of those applications slows down the whole cluster by selecting a huge amount of data (e.g., SELECT * FROM HUGE_TABLE;).

The DBA wants to correct the problem by killing the application's query (before letting the developer know what he thinks of his work ;-).

The natural action for the DBA is to call the MySQL show processlist command to find the origin host:

mysql> show processlist;
+-------+----------+--------------+-----------------+---------+------+--------------+------------------------------------------------------------------------------------------------+
| Id    | User     | Host         | db              | Command | Time | State        | Info                                                                                             |
+-------+----------+--------------+-----------------+---------+------+--------------+------------------------------------------------------------------------------------------------+
| 51    | tungsten | db1:32956    | tungsten_global | Sleep   | 5175 |              | NULL                                                                                             |
| 52    | tungsten | db1:32958    | tungsten_global | Sleep   | 0    |              | NULL                                                                                             |
| 22293 | app_user | db1:43390    | NULL            | Query   | 0    | starting     | show processlist                                                                                 |
| 22517 | app_user | proxy1:44092 | test            | Query   | 0    | query end    | update ta3 set value = 439, changed = '2019-07-24 16:28:05.975' where k1 = 100 and k2 = 80000    |
| 22518 | app_user | proxy1:44096 | test            | Query   | 0    | Sending data | select * from ta3 limit 100                                                                      |
| 22522 | app_user | proxy1:43640 | test            | Query   | 0    | query end    | update ta3 set value = 254, changed = '2019-07-24 16:28:05.975' where k1 = 71 and k2 = 19000     |
| 22526 | app_user | proxy1:44122 | test            | Query   | 0    | query end    | update ta3 set value = 836, changed = '2019-07-24 16:28:05.973' where k1 = 27 and k2 = 95000     |
| 22527 | app_user | proxy1:44128 | test            | Query   | 0    | updating     | update ta3 set value = 39, changed = '2019-07-24 16:28:05.971' where k1 = 97 and k2 = 34000      |
| 22549 | app_user | proxy1:43670 | test            | Query   | 0    | query end    | select * from ta3 limit 50                                                                       |
| 22550 | app_user | proxy1:43672 | test            | Query   | 0    | Sending data | select * from ta3                                                                                |
+-------+----------+--------------+-----------------+---------+------+--------------+------------------------------------------------------------------------------------------------+
10 rows in set (0.00 sec)

Fail!

The host is proxy1, and there is no way to know the real calling client origin host from this output.

The Solution Quick and Easy, Just Add Tungsten!

Luckily, our smart DBA uses a Tungsten Cluster. Through the Connector proxy, the DBA will be able to run tungsten show processlist; from the MySQL client command prompt:

mysql> tungsten show processlist;
+------------+-------+----------+------------+------+---------+------+--------------+------------------------------------------------------------------------------------------------------+
| DataSource | Id    | User     | Host       | db   | Command | Time | State        | Info                                                                                                   |
+------------+-------+----------+------------+------+---------+------+--------------+------------------------------------------------------------------------------------------------------+
| db2        | 23405 | app_user | app5:55322 | test | Query   | 0    | Sending data | select c.* from ta3 c join ta1 a on c.k1 = a.k1 join ta2 b on c.k2 = b.k2 where a.value between 512    |
| db1        | 23382 | app_user | app3:39364 | test | Query   | 0    | query end    | update ta3 set value = 479, changed = '2019-07-24 15:54:45.598' where k1 = 26 and k2 = 80000           |
| db3        | 22518 | app_user | app1:44096 | test | Query   | 0    | Sending data | select * from ta3 limit 100                                                                            |
| db1        | 23383 | app_user | app2:42228 | test | Query   | 0    | update       | insert into ta3 values (257, 40000, '2019-07-24 15:54:45.792', '2019-07-24 15:54:45.792', 907)         |
| db2        | 23312 | app_user | app3:39292 | test | Query   | 0    | Sending data | select c.* from ta3 c join ta1 a on c.k1 = a.k1 join ta2 b on c.k2 = b.k2 where a.value between 356    |
| db3        | 23420 | app_user | app5:55356 | test | Query   | 0    | Sending data | select c.* from ta3 c join ta1 a on c.k1 = a.k1 join ta2 b on c.k2 = b.k2 where a.value between 477    |
| db3        | 23375 | app_user | app3:39352 | test | Query   | 0    | Sending data | select c.* from ta3 c join ta1 a on c.k1 = a.k1 join ta2 b on c.k2 = b.k2 where a.value between 595    |
| db3        | 22550 | app_user | app2:42166 | test | Query   | 0    | Sending data | select * from ta3                                                                                      |
| db1        | 23413 | app_user | app5:55342 | test | Query   | 0    | updating     | update ta3 set value = 324, changed = '2019-07-24 15:54:46.315' where k1 = 137 and k2 = 96000          |
+------------+-------+----------+------------+------+---------+------+--------------+------------------------------------------------------------------------------------------------------+
9 rows in set (0.44 sec)

Bingo!

The Tungsten Connector has post-processed the request and replaced its own hostname with the real, origin hostname.

Oh, this “DataSource” entry? It will show on which database host the connection actually ended up…
This tool is available in proxy mode (which includes smart-scale and host-or-port-based read-write splitting) thanks to the Tungsten Connector SQL parser.

For those who use Bridge mode (the default as of v5.0.0), we developed an external tool that can be called from the command line:

$ ./tungsten-connector/bin/connector client-list
Executing Tungsten Connector Service --client-list ...
+--------------------+-------------------+--------------------+-----------------------+
| Client             | Connector Inbound | Connector Outbound | Data Source           |
+--------------------+-------------------+--------------------+-----------------------+
| /10.10.1.218:33084 | /10.10.1.212:3306 | /10.10.1.212:45074 | db1/10.10.1.212:13306 |
| /10.10.1.242:48830 | /10.10.1.212:3306 | /10.10.1.212:44918 | db2/10.10.1.213:13306 |
| /10.10.1.218:33132 | /10.10.1.212:3306 | /10.10.1.212:45146 | db2/10.10.1.213:13306 |
| /10.10.1.242:48844 | /10.10.1.212:3306 | /10.10.1.212:44928 | db3/10.10.1.214:13306 |
| /10.10.1.218:33124 | /10.10.1.212:3306 | /10.10.1.212:45134 | db2/10.10.1.213:13306 |
| /10.10.1.242:48802 | /10.10.1.212:3306 | /10.10.1.212:44906 | db1/10.10.1.212:13306 |
| /10.10.1.218:33130 | /10.10.1.212:3306 | /10.10.1.212:45144 | db1/10.10.1.212:13306 |
| /10.10.1.200:56626 | /10.10.1.212:3306 | /10.10.1.212:45108 | db3/10.10.1.214:13306 |
| /10.10.1.242:48806 | /10.10.1.212:3306 | /10.10.1.212:44910 | db4/10.10.1.215:13306 |
| /10.10.1.200:56542 | /10.10.1.212:3306 | /10.10.1.212:45064 | db1/10.10.1.212:13306 |
+--------------------+-------------------+--------------------+-----------------------+
Done Tungsten Connector Service --client-list

Match the offending request with the host/port pair and you're back to the origin application host!

The Library Please read the docs!

For more information about using various Tungsten Connector-based inline commands, please visit the docs page at http://docs.continuent.com/tungsten-clustering-6.0/connector-inline.html

For more information about Tungsten Clustering, please visit https://docs.continuent.com.

Summary The Wrap-Up

In this blog post we discussed how to accurately track end-to-end client connections through a proxy using the Tungsten Connector in a Tungsten Cluster.

Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us.

Shinguz: FromDual Backup and Recovery Manager for MariaDB and MySQL 2.2.0 has been released

FromDual has the pleasure to announce the release of the new version 2.2.0 of its popular Backup and Recovery Manager for MariaDB and MySQL (brman).

The new FromDual Backup and Recovery Manager can be downloaded from here. The FromDual repositories were updated. How to install and use the Backup and Recovery Manager is described in the FromDual Backup and Recovery Manager (brman) installation guide.

In the unlikely case that you find a bug in the FromDual Backup and Recovery Manager, please report it to the FromDual bug tracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 2.x to 2.2.0

shell> cd ${HOME}/product
shell> tar xf /download/brman-2.2.0.tar.gz
shell> rm -f brman
shell> ln -s brman-2.2.0 brman
Changes in FromDual Backup and Recovery Manager 2.2.0

This release starts a new major release series. It contains some new features. We have tried to maintain backward compatibility with the 1.2, 2.0, and 2.1 release series, but you should still test the new release thoroughly!

You can verify your current FromDual Backup Manager version with the following command:

shell> fromdual_bman --version
shell> bman --version
shell> rman --version
General
  • Automated testing improved.
  • Packaging optimized. Especially unnecessary dependencies removed.
  • Code redundancy removed, made code more readable and partially code reorganized.

FromDual Backup Manager
  • Requirement checks improved. Warnings no longer spoil the return code (rc).
  • Bug with --pass-through and --per-schema fixed (MGB).
  • Backup size is now correctly reported to fpmmm. And message is written to the log.
  • Timestamp format was wrong in start backup tag.
  • Globals removed from function doPrivilegeBackup.
  • Columns of table mysql.user quoted for MariaDB 10.4 compatibility.
  • Test on fpmmm-cache-file fixed from dir to file.
  • Some bugs in function binlogBackup simulation fixed.
  • Too verbose debugging disabled.
  • SFTP archiving did not fail or complain if the target directory does not exist.
  • Better input validation for option --archivedestination.
  • Function checkArguments renamed to checkBmanOptions.
  • Wrong error message for servers without binary log removed.
  • Skip gathering of binlog file and pos in case of binary log was not enabled.

FromDual Recovery Manager
  • Missing --stop-instance option added to usage.
  • PHP error when the --policy option was omitted fixed for logical and physical restore; this allows us to restore from a conventional backup infrastructure as well.
  • Tables slave_master_info and slave_relay_log_info in mysql schema are omitted in restore progress output.
  • Error message for missing --backup-name option improved.
  • Error handling and messages improved as preparation for PiTR.
  • Binlog restore type removed and check made accordingly that it is not possible any more.
  • PiTR pre Recover test implemented.
  • Various tests for test automation improved.
  • Binlog position discovery possible also for compressed files.
  • Execution time information added.
  • Option --debug added to rman.
  • Point-in-Time-Recovery for logical and physical backup implemented.
  • Instance starting for physical backup was implemented.
  • Option --simulate was implemented everywhere.
  • Option --disable-log-bin added and error handling improved.
  • Option --config activated in rman.
  • Fast PiTR is implemented.
  • Only full physical restore is supported atm. This was not caught correctly.
  • Schema PiTR also possible.

FromDual brman Catalog
  • No changes.

Subscriptions for commercial use of the FromDual Backup and Recovery Manager can be obtained from us.


SQL Stored Procedure Example | Stored Procedure In SQL Tutorial

SQL Stored Procedure Example | Stored Procedure In SQL Tutorial is today's topic. A stored procedure is prepared SQL code that is saved so it can be reused over and over again, which saves time. This code is stored in the database and helps to get the required output by passing parameters. Stored procedures are used in PL/SQL or SQL Server. PL/SQL is a block-structured query language that helps developers combine the power of SQL with procedural statements.

SQL Stored Procedure

A stored procedure is the prepared SQL code that you can save so that the code can be reused over and over again.

So if you have the SQL query that you write over and over again, save it as the stored procedure, and then call it to execute it.

You can also pass parameters to a stored procedure so that the stored procedure can act based on the parameter value(s) that is passed.

The procedure can be a function or a method that can be invoked through triggers, applications in java or PHP, etc.

Syntax:

CREATE PROCEDURE <procedure_name>
AS
BEGIN
  <SQL STATEMENT>
END
GO

Let’s understand the above syntax with a proper example:

EXAMPLE:

Consider table CUSTOMERS:

ID  NAME    AGE  ADDRESS    SALARY
1   Tom     21   Kolkata    500
2   Karan   22   Allahabad  600
3   Hardik  23   Dhanbad    700
4   Komal   24   Mumbai     800

 

QUERY:

CREATE PROCEDURE Sample
AS
SELECT * FROM Testdb.Customers
GO

OUTPUT:

ID  NAME    AGE  ADDRESS    SALARY
1   Tom     21   Kolkata    500
2   Karan   22   Allahabad  600
3   Hardik  23   Dhanbad    700
4   Komal   24   Mumbai     800

 

So, here we have created a procedure named Sample, which displays the details of the Customers table stored in the Testdb database.

#MODIFYING THE EXISTING PROCEDURE

We can modify the procedure using the ALTER statement.

#SYNTAX

ALTER PROCEDURE <procedure_name>
AS
BEGIN
  <SQL STATEMENT>
END
GO

QUERY:

ALTER PROCEDURE Sample
AS
SELECT * FROM Testdb.New_Customers
GO

So, here, the procedure no longer displays the data of the Customers table; instead, it displays the data of the New_Customers table, which was previously created in the Testdb database.

#DROPPING A PROCEDURE:

The created procedure can be dropped using the DROP keyword.

SYNTAX:

DROP PROCEDURE procedure_name;

QUERY:

DROP PROCEDURE Sample;

So, here, the procedure named Sample, which we created above, is dropped.
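Note that the examples above use SQL Server (Transact-SQL) syntax. Since many readers here run MySQL, the same Sample procedure in MySQL syntax would look roughly like this (a sketch; the DELIMITER change stops the client from splitting the body on the embedded semicolon):

DELIMITER //
CREATE PROCEDURE Sample()
BEGIN
  SELECT * FROM Customers;
END //
DELIMITER ;

-- MySQL invokes procedures with CALL rather than EXEC:
CALL Sample();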

Next, let's look at stored procedures with parameters.

#Stored Procedure With One Parameter

The following SQL statement creates a stored procedure that selects Customers from a particular City from the “Customers” table:

CREATE PROCEDURE SelectAllCustomers @City nvarchar(30)
AS
SELECT * FROM Customers WHERE City = @City
GO

Execute the stored procedure above as follows:

EXEC SelectAllCustomers @City = 'Marbella';

#Stored Procedure With Multiple Parameters

Setting up multiple parameters is very easy. Just list each parameter and its data type, separated by commas, as shown below.

The following SQL statement creates a stored procedure that selects Customers from a particular City with a specific PostalCode from the “Customers” table:

CREATE PROCEDURE SelectAllCustomers @City nvarchar(30), @PostalCode nvarchar(10)
AS
SELECT * FROM Customers WHERE City = @City AND PostalCode = @PostalCode
GO

Execute the stored procedure above as follows.

EXEC SelectAllCustomers @City = 'Perth', @PostalCode = '6021';

#Advantages of Stored Procedure
  1. They result in improved application performance. Whenever a procedure is called frequently in an application over a single connection, the compiled version of that procedure is used.
  2. They reduce traffic between the database and the application, as lengthy statements are already stored in the database and do not need to be sent again and again by the application.
  3. They provide code reusability, similar to functions and methods in C/C++ and Java.
#Disadvantages
  1. Debugging of the stored procedure is not supported by MySQL.
  2. Stored procedures result in a lot of memory usage.
#Conclusion

A stored procedure groups one or more Transact-SQL statements into a logical unit and is stored as an object in the database server. When a stored procedure is called for the first time, the server creates an execution plan and stores it in the plan cache.

In subsequent executions of that stored procedure, the server reuses the plan, so the stored procedure executes very fast and with reliable performance.

Finally, the Stored Procedure In SQL Tutorial is over.

The post SQL Stored Procedure Example | Stored Procedure In SQL Tutorial appeared first on AppDividend.

Addressing common challenges when scaling out MySQL horizontally

When we explore strategies to scale our MySQL-based database layer, we realize that there are many different options. As opposed to proprietary solutions, open-source products draw nourishment from their communities, and novel architectures or enhanced versions are generated, often incorporating other open-source technologies. MySQL is no exception.

Other players build completely different engines aiming to mitigate some of MySQL's well-known weaknesses while still supporting its wire protocol, with the challenge of sacrificing as little compatibility as possible.

In this white paper, we briefly discuss the scale-up vs. scale-out dilemma, and then review the different scale-out strategies. We also go over some of the most common MySQL architectures and products with horizontal scaling in mind.

Finally, we will share some guidelines to help you choose the most appropriate technology and scaling strategy.

Download the white paper today to learn more about the different options of varying complexity in the MySQL ecosystem.

Shinguz: Advanced MariaDB/MySQL Training in August 2019 in Cologne

Summer holiday season: for all those who stayed at home, things should hopefully be a bit quieter right now. Time for some further training? There is not much time left to use up this year's training budget!

From August 19 to 23, FromDual will once again run its advanced training "MariaDB und MySQL für Fortgeschrittene" (MariaDB and MySQL for Advanced Users) in Cologne. See also our training schedule.

There are still places available! You can register directly with our training partner, GFU Cyrus AG.

This MariaDB/MySQL training is aimed at all DBAs, DevOps engineers, and system administrators who look after MariaDB and MySQL databases and would like to understand how to do so even better.

In this training we cover backup, restore, and point-in-time recovery of both small and large databases, as well as setting up highly available MariaDB and MySQL databases using master/slave replication and Galera Cluster, including switch-over techniques. Finally, we spend two days on database performance tuning (hardware, O/S, DB configuration, schema tuning, SQL query tuning, etc.).

The whole course is accompanied by numerous exercises, so you can immediately put what you have learned into practice!

The training is held in German.

You can find the detailed contents of this five-day MySQL/MariaDB training here.

If you have any further questions, please get in touch with us.


MySQL Optimizer: Naughty Aberrations on Queries Combining WHERE, ORDER BY and LIMIT

Sometimes, the MySQL Optimizer chooses a wrong plan, and a query that should execute in less than 0.1 second ends up running for 12 minutes! This is not a new problem: bugs about this can be traced back to 2014, and a blog post on this subject was published in 2015. But even if this is old news, because this problem recently came yet again to my attention, and because it is still not fixed in MySQL 5.7 and 8.0, it is a subject worth writing about.

The MySQL Optimizer

Before looking at the problematic query, we have to say a few words about the optimizer. The Query Optimizer is the part of query execution that chooses the query plan. A Query Execution Plan is the way MySQL chooses to execute a specific query. It includes index choices, join types, table query order, temporary table usage, sorting type, and so on. You can get the execution plan for a specific query using the EXPLAIN command.

A Case in Question

Now that we know what the Query Optimizer and a Query Execution Plan are, I can introduce you to the table we are querying. The SHOW CREATE TABLE output for our table is below.

mysql> SHOW CREATE TABLE _test_jfg_201907\G
*************************** 1. row ***************************
       Table: _test_jfg_201907
Create Table: CREATE TABLE `_test_jfg_201907` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `str1` varchar(150) DEFAULT NULL,
  `id1` int(10) unsigned NOT NULL,
  `id2` bigint(20) unsigned DEFAULT NULL,
  `str2` varchar(255) DEFAULT NULL,
[...many more id and str fields...]
  `create_datetime` datetime NOT NULL,
  `update_datetime` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `key1` (`id1`,`id2`)
) ENGINE=InnoDB AUTO_INCREMENT=_a_big_number_ DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

And this is not a small table (it is not very big either though…):

# ls -lh _test_jfg_201907.ibd
-rw-r----- 1 mysql mysql 11G Jul 23 13:21 _test_jfg_201907.ibd

Now we are ready for the problematic query (I ran PAGER cat > /dev/null beforehand to skip printing the result):

mysql> SELECT * FROM _test_jfg_201907
  WHERE id1 = @v AND id2 IS NOT NULL
  ORDER BY id DESC LIMIT 20;
20 rows in set (27.22 sec)

Hmm, this query takes a long time (27.22 sec), considering that the table has an index on id1 and id2. Let's check the query execution plan:

mysql> EXPLAIN SELECT * FROM _test_jfg_201907
  WHERE id1 = @v AND id2 IS NOT NULL
  ORDER BY id DESC LIMIT 20\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: _test_jfg_201907
   partitions: NULL
         type: index
possible_keys: key1
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 13000
     filtered: 0.15
        Extra: Using where
1 row in set, 1 warning (0.00 sec)

What? The query is not using the index key1, but is scanning the whole table (key: PRIMARY in the EXPLAIN above)! How can this be? The short explanation is that the optimizer thinks — or should I say hopes — that scanning the whole table (which is already sorted by the id field) will find the limited rows quickly enough, and that this will avoid a sort operation. So by trying to avoid a sort, the optimizer ends up losing time scanning the table.

Some Solutions

How can we solve this? The first solution is to hint MySQL to use key1, as shown below. Now the query is almost instant, but this is not my favourite solution, because if we drop the index, or if we change its name, the query will fail.

mysql> SELECT * FROM _test_jfg_201907 USE INDEX (key1)
    WHERE id1 = @v AND id2 IS NOT NULL
    ORDER BY id DESC LIMIT 20;
20 rows in set (0.00 sec)

A more elegant, but still very hack-ish, solution is to prevent the optimizer from using an index for the ORDER BY. This can be achieved with the modified ORDER BY clause below (thanks to Shlomi Noach for suggesting this solution on a MySQL Community Chat). This is the solution I prefer so far, even if it is still somewhat of a hack.

mysql> SELECT * FROM _test_jfg_201907
    WHERE id1 = @v AND id2 IS NOT NULL
    ORDER BY (id+0) DESC LIMIT 20;
20 rows in set (0.00 sec)

A third solution is to use the Late Row Lookups trick. Even if the post about this trick is 10 years old, it is still useful — thanks to my colleague Michal Skrzypecki for bringing it to my attention. This trick basically forces the optimizer to choose the good plan, because the query is modified with the intention of making the plan explicit. This is an elegant hack, but as it makes the query more complicated to understand, I prefer not to use it.

mysql> SELECT y.* FROM (
  SELECT id FROM _test_jfg_201907
    WHERE id1 = @v AND id2 IS NOT NULL
    ORDER BY id DESC LIMIT 20) x
  JOIN _test_jfg_201907 y ON x.id = y.id
  ORDER BY y.id DESC;
20 rows in set (0.00 sec)

The ideal solution…

Well, the best solution would be to fix the bugs below. I claim Bug#74602 is not fixed even if it is marked as such in the bug system, but I will not make too much noise about this, as Bug#78612 also draws attention to this problem:

PS-4935 is a duplicate of PS-1653, which I opened a few months ago. In that report, I mention a query that takes 12 minutes because of a bad choice by the optimizer (with the good plan, the query takes less than 0.1 second).

One last thing before ending this post: I wrote above that I would give a longer explanation of the reason for this bad choice by the optimizer. Well, this longer explanation has already been written by Domas Mituzas in 2015, so I am referring you to his post on ORDER BY optimization for more details.


Photo by Jamie Street on Unsplash

The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.

The post MySQL Optimizer: Naughty Aberrations on Queries Combining WHERE, ORDER BY and LIMIT appeared first on Percona Community Blog.

MySQL 8.0.17 - New Features - Clone Plugin - some reference links


This clone plugin is very helpful for bringing a new MySQL instance to life.

It can be used to create a new replica (as in MySQL Replication), to easily add a new node to a MySQL InnoDB Cluster, to provision a new MySQL instance for testing or troubleshooting, or to create a new instance with data for snapshot queries.

Many useful links have been posted in recent weeks.

Please check the URLs below for the details. This CLONE feature is the new superstar add-on to MySQL today.

Usage :

https://lefred.be/content/mysql-innodb-cluster-from-scratch-even-more-easy-since-8-0-17/

https://lefred.be/content/create-an-asynchronous-mysql-replica-in-5-minutes/

https://mysqlandmore.blogspot.com/2019/07/mysql-8017-release-and-clone-plugin.html


https://mysqlhighavailability.com/mysql-8-0-17-replication-enhancements/

Documentation :
https://dev.mysql.com/doc/refman/8.0/en/clone-plugin.html
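As a minimal taste of the feature (host, user, and password are illustrative; see the documentation above for prerequisites such as the BACKUP_ADMIN privilege):

-- On both the donor and the recipient, load the plugin:
INSTALL PLUGIN clone SONAME 'mysql_clone.so';

-- On the recipient, allow the donor as a clone source:
SET GLOBAL clone_valid_donor_list = 'donor.example.com:3306';

-- Pull a physical snapshot of the donor over the network:
CLONE INSTANCE FROM 'clone_user'@'donor.example.com':3306 IDENTIFIED BY 'password';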

Fun with Bugs #86 - On Some Public Bugs Fixed in MySQL 5.7.27

This week, new minor releases of MySQL Server were announced. Of them all, I am mostly interested in MySQL 5.7.27. I plan to concentrate on InnoDB, replication, and optimizer bugs reported in the public MySQL bugs database and fixed in MySQL 5.7.27. As usual, I name bug reporters explicitly and give links to their remaining currently active bug reports, if any.

Let me start with InnoDB bug fixes:
  • Bug #94699 - "Mysql deadlock and bugcheck on aarch64 under stress test". This bug report with a fix for insufficient memory barriers in the rw-lock implementation was contributed by Cai Yibo.
  • Bug #93708 - "Page Cleaner will sleep for long time if clock changes". This bug caused long delays on shutdown. It was reported by Marcelo Altmann. It took some effort from the MySQL Community to have it accepted as a real bug, but now it's fixed!
  • Bug #67526 - "Duplicate key error on auto-inc PK with mixed auto_increment_increment clients". This regression bug was reported back in 2012 by Rob Lineweaver and affected all MySQL versions starting from 5.5.28. The release notes state that the fix was to revert the patch that fixed Bug #65225 - "InnoDB miscalculates auto-increment after changing auto_increment_increment", reported by Elena Stepanova. I am not sure what really happened without checking the source code, or at least running the test cases from both bugs on 5.7.27.
  • Bug #94383 - "simple ALTER cause unnecessary InnoDB index rebuilds, 5.7.23 or later 5.7 rlses". This bug was reported by Mikhail Izioumtchenko, who had contributed diagnostics patch.
  • Bug #93670 - "virtual generated column index data inconsistency". I list this bug that happens when foreign keys are involved as InnoDB one, as no other engine in MySQL 5.7.27 supports foreign keys anyway. The bug was reported by Rui Xu.
Now let me continue with replication bugs:
  • Bug #93771 - "Contribution: Add log_bin_implicit_delete setting". This report was created for the patch contributed by Daniël van Eeden (who also got public credit for this contribution in a blog post) and is now closed and documented as fixed. But only part of the original patch was used: adding a comment to the binary log. It's strange to see this happen without any comment on why the rest of the patch was not used.
  • Bug #93440 - "Noop UPDATE query is logged to binlog after read_only flag is set". The problem is actually in a new GTID being generated in this case. The bug was reported by Artem Danilov.
  • Bug #92398 - "point in time recovery using mysqlbinlog broken with temporary table -> errors". This bug was reported by Shane Bester himself. For some reason, I do not see any complete test case in the public bug report. But if you search for the private bug number (28642318) on GitHub, you can easily identify the related commit and the changes in mysql-test/extra/binlog_tests/drop_temp_table.test.
  • Bug #89987 - "super_read_only with master_info_repository=TABLE breaks replication on restart". It was reported by Clemens Weiß and fixed in 5.7.27, but it took some effort and a public comment by Jean-François Gagné to get this clarified.
  • Bug #93395 - "ALTER USER succeeds on master but fails on slave". This bug was reported by Jean-François Gagné, is still marked as "Verified", and is NOT explicitly listed in the release notes. But read this:
    "CREATE USER and ALTER USER did not check the validity of a hashed authentication string when used with IDENTIFIED WITH auth_plugin AS 'hash_string' syntax. (Bug #29395944)"
    The description of that fix sounds very similar, and according to the release notes it is in 5.7.27. How come the bug above is NOT closed and listed as at least related? It seems somebody does NOT look for duplicates, either when copying public bugs into the internal bugs database or when closing the bug as fixed. What we have as a result looks like a lack of proper documenting. As the bug reporter noted in a comment:
    "This is the 2nd attribution omission I am finding in 5.7.27 release notes (the other one was on Bug#95484). Is this a new policy not to update public bugs with fixed version and not to mention the public bugs in the release notes ?"
This situation is wrong. It looks as awkward as this "house" in Brighton:

and attracts attention. I hope Oracle engineers will add public comments to these two bugs to clarify what really happened and why.

Finally a couple of optimizer bugs were also fixed:
  • Bug #92809 - "Inconsistent ResultSet for different Execution Plans". The bug was reported by Juan Arruti. It took a lot of effort from several Percona engineers to get it verified. A complete analysis (by Yura Sorokin) identified the root cause and the fact that the problem is already fixed in MySQL 8.0.x, while MySQL 5.6.x is also affected (and seems NOT to be fixed). Good job by the MySQL Community, somewhat questionable assistance from the Oracle engineer involved.
  • Bug #90398 - "Duplicate entry for key '<group_key>' error". It was reported by Yoseph Phillips. The fix is actually a more useful error message, nothing more.
That's all I have to say about the release. To summarize:
  1. MySQL 5.7.27 includes fixes for several serious InnoDB and replication bugs, so consider upgrading.
  2. Percona engineers contribute a lot to MySQL, both in the form of bug reports and patches and by helping other community users to make their point and get their bugs fixed fast.
  3. There are things to improve in the way Oracle engineers handle bug processing (verification, checking for duplicates, proper documenting of public bug reports that are fixed).
Some items above are the same as in my summary for the previous 5.7.26 release, aren't they?

MySQL ORDER BY with Simple Examples

This tutorial explains the MySQL ORDER BY clause and its usage with both the ASC and DESC options. We’ll describe this MySQL statement with the help of simple examples. 1. ORDER BY Clause Syntax 2. ORDER BY W/O ASC or DESC 3. ORDER BY DESC 4. ORDER BY Relative Position 5. ORDER BY Two Fields ASC & DESC Let’s now go through each of the sections one by one. MySQL ORDER BY Clause The MySQL SELECT command doesn’t return a sorted result set by default. Hence, to sort the output, you have to append the ORDER BY clause to the SELECT
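Before diving into the full post, here is a minimal sketch of the clause against a hypothetical employees table (the table and column names are illustrative, not taken from the tutorial):

SELECT name, department, salary
FROM employees
ORDER BY department ASC, salary DESC;  -- A-Z by department, then highest salary first within each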

The post MySQL ORDER BY with Simple Examples appeared first on Learn Programming and Software Testing.

My first impression on Mariadb 10.4.x with Galera4

MariaDB 10.4 has been declared GA, and a few presentations on Galera4 were held at recent conferences.

So, I thought, it is time to give it a try and see what is going on.

It is no secret that I consider the Codership guys my heroes, and that I have pushed for Galera as a solid and very flexible HA solution for many years.

Given that, my first comment is that it is a shame to have Galera4 available only in MariaDB. I would have preferred to test the vanilla MySQL version for many reasons, but mainly because MySQL/Oracle is and remains the official main line of the MySQL software, like it or not, and as such I want to see how Galera4 behaves with it. Anyhow, Codership states that the other versions will be out AFTER the summer, and I hope this will be true.

To test the new version, given that I do not have vanilla MySQL with Galera4, I decided to use the other distribution, coming from Percona. In the end, the tests were done comparing MariaDB 10.4.x with PXC 5.7.x; in short, Galera4 vs Galera3.

I set up the two different packages on the same machines and configured them as closely as possible. That said, I did 2 main sets of tests: data ingest and OLTP, both running for 90 minutes, not pushing like hell but gently simulating some traffic. Configuration files can be found here.

Galera4 streaming replication was disabled, following Codership's instructions (wsrep_trx_fragment_size=0).
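For reference, a minimal sketch of that setting; the SET statement below is my illustration (the value can equally go in the [mysqld] section of the configuration file), not a line from the published config files:

-- A fragment size of 0 disables streaming replication, so transactions
-- replicate as a whole unit, matching Galera3 behaviour.
SET GLOBAL wsrep_trx_fragment_size = 0;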

Test 1: Ingest

For the ingest test I used my stresstool application (here) with only 10 threads and 50 batch inserts per thread; the schema definition is in the windmills.json file.

As always, an image says more than many words:

In general terms, PXC was able to execute the same load in less time than MariaDB.

And PXC was able to deal with a higher number of events per thread as well.

In PXC, the average Galera latency was around 9ms on the writer and around 5ms on the receivers. With the same load, same machines, and same configuration:

the latency in MariaDB was significantly higher: around 19ms for the writer, and between 5 and 9ms for the receivers.

In short, overall PXC 5.7 with Galera3 performed better than MariaDB 10.4 with Galera4.

The amount of data transmitted and received was higher (good) on PXC than on MariaDB:

PXC:

MariaDB:

OLTP

For the OLTP test I used sysbench with the OLTP read/write tests: 180 threads (90 from each of two different application nodes), 200K rows per table, 40 tables, and a 90-minute run.

Let's see what happened:

PXC performed better than MariaDB, executing more writes/s and more events/thread/sec.

Checking the latency, we can see:

Also in this case, PXC had lower average latency than MariaDB during the execution.

What about Galera?

For PXC/Galera3, the average Galera latency was around 3.5ms on the writer and less on the receivers:

In this case, the latency in Galera4 was the same as or lower than in Galera3:

Also analyzing the MAX latency:

Galera3

Galera4

We can see that Galera4 was dealing with it much better than version 3.

I have done many other checks, and it seems to me that in OLTP (though I do not exclude that this is valid for ingest as well) Galera4 is in some way penalized by the MariaDB platform.

I am just at the start of my investigation and I may be wrong, but I cannot confirm or deny this until Codership releases the code for MySQL.

Conclusions

Galera4 seems to come with some very good new features (please review Seppo's presentation), and one thing I noticed is that it comes with optimized node communication, reducing latency fluctuation.

Just this, for me, is great; plus we will have streaming replication, which can be very useful, but I have not tested it yet.

Nevertheless, I would not move to it just yet. I would wait for the other MySQL distributions to be supported, do some tests, and see where the performance problem is.

At the moment, even with a light load, the current version of PXC 5.7/Galera3 runs better than MariaDB/Galera4, so why should I migrate to a platform that locks me in, like MariaDB, and does not give me any benefit (yet)? Especially considering that once Galera4 is available for the standard MySQL versions, we can have all the good things coming from Galera4 without being locked in by MariaDB.

A small note about MariaDB: while I was playing with it, I noticed that by default MariaDB comes with the plugin maturity level set to BETA, which means you could potentially run code that is still in beta stage in production. No comment!
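If you want to check or tighten this on your own installation, a quick sketch using MariaDB's plugin_maturity variable (raising it requires a restart, since it is a startup option):

-- Show the minimum maturity level the server accepts for plugins:
SHOW GLOBAL VARIABLES LIKE 'plugin_maturity';
-- To allow only stable plugins, start the server with:
--   --plugin-maturity=stable   (or plugin_maturity=stable in my.cnf)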

References

https://github.com/Tusamarco/blogs/tree/master/Galera4

https://galeracluster.com/

Seppo presentation https://www.slideshare.net/SakariKeskitalo/galera-cluster-4-presentation-in-percona-live-austin-2019

https://mariadb.org/mariadb-10-4-4-now-available/

MySQL Data Types Explained

This tutorial explains all MySQL data types, their characteristics, and their min, max, and possible default values. We’ll describe their usage so that you can use them efficiently when creating schemas and tables. A MySQL table can have one or more fields with specific data types, such as a string or date. However, there are more available in MySQL to ease your job of collecting and storing data. It is also crucial that you understand which data type you should use and when. Here are some standard goals that define what they represent: 1. The data,
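To make that concrete, here is a minimal sketch of a table mixing several common data types; the table and column names are hypothetical, not taken from the tutorial itself:

CREATE TABLE product (
  id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,  -- whole number
  name     VARCHAR(100) NOT NULL,                    -- variable-length string
  price    DECIMAL(8,2) DEFAULT 0.00,                -- exact fixed-point number
  released DATE,                                     -- calendar date
  details  TEXT                                      -- long free-form text
);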

The post MySQL Data Types Explained appeared first on Learn Programming and Software Testing.

MySQL TIMESTAMP with Simple Examples

This tutorial explains MySQL TIMESTAMP and TIMESTAMP field characteristics such as automated initialization and updating. We’ll describe their usage with the help of simple examples. 1. TIMESTAMP Syntax 2. TIMESTAMP Simple Examples 3. Set Timezone and Use Timestamp 4. Auto Init and Update Timestamp Let’s now go through each of the sections one by one. MySQL TIMESTAMP The MySQL TIMESTAMP is a temporal data type that contains a mixture of date and time. It is exactly 19 characters long. The structure of a TIMESTAMP field is as follows: Syntax # MySQL Timestamp YYYY-MM-DD HH:MM:SS The TIMESTAMP value shows in UTC
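As a quick taste of the automated initialization and updating the tutorial covers, a minimal sketch with a hypothetical table:

CREATE TABLE notes (
  id         INT AUTO_INCREMENT PRIMARY KEY,
  body       VARCHAR(255),
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,                             -- set on INSERT
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP  -- refreshed on UPDATE
);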

The post MySQL TIMESTAMP with Simple Examples appeared first on Learn Programming and Software Testing.
