Planet MySQL

MySQL Connector/J 8.0.16 has been released

Dear MySQL users,

MySQL Connector/J Version 8.0.16 is the GA release of the 8.0
branch of MySQL Connector/J. It is suitable for use with MySQL Server
versions 8.0, 5.7 and 5.6. It supports the Java Database
Connectivity (JDBC) 4.2 API, and implements the X DevAPI.

This release includes the following new features and changes, also
described in more detail on

https://dev.mysql.com/doc/relnotes/connector-j/8.0/en/news-8-0-16.html

As always, we recommend that you check the “CHANGES” file in the
download archive to be aware of changes in behavior that might affect
your application.

To download MySQL Connector/J 8.0.16 GA, see the “Generally Available
(GA) Releases” tab at http://dev.mysql.com/downloads/connector/j/

Enjoy!

----------------------------------------------------------------------

Changes in MySQL Connector/J 8.0.16 (2019-04-25, General Availability)

Functionality Added or Changed

* X DevAPI: Added BigInteger, BigDecimal, and Character as
supported classes whose instances can be passed to a
Table as Expression instances. Also made the error
message clearer when applications try to pass instances
of unsupported classes. (Bug #25650912)

* X DevAPI: Connector/J now supports the ability to send
connection attributes
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html)
(key-value pairs that application programs can pass to the
server at connect time) for X Protocol connections.
Connector/J defines a default set of attributes
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html#performance-schema-connection-attributes-cj),
which can be disabled or enabled. In addition,
applications can specify attributes to be
passed in addition to the default attributes. The default
behavior is to send the default attribute set. See the
description for the new configuration property
xdevapi.connect-attributes for details.
Note
The aggregate size of connection attribute data sent by a
client is limited by the value of the
performance_schema_session_connect_attrs_size server
variable. The total size of the data package should be
less than the value of the server variable, or the
attribute data will be truncated.
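
Once connected, you can check which attributes a session actually
sent by querying the Performance Schema on the server. A minimal
sketch (the tables are described on the manual page linked above):

mysql> SELECT ATTR_NAME, ATTR_VALUE
       FROM performance_schema.session_connect_attrs
       WHERE PROCESSLIST_ID = CONNECTION_ID();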

* X DevAPI: When using X DevAPI, performance for statements
that are executed repeatedly (two or more times) is
improved by using server-side prepared statements for the
second and subsequent executions. See Working with
Prepared Statements
(http://dev.mysql.com/doc/x-devapi-userguide/en/working-with-prepared-statements.html)
in the X DevAPI User Guide
(http://dev.mysql.com/doc/x-devapi-userguide/en/) for details.
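
To observe this behavior from the server side, the Performance Schema
exposes prepared statements, including those created over X Protocol.
A quick check, assuming performance_schema is enabled:

mysql> SELECT STATEMENT_NAME, SQL_TEXT, COUNT_EXECUTE
       FROM performance_schema.prepared_statements_instances;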

* The version number has been removed from the name of the
Connector/J JAR archive within the RPM packages for
Connector/J. That makes upgrading Connector/J with RPM
packages easier. (Bug #29384853)

* The collation utf8mb4_zh_0900_as_cs has been added to the
CharsetMapping class. (Bug #29244101)

* The following third-party libraries have been removed
from the distribution bundles for Connector/J:

+ Google protobuf for Java (required for using X
DevAPI and for building Connector/J from source)

+ C3P0 (required for building Connector/J from source)

+ JBoss common JDBC wrapper (required for building
Connector/J from source)

+ Simple Logging Facade API (required for using the
logging capabilities provided by the default
implementation of org.slf4j.Logger.Slf4JLogger by
Connector/J, and for building Connector/J from
source)
Users who need those libraries have to obtain them on
their own. See Installing Connector/J from a Binary
Distribution
(http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-binary-installation.html)
and Installing from Source
(http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-installing-source.html)
for details.

Bugs Fixed

* X DevAPI: The method unquoteWorkaround() has been removed
from the ExprParser class, as the workaround is no longer
needed, and it actually produced wrong results in some
cases. (Bug #29257922)

* X DevAPI: Connector/J threw an error when a JSON document
contained only a field with an empty array as its value.
With this fix, Connector/J now takes that as a valid JSON
document. (Bug #28834959, Bug #92819)

* getByte() calls failed on table columns of the
BINARY data type. This was due to issues with string
conversion, which have been corrected with this fix. (Bug
#25650385)

* X DevAPI: Any statements sent after a failed procedure
call caused Connector/J to hang. This was because after
the failed call, Connector/J was not aware that the
result streamer had already been closed by the server.
With this fix, an error is thrown when the procedure call
fails, and the result streamer is nullified. (Bug
#22038729)

* X DevAPI: Unary negative and positive operators inside
expressions were parsed wrongly as binary minus and plus
operators. (Bug #21921956)

* Because the SHOW PROCESSLIST
(http://dev.mysql.com/doc/refman/8.0/en/show-processlist.html)
statement can sometimes cause the server to fail,
Connector/J now avoids using that statement and instead
queries the Performance Schema for the information it
needs. (Bug #29329326)
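
The same information is available to any client; a rough
equivalent of SHOW PROCESSLIST using the Performance Schema
looks like this:

mysql> SELECT PROCESSLIST_ID, PROCESSLIST_USER, PROCESSLIST_HOST,
              PROCESSLIST_DB, PROCESSLIST_COMMAND, PROCESSLIST_TIME,
              PROCESSLIST_STATE, PROCESSLIST_INFO
       FROM performance_schema.threads
       WHERE PROCESSLIST_ID IS NOT NULL;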

* Some unnecessary information has been removed from the
Connector/J log. (Bug #29318273)

* In the DatabaseMetaDataUsingInfoSchema interface, the
getProcedureColumns() and getFunctionColumns() methods
returned wrong results for the PRECISION column, and the
getColumns() and getVersionColumns() methods returned
wrong results for the COLUMN_SIZE column. The errors were
due to the wrong handling of the temporal type precision
by Connector/J, which has now been fixed. (Bug #29186870)

* For an SSL connection, after a client disconnected from a
server by calling Connection.close(), the TCP connection
remained in the TIME_WAIT
(http://dev.mysql.com/doc/refman/8.0/en/can-not-connect-to-server.html)
state on the server side. With this fix,
the connection remains in the TIME_WAIT
(http://dev.mysql.com/doc/refman/8.0/en/can-not-connect-to-server.html)
state on the client side instead, in most
cases. (Bug #29054329, Bug #93590)

* The function
LoadBalancedConnectionProxy.getGlobalBlacklist() always
returned an empty map, thus there was never a blacklist
for load-balanced connections. (Bug #28860051, Bug
#93007)

* The redundant file, changelog.gz, has been removed from
the Debian 9 package for Connector/J. The file repeated
the contents of the CHANGES.gz file. (Bug #27786499)

* Using getBytes() to retrieve TEXT data resulted in a
NumberFormatException. With this fix, the proper
exception (SQLDataException) is now thrown. (Bug
#27784363)

* A changeUser() call failed with a java.io.IOException
when the configuration property enablePacketDebug was set
to true for a connection. (Bug #25642021)

* bindings.getBoolean() always returned false. This was due
to a mishandling of data types, which has been corrected
with this fix. (Bug #22931700)

Enjoy and thanks for the support!

On Behalf of MySQL/ORACLE RE Team
Sreedhar S

MySQL Connector/ODBC 8.0.16 has been released

Dear MySQL users,

MySQL Connector/ODBC 8.0.16 is a new version in the MySQL Connector/ODBC 8.0 series,
the ODBC driver for the MySQL Server.

The available downloads include both a Unicode driver and an ANSI driver based on the
same modern codebase. Please select the driver type you need based on the type of your
application – Unicode or ANSI. Server-side prepared statements are enabled by default.
It is suitable for use with the latest MySQL server version 8.0.

This release of the MySQL ODBC driver conforms to the ODBC 3.8 specification.
It contains implementations of key 3.8 features, including self-identification
as an ODBC 3.8 driver, streaming of output parameters (supported for binary types
only), and support of the SQL_ATTR_RESET_CONNECTION connection attribute (for the
Unicode driver only).

The release is now available in source and binary form for a number of platforms
from our download pages at

https://dev.mysql.com/downloads/connector/odbc/

For information on installing, please see the documentation at

https://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation.html

Enjoy and thanks for the support!

==================================================

Changes in MySQL Connector/ODBC 8.0.16 (2019-04-25, General Availability)

Bugs Fixed

* Connector/ODBC 8.0 is now built with OpenSSL 1.0.2R.
Issues fixed in the new OpenSSL version are described at
http://www.openssl.org/news/vulnerabilities.html. (Bug
#29538143)

* An exception was emitted when fetching the contents of
BLOB/TEXT records after executing a statement as a
server-side prepared statement with a bound parameter.
The workaround is to not use parameters, or to specify
NO_SSPS=1 in the connection string; this allows the
driver to fetch the data. (Bug #29282638, Bug #29512548,
Bug #28790708, Bug #93895, Bug #94545, Bug #92078)
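
As an illustration, a connection string with the workaround applied
might look like the following (a sketch; the driver name and server
details depend on your installation):

DRIVER={MySQL ODBC 8.0 Unicode Driver};SERVER=localhost;DATABASE=test;UID=user;PWD=pass;NO_SSPS=1;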

On Behalf of Oracle/MySQL Release Engineering Team,
Hery Ramilison

MySQL 8.0.16 Check Constraints

Before MySQL 8.0.16 you could put constraint checks into your Data Definition Language (DDL) when creating tables, but the server ignored them. There was much gnashing of teeth as users of other databases taunted the MySQL Community with "It is not a real database."
 
But with 8.0.16 this has all changed. You can now have your data constraints checked by the server. Below is an example table with two constraints.

mysql> CREATE TABLE parts
           (id int, cost decimal(5,2) not null check (cost > 0),
            price decimal(5,2) not null check (price > 1.0)
           );
Query OK, 0 rows affected (0.09 sec)

mysql> insert into parts (id,cost,price) values (1,1.10,2.25);
Query OK, 1 row affected (0.03 sec)

mysql> insert into parts (id,cost,price) values (1,1.10,0.25);
ERROR 3819 (HY000): Check constraint 'parts_chk_2' is violated.

mysql> insert into parts (id,cost,price) values (2,-1.1,4.25);
ERROR 3819 (HY000): Check constraint 'parts_chk_1' is violated.

The first insert above passes the constraints since both the cost and price values are greater than the required minimums. But not so with the next two inserts. So the constraints work!

Finding Out What The Error Message Means

Now seeing Check constraint 'parts_chk_1' is violated. tells me the data is out of range. But how do I find out what parts_chk_1 is?

The first thing to check is the table itself.


mysql> show create table parts;
+-------+------------------------------------------------------------------+
| Table | Create Table                                                     |
+-------+------------------------------------------------------------------+
| parts | CREATE TABLE `parts` (
  `id` int(11) DEFAULT NULL,
  `cost` decimal(5,2) NOT NULL,
  `price` decimal(5,2) NOT NULL,
  CONSTRAINT `parts_chk_1` CHECK ((`cost` > 0)),
  CONSTRAINT `parts_chk_2` CHECK ((`price` > 1.0))
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci |
+-------+------------------------------------------------------------------+
1 row in set (0.01 sec)

And you can also see the constraint information in the INFORMATION_SCHEMA.

mysql> select * from INFORMATION_SCHEMA.CHECK_CONSTRAINTS;
+--------------------+-------------------+-----------------+-----------------+
| CONSTRAINT_CATALOG | CONSTRAINT_SCHEMA | CONSTRAINT_NAME | CHECK_CLAUSE    |
+--------------------+-------------------+-----------------+-----------------+
| def                | davetest          | xxx_chk_1       | (`age` > 18)    |
| def                | davetest          | parts_chk_1     | (`cost` > 0)    |
| def                | davetest          | parts_chk_2     | (`price` > 1.0) |
| def                | davetest          | client_chk_1    | (`age` > 17)    |
+--------------------+-------------------+-----------------+-----------------+
4 rows in set (0.00 sec)
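
One way to avoid the lookup entirely is to name the constraints
yourself in the DDL, so the error message carries the meaning
directly. A sketch (the table and constraint names here are just
examples):

mysql> CREATE TABLE parts2
           (id int, cost decimal(5,2) not null,
            price decimal(5,2) not null,
            CONSTRAINT cost_must_be_positive CHECK (cost > 0),
            CONSTRAINT price_must_exceed_one CHECK (price > 1.0)
           );

A violation then reports ERROR 3819 with 'cost_must_be_positive'
instead of an auto-generated name like 'parts_chk_1'.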

MySQL Connector/NET 8.0.16 has been released

Dear MySQL users,

MySQL Connector/NET 8.0.16 is the fourth version to support
Entity Framework Core 2.1 and the sixth general availability release
of MySQL Connector/NET to add support for the new X DevAPI, which
enables application developers to write code that combines the
strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/index.html. For more
information about how the X DevAPI is implemented in Connector/NET, see
http://dev.mysql.com/doc/dev/connector-net.

NuGet packages provide functionality at a project level. To get the
full set of features available in Connector/NET, such as availability
in the GAC, integration with Visual Studio's Entity Framework Designer,
and integration with MySQL for Visual Studio, installation through
the MySQL Installer or the stand-alone MSI is required.

Please note that the X DevAPI requires MySQL Server version 8.0 or
higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

To download MySQL Connector/NET 8.0.16, see
http://dev.mysql.com/downloads/connector/net/

Installation instructions can be found at
https://dev.mysql.com/doc/connector-net/en/connector-net-installation.html

Changes in MySQL Connector/NET 8.0.16 (2019-04-25, General Availability)

* Functionality Added or Changed

* Bugs Fixed

Functionality Added or Changed

* Document Store: Support was added for the -> operator to
be used with JSON document paths in relational
statements. For example:
table.Select().Where("additionalinfo->$.hobbies = 'Reading'");

(Bug #29347028)

* Document Store: The performance for statements that are
executed repeatedly (two or more times) is improved by
using server-side prepared statements for the second and
subsequent executions. This happens internally;
applications need take no action and API behavior should
be the same as previously. For statements that change,
repreparation occurs as needed. Providing different data
values or different OFFSET or LIMIT clause values does
not count as a change. Instead, the new values are passed
to a new invocation of the previously prepared statement.

* Document Store: Connector/NET now supports the ability to
send connection attributes (key-value pairs that
application programs can pass to the server at connect
time). Connector/NET defines a default set of attributes,
which can be disabled or enabled. In addition,
applications can specify attributes to be passed together
with the default attributes. The default behavior is to
send the default attribute set.
The aggregate size of connection attribute data sent by a
client is limited by the value of the
performance_schema_session_connect_attrs_size server
variable. The total size of the data package should be
less than the value of the server variable. For X DevAPI
applications, specify connection attributes as a
connection-attributes parameter in a connection string.
For usage information, see Options for X Protocol Only
(http://dev.mysql.com/doc/connector-net/en/connector-net-8-0-connection-options.html#connector-net-8-0-connection-options-xprotocol).
For general information about connection attributes, see
Performance Schema Connection Attribute Tables
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html).
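
As an illustration, an X DevAPI connection URI that disables the
default attributes, or one that adds custom attributes, might look
like the following (a sketch mirroring the examples used by the
other connectors; the attribute names are hypothetical):

mysqlx://user@localhost:33060?connection-attributes=false
mysqlx://user@localhost:33060?connection-attributes=[appname=inventory,env=staging]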

* Document Store: Connector/NET now has improved support
for resetting sessions in connection pools. Returning a
session to the pool drops session-related objects such as
temporary tables, session variables, and transactions,
but the connection remains open and authenticated so that
reauthentication is not required when the session is
reused.

* Connector/NET applications now can use certificates in
PEM format to validate SSL connections in addition to the
native PFX format (see Tutorial: Using SSL with Connector/NET
(http://dev.mysql.com/doc/connector-net/en/connector-net-tutorials-ssl.html)).
PEM support applies to both classic MySQL protocol
and X Protocol connections.

Bugs Fixed

* Document Store: All methods able to execute a statement
were unable to execute the same statement a second time.
Now, the values and binding parameters remain available
after the method is executed and string parameters are no
longer converted to numbers. Both changes enable a
follow-on execution to reuse the previous parameters.
(Bug #29249857, Bug #29304767)

* An exception was generated when the MySqlDbType
enumeration was given an explicit value and then passed
as a parameter to the MySqlCommand.Prepare method. (Bug
#28834253, Bug #92912)

* Validation was added to ensure that when a column is of
type TIME and the value is 00:00:00, the value is stored
instead of NULL. (Bug #28383726, Bug #91752)

On Behalf of MySQL Release Engineering Team,
Surabhi Bhat

MySQL Connector/Node.js 8.0.16 has been released

Dear MySQL users,

MySQL Connector/Node.js is a new Node.js driver for use with the X
DevAPI. This release, v8.0.16, is a maintenance release of the
MySQL Connector/Node.js 8.0 series.

The X DevAPI enables application developers to write code that combines
the strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing
traditional SQL.

MySQL Connector/Node.js can be downloaded through npm (see
https://www.npmjs.com/package/@mysql/xdevapi for details) or from
https://dev.mysql.com/downloads/connector/nodejs/.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Node.js, and
its usage, see http://dev.mysql.com/doc/dev/connector-nodejs/.

Please note that the X DevAPI requires MySQL Server version 8.0 or
higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

Changes in MySQL Connector/Node.js 8.0.16 (2019-04-25, General Availability)

X DevAPI Notes

* Connector/Node.js now supports connection attributes as
key-value pairs that application programs can pass to the
server. Connector/Node.js defines a default set of
attributes, which can be disabled or enabled. In addition
to these default attributes, applications can also
provide their own set of custom attributes.

+ Specify connection attributes as a
connection-attributes parameter in a connection
string, or through the connectionAttributes property,
using either a plain JavaScript object or JSON
notation to specify the connection configuration
options.
The connection-attributes parameter value must be
either empty (the same as specifying true), a
Boolean value (true or false to enable or disable
the default attribute set), or a list of zero or
more key=value pair specifiers separated by commas
(to be sent in addition to the default attribute
set). Within a list, a missing key value evaluates
as NULL.
The connectionAttributes property allows passing
user-defined attributes to the server using either a
plain JavaScript object or JSON notation to specify
the connection configuration options. Define each
attribute in a nested object under
connectionAttributes, where the property names match
the attribute names and the property values match the
attribute values. Unlike connection-attributes, and
while using plain JavaScript objects or JSON notation,
if the connectionAttributes object contains duplicate
keys then no error is thrown, and the last value
specified for a duplicate object key is chosen as the
effective attribute value.
Examples:

Not sending the default client-defined attributes:

mysqlx.getSession('{ "user": "root", "connectionAttributes": false }')

mysqlx.getSession('mysqlx://root@localhost?connection-attributes=false')

Sending custom attributes in addition to the defaults:

mysqlx.getSession({ user: 'root', connectionAttributes: { foo: 'bar', baz: 'qux', quux: '' } })

mysqlx.getSession('mysqlx://root@localhost?connection-attributes=[foo=bar,baz=qux,quux]')

Application-defined attribute names cannot begin with _
because such names are reserved for internal attributes.
If connection attributes are not specified in a valid
way, an error occurs and the connection attempt fails.
For general information about connection attributes, see
Performance Schema Connection Attribute Tables
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html).

Functionality Added or Changed

* Optimized the reuse of existing connections through
client.getSession() by only re-authenticating if
required.

* For X DevAPI, performance for statements that are
executed repeatedly (two or more times) is improved by
using server-side prepared statements for the second and
subsequent executions. This happens internally;
applications need take no action and API behavior should
be the same as previously. For statements that change,
repreparation occurs as needed. Providing different data
values or different offset() or limit() values does not
count as a change. Instead, the new values are passed to
a new invocation of the previously prepared statement.

Bugs Fixed

* Idle pooled connections to MySQL Server were not reused;
instead, new connections had to be created. (Bug
#29436892)

* Executing client.close() would not close all associated
connections in the connection pool. (Bug #29428477)

* connectTimeout, instead of maxIdleTime, determined
whether idle connections in the connection pool were
reused rather than new connections being created. (Bug
#29427271)

* Released connections from the connection pool were not
being reset and reused; instead new connections were
being made. (Bug #29392088)

* Date values in documents were converted to empty objects
when inserted into a collection. (Bug #29179767, Bug
#93839)

* A queueTimeout value other than 0 (infinite) prevented
the acquisition of old released connections from the
connection pool. (Bug #29179372, Bug #93841)

On Behalf of MySQL/ORACLE RE Team
Gipson Pulla

MySQL Connector/C++ 8.0.16 has been released

Dear MySQL users,

MySQL Connector/C++ 8.0.16 is a new release in the MySQL Connector/C++ 8.0 series.

Connector/C++ 8.0 can be used to access MySQL servers implementing a document store, or in a traditional way using SQL queries. It allows writing both C++ and plain C applications using X DevAPI and X DevAPI for C. It also supports the legacy API of Connector/C++ 1.1 based on JDBC4.

To learn more about how to write applications using X DevAPI, see “X DevAPI User Guide” at

https://dev.mysql.com/doc/x-devapi-userguide/en/

See also “X DevAPI Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/devapi_ref.html

and “X DevAPI for C Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/xapi_ref.html

For generic information on using Connector/C++ 8.0, see

https://dev.mysql.com/doc/dev/connector-cpp/

For general documentation about how to get started using MySQL as a document store, see

http://dev.mysql.com/doc/refman/8.0/en/document-store.html

To download MySQL Connector/C++ 8.0.16, see the “Generally Available (GA) Releases” tab at

https://dev.mysql.com/downloads/connector/cpp/

Changes in MySQL Connector/C++ 8.0.16 (2019-04-25, General Availability)

Character Set Support

* Connector/C++ supports all Unicode character sets for
connections to servers running MySQL 8.0.14 and higher, but
previously had Unicode support limited to the utf8 character
set for servers older than MySQL 8.0.14. Connector/C++ now
supports all Unicode character sets for older servers,
including utf8mb4, utf16, utf16le, utf32, and ucs2.
(Bug #28966038)

Compilation Notes

* Thanks to Daniël van Eeden, who contributed a code change to
use the stdbool.h header file rather than a bool typedef.
(Bug #29167120, Bug #93803)

* Thanks to Daniël van Eeden, who contributed a code change to
use lib instead of lib64 on 64-bit FreeBSD. (Bug #29167098,
Bug #93801)

* Previously, for Connector/C++ applications that used the
legacy JDBC API, source files had to use this set of #include
directives:

#include <jdbc/mysql_driver.h>
#include <jdbc/mysql_connection.h>
#include <jdbc/cppconn/*.h>

Now a single #include directive suffices:

#include <mysql/jdbc.h>

Configuration Notes

* Thanks to Daniël van Eeden, who contributed a code change to
build the documentation as part of the all target if
Connector/C++ is configured with -DWITH_DOC=ON.
(Bug #29167107, Bug #93802)

* Previously, for Connector/C++ 8.0 applications that use the
legacy JDBC connector, only static linking to the MySQL
client library was supported. The MYSQLCLIENT_STATIC_LINKING
and MYSQLCLIENT_STATIC_BINDING CMake options are now
available to permit dynamic linking. By default,
MYSQLCLIENT_STATIC_LINKING is enabled, to use static linking
to the client library. Disable this option to use dynamic
linking. If MYSQLCLIENT_STATIC_LINKING is enabled,
MYSQLCLIENT_STATIC_BINDING may also be used. If
MYSQLCLIENT_STATIC_BINDING is enabled (the default),
Connector/C++ is linked to the shared MySQL client library.
Otherwise, the shared MySQL client library is loaded and
mapped at runtime.

* Connector/C++ 8.0 configuration now requires a minimum CMake
version of 3.0.

Packaging Notes

* Connector/C++ debug packages are now available for Linux and
Windows. The packages enable symbolic debugging using tools
such as gdb on Linux and windbg on Windows, as well as
obtaining symbolic information about connector code locations
from application crash dumps. Use of the debug packages
requires that you have installed and configured the
Connector/C++ sources. (Bug #29117059, Bug #93645,
Bug #26128420, Bug #86415)

* For improved GitHub friendliness, Community Connector/C++
source distributions now include a CONTRIBUTING.md markdown
file. CONTRIBUTING.md contains guidelines intended to be
helpful to contributors.

* The Protobuf sources bundled in the Connector/C++ source
tree were updated to Protobuf 3.6.1. (Only the parts needed
for Connector/C++ are included, to reduce compilation time.)

Prepared Statement Notes

* For X DevAPI and X DevAPI for C, performance for statements
that are executed repeatedly (two or more times) is improved
by using server-side prepared statements for the second and
subsequent executions. This happens internally; applications
need take no action and API behavior should be the same as
previously. For statements that change, repreparation occurs
as needed. Providing different data values or different
OFFSET or LIMIT clause values does not count as a change.
Instead, the new values are passed to a new invocation of the
previously prepared statement.
X DevAPI Notes

* For X DevAPI and X DevAPI for C applications, Connector/C++
now supports the ability to send connection attributes
(key-value pairs that application programs can pass to the
server at connect time). Connector/C++ defines a default set
of attributes, which can be disabled or enabled. In addition,
applications can specify attributes to be passed in addition
to the default attributes. The default behavior is to send
the default attribute set.

+ For X DevAPI applications, specify connection attributes
as a connection-attributes parameter in a connection
string, or by using a SessionOption::CONNECTION_ATTRIBUTES
option for the SessionSettings constructor. The
connection-attributes parameter value must be empty (the
same as specifying true), a Boolean value (true or false to
enable or disable the default attribute set), or a list of
zero or more key=value specifiers separated by commas (to
be sent in addition to the default attribute set). Within a
list, a missing key value evaluates as an empty string.
Examples:

"mysqlx://user@host?connection-attributes"
"mysqlx://user@host?connection-attributes=true"
"mysqlx://user@host?connection-attributes=false"
"mysqlx://user@host?connection-attributes=[attr1=val1,attr2,attr3=]"
"mysqlx://user@host?connection-attributes=[]"

The SessionOption::CONNECTION_ATTRIBUTES option value must
be a Boolean value (true or false to enable or disable the
default attribute set), or a DbDoc or JSON string (to be
sent in addition to the default attribute set). Examples:

Session sess(..., SessionOption::CONNECTION_ATTRIBUTES, false);
Session sess(..., SessionOption::CONNECTION_ATTRIBUTES, attr_doc);
Session sess(..., SessionOption::CONNECTION_ATTRIBUTES,
             R"({ "attr1": "val1", "attr2" : "val2" })");

+ For X DevAPI for C applications, specify connection
attributes using the OPT_CONNECTION_ATTRIBUTES() macro for
the mysqlx_session_option_set() function. The option value
must be null (to disable the default attribute set) or a
JSON string (to be sent in addition to the default
attribute set). Examples:

mysqlx_session_option_set(opts, OPT_CONNECTION_ATTRIBUTES(nullptr));
mysqlx_session_option_set(opts,
    OPT_CONNECTION_ATTRIBUTES("{ \"attr1\": \"val1\", \"attr2\" : \"val2\" }"));

Application-defined attribute names cannot begin with _
because such names are reserved for internal attributes. If
connection attributes are not specified in a valid way, an
error occurs and the connection attempt fails. For general
information about connection attributes, see Performance
Schema Connection Attribute Tables
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html).

X DevAPI for C Notes

* The signatures for several X DevAPI for C functions have been
changed to enable better error information to be returned to
applications by means of a mysqlx_error_t handle.
These functions are affected:

mysqlx_client_t* mysqlx_get_client_from_url(
    const char *conn_string, const char *client_opts,
    mysqlx_error_t **error)
mysqlx_client_t* mysqlx_get_client_from_options(
    mysqlx_session_options_t *opt, mysqlx_error_t **error)
mysqlx_session_t* mysqlx_get_session(
    const char *host, int port, const char *user,
    const char *password, const char *database,
    mysqlx_error_t **error)
mysqlx_session_t* mysqlx_get_session_from_url(
    const char *conn_string, mysqlx_error_t **error)
mysqlx_session_t* mysqlx_get_session_from_options(
    mysqlx_session_options_t *opt, mysqlx_error_t **error)
mysqlx_session_t* mysqlx_get_session_from_client(
    mysqlx_client_t *cli, mysqlx_error_t **error)

The final argument in each case is a mysqlx_error_t handle
into which Connector/C++ stores error information. If the
argument is a null pointer, Connector/C++ ignores it. The
application is responsible for freeing non-null handles by
passing them to mysqlx_free(). The signature for
mysqlx_free() has also been changed to accept a void *
argument so that it can accept a handle of any type.
Consequently, other type-specific free functions, such as
mysqlx_free_options(), are no longer needed and are
deprecated.

The preceding modifications change the Connector/C++ API,
which has these implications:

+ The modifications change the ABI, so the ABI version is
changed from 1 to 2. This changes the connector library
names.

+ X DevAPI for C applications to be compiled against the new
API must be modified to use the new function signatures.
(X DevAPI applications should build without change.)

+ Applications built against the old ABI will not run with
the new connector library.

+ The API change and ABI version change do not affect the
legacy JDBC interface, so library names for the legacy JDBC
connector library do not change and legacy applications
need not be changed.

+ It is possible to install both the old and new libraries.
However, installers may remove the old libraries, so they
may need to be re-added manually after installing the new
libraries.

Functionality Added or Changed

* Thanks to Daniël van Eeden, who contributed documentation
for the mysqlx_column_get_collation() function and various
corrections in the developer documentation. (Bug #29123114,
Bug #93665, Bug #29115285, Bug #93640, Bug #29122490,
Bug #93663)

* Connector/C++ now has improved support for resetting
sessions in connection pools. Returning a session to the
pool drops session-related objects such as temporary tables,
session variables, and transactions, but the connection
remains open and authenticated so that reauthentication is
not required when the session is reused.

Bugs Fixed

* Previously, for the SSL_MODE_VERIFY_IDENTITY connection
option, Connector/C++ checked whether the host name that it
used for connecting matched the Common Name value in the
certificate but not the Subject Alternative Name value. Now,
if used with OpenSSL 1.0.2 or higher, Connector/C++ checks
whether the host name matches either the Subject Alternative
Name value or the Common Name value in the server
certificate. (Bug #28964313, Bug #93301)

* After repeated calls, mysqlx_get_session_from_client() could
hang. (Bug #28587287)

* The SessionSettings/ClientSettings iterator implementation
was incomplete. (Bug #28502574)

Enjoy and thanks for the support!

On Behalf of Oracle/MySQL Release Engineering Team,
Kent Boortz

Watch Webinar: Billion Goods in Few Categories – How Histograms Save a Life?


Please join Percona’s Principal Support Engineer Sveta Smirnova as she presents Billion Goods in Few Categories: How Histograms Save a Life?

Watch the Recorded Webinar

We store data with the intention to use it: search, retrieve, group, sort, etc. To perform these actions effectively, MySQL storage engines index data and communicate statistics with the Optimizer when it compiles a query execution plan. This approach works perfectly well unless your data distribution is uneven.

Last year I worked on several tickets where data followed the same pattern: millions of popular products fit into a couple of categories and the rest used all the others. We had a hard time finding a solution for retrieving goods fast. Workarounds for version 5.7 were offered. However, we learned a new MySQL 8.0 feature – histograms – would work better, cleaner, and faster. Thus, the idea of our talk was born.

In this webinar, we will discuss:
– How index statistics are physically stored
– Which data is exchanged with the Optimizer
– Why it is not enough to make a correct index choice

In the end, I will explain which issues are resolved by histograms, and we will discuss why index statistics alone are insufficient for the fast retrieval of unevenly distributed data.
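
As a taste of the feature discussed in the webinar, creating a
histogram in MySQL 8.0 is a single statement, after which the
optimizer uses the stored value distribution instead of assuming
uniformity. A sketch with a hypothetical products table:

mysql> ANALYZE TABLE products UPDATE HISTOGRAM ON category WITH 64 BUCKETS;
mysql> SELECT SCHEMA_NAME, TABLE_NAME, COLUMN_NAME
       FROM information_schema.COLUMN_STATISTICS
       WHERE TABLE_NAME = 'products';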

How to Install Phorum with Nginx on Ubuntu 18.04 LTS

Phorum is PHP- and MySQL-based open source forum software. In this guide, we will walk you step-by-step through the Phorum installation process on the Ubuntu 18.04 LTS operating system, using Nginx as the web server, MySQL as the database, and acme.sh and Let's Encrypt for HTTPS.

Announcing MySQL Server 8.0.16, 5.7.26 and 5.6.44

MySQL Server 8.0.16, 5.7.26 and 5.6.44, new versions of the popular Open Source Database Management System, have been released. These releases are recommended for use on production systems. For an overview of what’s new, please see

http://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

For information on installing the release on new servers, please see the MySQL installation documentation at […]

The MySQL 8.0.16 Maintenance Release is Generally Available

The MySQL Development team is very happy to announce that MySQL 8.0.16 is now available for download at dev.mysql.com. In addition to bug fixes, there are a few new features added in this release. Please download 8.0.16 from dev.mysql.com or from the MySQL Yum, APT, or SUSE repositories.


MySQL Workbench 8.0.16 has been released

Dear MySQL users,

The MySQL developer tools team announces 8.0.16 as the general availability (GA) release for MySQL Workbench 8.0.

For the full list of changes in this revision, visit
http://dev.mysql.com/doc/relnotes/workbench/en/news-8-0.html

For discussion, join the MySQL Workbench Forums:
http://forums.mysql.com/index.php?152

The release is now available in source and binary form for a number of platforms from our download pages at:

http://dev.mysql.com/downloads/tools/workbench/

Enjoy!

MySQL Connector/Python 8.0.16 has been released

Dear MySQL users,

MySQL Connector/Python 8.0.16 is the latest GA release version of the
MySQL Connector/Python 8.0 series. The X DevAPI enables application
developers to write code that combines the strengths of the relational
and document models using a modern, NoSQL-like syntax that does not
assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Python, and its
usage, see http://dev.mysql.com/doc/dev/connector-python.

Please note that the X DevAPI requires MySQL Server version 8.0 or
higher with the X Plugin enabled. For general documentation about how
to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

To download MySQL Connector/Python 8.0.16, see the “Generally Available
(GA) Releases” tab at http://dev.mysql.com/downloads/connector/python/

Enjoy!

Changes in MySQL Connector/Python 8.0.16 (2019-04-25, General Availability)

* Functionality Added or Changed

* Bugs Fixed

Functionality Added or Changed

* Two informative text files were added: INFO_BIN contains
information about the build environment used to produce the
distribution, and INFO_SRC provides information about the
product version and the source repository from which the
distribution was produced. (Bug #29454706)

* Django 1.11 is now the minimum supported Django version.

* For X DevAPI applications, Connector/Python now supports the
ability to send connection attributes (key-value pairs that
application programs can pass to the server at connect time).
Connector/Python defines a default set of attributes, which
can be disabled or enabled. In addition, applications can
specify attributes to be passed in addition to the default
attributes. The default behavior is to send the default
attribute set.
Specify connection attributes as a connection-attributes
parameter in a connection string, or by setting a dictionary
under the connection-attributes key inside the connection
settings parameter. Both the mysqlx.get_session() and
mysqlx.get_client() methods can receive this information.
The connection-attributes parameter value must be empty (the
same as specifying true), a Boolean value (true or false to
enable or disable the default attribute set), or a list of
zero or more key=value specifiers separated by commas (to be
sent in addition to the default attribute set). Within a
list, a missing key value evaluates as an empty string. An
example connection string:

mysqlx://user:password@host:33060/schema?connection-attributes=[foo=bar,baz=qux,quux]

Application-defined attribute names cannot begin with _
because such names are reserved for internal attributes. If
connection attributes are not specified in a valid way, an
error occurs and the connection attempt fails. For general
information about connection attributes, see Performance
Schema Connection Attribute Tables
(http://dev.mysql.com/doc/refman/8.0/en/performance-schema-connection-attribute-tables.html).

* Connector/Python now has improved support for resetting
sessions in connection pools. Returning a session to the pool
drops session-related objects such as temporary tables,
session variables, and transactions, but the connection
remains open and authenticated so that reauthentication is
not required when the session is reused.

* Protobuf was updated to Protobuf 3.6.1.

* For X DevAPI, performance for statements that are executed
repeatedly (two or more times) is improved by using
server-side prepared statements for the second and subsequent
executions. This happens internally; applications need take
no action and API behavior should be the same as previously.
For statements that change, repreparation occurs as needed.
Providing different data values or different offset() or
limit() values does not count as a change. Instead, the new
values are passed to a new invocation of the previously
prepared statement.

Bugs Fixed

* Added a "username" alias for the "user" connection argument.
Thanks to Matthew Woods for the patch. (Bug #29324966, Bug #94248)

* Solaris 11 package files had the expected owner/group set as
pb2user/common instead of root/bin. (Bug #29278489)

* CRUD operations would not allow referencing a renamed column
(AS SomeLabel) from the fetched result. (Bug #29001628)

* Fixed a memory corruption issue that caused an unexpected
halt when fetching fields. (Bug #28479054)

* Querying an empty LONG BLOB raised an IndexError exception.
(Bug #27897881, Bug #90519)

Enjoy and thanks for the support!

On Behalf of Oracle/MySQL Release Engineering Team,
Balasubramanian Kandasamy

Creating Custom Sysbench Scripts

Sysbench has long been established as the de facto standard when it comes to benchmarking MySQL performance. Percona relies on it daily, and even Oracle uses it when blogging about new features in MySQL 8. Sysbench comes with several pre-defined benchmarking tests. These tests are written in an easy-to-understand scripting language called Lua. Some of these tests include oltp_read_write, oltp_point_select, tpcc, and oltp_insert. There are over ten such scripts to emulate various behaviors found in standard OLTP applications.

But what if your application does not fit the pattern of traditional OLTP? How can you continue to utilize the power of load-testing, benchmarking, and results analysis with sysbench? Just write your own Lua script!

For those that want to jump ahead and see the full source, here you go.

Sysbench API

To start off, each Lua script you create must implement three core sysbench-Lua API functions. These are thread_init, thread_done, and event. You can read the comments in the code below for the meaning of each function and what is happening inside.

-- Called by sysbench one time to initialize this script
function thread_init()
   -- Create globals to be used elsewhere in the script
   -- drv - initialize the sysbench mysql driver
   drv = sysbench.sql.driver()
   -- con - represents the connection to MySQL
   con = drv:connect()
end

-- Called by sysbench when script is done executing
function thread_done()
   -- Disconnect/close connection to MySQL
   con:disconnect()
end

-- Called by sysbench for each execution
function event()
   -- If user requested to disable transactions,
   -- do not execute BEGIN statement
   if not sysbench.opt.skip_trx then
      con:query("BEGIN")
   end

   -- Run our custom statements
   execute_selects()
   execute_inserts()

   -- Like above, if transactions are disabled,
   -- do not execute COMMIT
   if not sysbench.opt.skip_trx then
      con:query("COMMIT")
   end
end

That’s all pretty simple and should function as a good template in your scripts. Now let’s take a look at the rest of the script.

Sanity checks and options

Now let’s get into the core code. At the top you’ll find the following sections:

if sysbench.cmdline.command == nil then
   error("Command is required. Supported commands: run")
end

sysbench.cmdline.options = {
   point_selects = {"Number of point SELECT queries to run", 5},
   skip_trx = {"Do not use BEGIN/COMMIT; Use global auto_commit value", false}
}

The first section is a sanity check to make sure the user actually wants to run this test. Other test scripts, mentioned above, support commands like prepare, run, and cleanup. Our script only supports run as the data we are using is pre-populated by our core application.

The second section allows us, the script writer, to let the user pass some options specific to our test script. In the code above, we can see an option for the number of SELECT statements that will be run on each thread/iteration (the default is 5), and another option that allows the user to disable BEGIN/COMMIT if they so desire (the default is false). If you want more customization in your script, simply add more options. You’ll see how to reference these parameters later on.

The queries

Now it is time to define the custom queries we want to execute in our script.

-- Array of categories to be used in the INSERTs
local page_types = { "actor", "character", "movie" }

-- Array of COUNT(*) queries
local select_counts = {
   "SELECT COUNT(*) FROM imdb.title"
}

-- Array of SELECT statements that have 1 integer parameter
local select_points = {
   "SELECT * FROM imdb.title WHERE id = %d",
   "SELECT * FROM imdb.comments ORDER BY id DESC limit 10",
   "SELECT AVG(rating) avg FROM imdb.movie_ratings WHERE movie_id = %d",
   "SELECT * FROM imdb.users ORDER BY RAND() LIMIT 1"
}

-- Array of SELECT statements that have 1 string parameter
local select_string = {
   "SELECT * FROM imdb.title WHERE title LIKE '%s%%'"
}

-- INSERT statements
local inserts = {
   "INSERT INTO imdb.users (email_address, first_name, last_name) VALUES ('%s', '%s', '%s')",
   "INSERT INTO imdb.page_views (type, viewed_id, user_id) VALUES ('%s', %d, %d)"
}

The above code defines several arrays/lists of different queries. Why is this necessary? Later on in the code, we will have to parse each SQL statement and populate/replace the various parameters with randomly generated values. It would not do us any good to repeat the same SELECT * FROM fooTable WHERE id = 44 every time, now would it? Certainly not. We want to generate random numbers and have our queries select from the entire dataset.

Some queries have no parameters, some have integer-based, and some string-based. We will handle these differently below, which is why they are in different arrays above. This method also allows for future expansion. When you want to run additional queries within the script, just add another line to each array; no need to change any other code.

Parse and execute

The function below, execute_selects, will be called from the parent function, event, which we discussed earlier in the post. You can see for-loops for each of the three SELECT categories we created above. The comments inline should help explain what is happening. Note the use of the user-provided option --point-selects in the second loop below, which we created previously in the 'Sanity checks and options' section.

function execute_selects()
   -- Execute each simple, no parameters, SELECT
   for i, o in ipairs(select_counts) do
      con:query(o)
   end

   -- Loop for however many queries the
   -- user wants to execute in this category
   for i = 1, sysbench.opt.point_selects do
      -- select random query from list
      local randQuery = select_points[math.random(#select_points)]
      -- generate random id and execute
      local id = sysbench.rand.pareto(1, 3000000)
      con:query(string.format(randQuery, id))
   end

   -- generate random string and execute
   for i, o in ipairs(select_string) do
      local str = sysbench.rand.string(string.rep("@", sysbench.rand.special(2, 15)))
      con:query(string.format(o, str))
   end
end

Two more things to mention for this code. First, you will notice the use of sysbench.rand.pareto to generate a random number between 1 and 3,000,000. For our dataset, we know that each table referenced in the queries with WHERE id = ? has at least that many rows. This is specific to our data; your values will certainly be different. Second, notice the use of sysbench.rand.string and string.rep. The string.rep segment will generate a string composed of ‘@’ symbols, between 2 and 15 characters long. That string of ‘@’ symbols will then be passed to sysbench.rand.string, which will swap out each ‘@’ for a random alphanumeric value. For example, ‘@@@@@@’ could become ‘Hk9EdC’, which will then replace the ‘%s’ inside the query string (string.format) and be executed.

Handle inserts

Our INSERT statements require values. Again, sysbench calls the function execute_inserts from event on each iteration. Inside execute_inserts, we generate some fake string data using built-in functions described above.

Those strings are then formatted into the SQL and executed.

function create_random_email()
   local username = sysbench.rand.string(string.rep("@", sysbench.rand.uniform(5, 10)))
   local domain = sysbench.rand.string(string.rep("@", sysbench.rand.uniform(5, 10)))
   return username .. "@" .. domain .. ".com"
end

function execute_inserts()
   -- generate fake email/info
   local email = create_random_email()
   local firstname = sysbench.rand.string("first-" .. string.rep("@", sysbench.rand.special(2, 15)))
   local lastname = sysbench.rand.string("last-" .. string.rep("@", sysbench.rand.special(2, 15)))

   -- INSERT for new imdb.user
   con:query(string.format(inserts[1], email, firstname, lastname))

   -- INSERT for imdb.page_view
   local page = page_types[math.random(#page_types)]
   con:query(string.format(inserts[2], page,
      sysbench.rand.special(2, 500000),
      sysbench.rand.special(2, 500000)))
end

Example run

$ sysbench imdb_workload.lua \
    --mysql-user=imdb --mysql-password=imdb \
    --mysql-db=imdb --report-interval=1 \
    --events=0 --time=0 run

WARNING: Both event and time limits are disabled, running an endless test
sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 1
Report intermediate results every 1 second(s)
Initializing random number generator from current time

Initializing worker threads...

Threads started!

[ 1s ] thds: 1 tps: 15.96 qps: 177.54 (r/w/o: 112.71/31.92/32.91) lat (ms,95%): 158.63 err/s: 0.00 reconn/s: 0.00
[ 2s ] thds: 1 tps: 15.01 qps: 169.09 (r/w/o: 109.06/30.02/30.02) lat (ms,95%): 137.35 err/s: 0.00 reconn/s: 0.00
[ 3s ] thds: 1 tps: 26.00 qps: 285.00 (r/w/o: 181.00/52.00/52.00) lat (ms,95%): 108.68 err/s: 0.00 reconn/s: 0.00
[ 4s ] thds: 1 tps: 15.00 qps: 170.00 (r/w/o: 108.00/32.00/30.00) lat (ms,95%): 164.45 err/s: 0.00 reconn/s: 0.00

And there we have it! Custom queries specific to our application and dataset. Most of the sysbench parameters are self-explanatory, but let me talk about --report-interval=1, which shows statistics every 1 second. Normally sysbench does not output stats until the end of the run; however, the example execution will run forever (--events=0 --time=0), so we need stats to show all the time. You can adjust the parameters to your liking. For instance, if you only want to run a test for 5 minutes, set --events=0 --time=300.

Conclusion

Sysbench is a very well designed application that allows you to load-test your MySQL instances using pre-defined and custom queries. Using the Lua scripting language, you can create just about any scenario to fit your needs. The above is just one example that we use within Percona’s Training and Education department. It is by no means an exhaustive example of all of the capabilities of sysbench-Lua.



How to Perform a Failback Operation for MySQL Replication Setup

MySQL master-slave replication is pretty easy and straightforward to set up. This is the main reason why people choose this technology as the first step to achieve better database availability. However, it comes at the price of complexity in management and maintenance; it is up to the admin to maintain the data integrity, especially during failover, failback, maintenance, upgrade and so on.

There are many articles out there describing how to perform a failover operation for a replication setup. We have also covered this topic in this blog post, Introduction to Failover for MySQL Replication - the 101 Blog. In this blog post, we are going to cover the post-disaster tasks of restoring to the original topology - performing a failback operation.

Why Do We Need Failback?

The replication leader (master) is the most critical node in a replication setup. It requires good hardware specs to ensure it can process writes, generate replication events, process critical reads and so on in a stable way. When failover is required during disaster recovery or maintenance, it is not uncommon to find ourselves promoting a new leader with inferior hardware. This situation might be okay temporarily, but in the long run, the designated master must be brought back to lead the replication after it is deemed healthy.

Contrary to failover, a failback operation usually happens in a controlled environment through switchover; it rarely happens in panic mode. This gives the operation team some time to plan carefully and rehearse the exercise for a smooth transition. The main objective is simply to bring back the good old master to the latest state and restore the replication setup to its original topology. However, there are some cases where failback is critical, for example when the newly promoted master did not work as expected and is affecting the overall database service.

How to Perform Failback Safely?

After failover happened, the old master would be out of the replication chain for maintenance or recovery. To perform the switchover, one must do the following:

  1. Provision the old master to the correct state, by making it the most up-to-date slave.
  2. Stop the application.
  3. Verify all slaves are caught up.
  4. Promote the old master as the new leader.
  5. Repoint all slaves to the new master.
  6. Start up the application by writing to the new master.

Consider the following replication setup:

"A" was a master until a disk-full event causing havoc to the replication chain. After a failover event, our replication topology was lead by B and replicates onto C till E. The failback exercise will bring back A as the leader and restore the original topology before the disaster. Take note that all nodes are running on MySQL 8.0.15 with GTID enabled. Different major version might use different commands and steps.

This is what our architecture looks like now, after failover (taken from ClusterControl's Topology view):

Node Provisioning

Before A can be a master, it must be brought up-to-date with the current database state. The best way to do this is to turn A into a slave of the active master, B. Since all nodes are configured with log_slave_updates=ON (meaning a slave also produces binary logs), we could actually pick another slave, like C or D, as the source of truth for the initial syncing. However, the closer to the active master, the better. Keep in mind the additional load the backup might cause on the source. This part takes up most of the failback hours; depending on the node state and dataset size, syncing up the old master could take some time (hours, or even days).

Once the problem on "A" is resolved and it is ready to rejoin the replication chain, the best first step is to attempt replicating from "B" (192.168.0.42) with a CHANGE MASTER statement:

mysql> SET GLOBAL read_only = 1; /* enable read-only */
mysql> CHANGE MASTER TO MASTER_HOST = '192.168.0.42', MASTER_USER = 'rpl_user', MASTER_PASSWORD = 'p4ss', MASTER_AUTO_POSITION = 1; /* master information to connect */
mysql> START SLAVE; /* start replication */
mysql> SHOW SLAVE STATUS\G /* check replication status */

If replication works, you should see the following in the replication status:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

If the replication fails, look at the Last_IO_Error or Last_SQL_Error from slave status output. For example, if you see the following error:

Last_IO_Error: error connecting to master 'rpl_user@192.168.0.42:3306' - retry-time: 60 retries: 2

Then, we have to create the replication user on the current active master, B:

mysql> CREATE USER 'rpl_user'@'192.168.0.41' IDENTIFIED BY 'p4ss';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'192.168.0.41';

Then, restart the slave on A to start replicating again:

mysql> STOP SLAVE;
mysql> START SLAVE;

Another common error you might see is this line:

Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: ...

That probably means the slave is having a problem reading the binary log file from the current master. On some occasions, the slave might be so far behind that the binary log events required to start replication are missing from the current master, or the binary logs on the master were purged during the failover, and so on. In this case, the best way is to perform a full sync by taking a full backup on B and restoring it on A. On B, you can use either mysqldump or Percona Xtrabackup to take a full backup:

$ mysqldump -uroot -p --all-databases --single-transaction --triggers --routines > dump.sql # for mysqldump
$ xtrabackup --defaults-file=/etc/my.cnf --backup --parallel 1 --stream=xbstream --no-timestamp | gzip -6 - > backup-full-2019-04-16_071649.xbstream.gz # for xtrabackup

Transfer the backup file to A, reinitialize the existing MySQL installation for a proper cleanup and perform database restoration:

$ systemctl stop mysqld # if mysql is still running
$ rm -Rf /var/lib/mysql # wipe out old data
$ mysqld --initialize --user=mysql # initialize database
$ systemctl start mysqld # start mysql
$ grep -i 'temporary password' /var/log/mysql/mysqld.log # retrieve the temporary root password
$ mysql -uroot -p -e 'ALTER USER root@localhost IDENTIFIED BY "p455word"' # mandatory root password update
$ mysql -uroot -p < dump.sql # restore the backup using the new root password

Once restored, set up the replication link to the active master B (192.168.0.42) and enable read-only mode. On A, run the following statements:

mysql> SET GLOBAL read_only = 1; /* enable read-only */
mysql> CHANGE MASTER TO MASTER_HOST = '192.168.0.42', MASTER_USER = 'rpl_user', MASTER_PASSWORD = 'p4ss', MASTER_AUTO_POSITION = 1; /* master information to connect */
mysql> START SLAVE; /* start replication */
mysql> SHOW SLAVE STATUS\G /* check replication status */

For Percona Xtrabackup, please refer to the documentation page on how to restore to A. It involves a prerequisite step to prepare the backup first before replacing the MySQL data directory.

Once A has started replicating correctly, monitor the Seconds_Behind_Master value in the slave status. This gives you an idea of how far behind the slave is and how long you need to wait before it catches up. At this point, our architecture looks like this:

Once Seconds_Behind_Master falls back to 0, A has caught up and is an up-to-date slave.
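To keep an eye on the lag without retyping the command, a simple watch loop on A works. This is a minimal sketch, assuming the root password we set earlier; adjust the credentials to your environment:

$ watch -n 10 "mysql -uroot -p'p455word' -e 'SHOW SLAVE STATUS\G' | grep -E 'Seconds_Behind_Master|Slave_IO_Running|Slave_SQL_Running'"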

If you are using ClusterControl, you have the option to resync the node by restoring from an existing backup or create and stream the backup directly from the active master node:

Staging the slave from an existing backup is the recommended way to build the slave, since it doesn't impact the active master server while preparing the node.

Promote the Old Master

Before promoting A as the new master, the safest way is to stop all write operations on B. If this is not possible, simply force B to operate in read-only mode:

mysql> SET GLOBAL read_only = 'ON';
mysql> SET GLOBAL super_read_only = 'ON';

Then, on A, run SHOW SLAVE STATUS and check the following replication status:

Read_Master_Log_Pos: 45889974
Exec_Master_Log_Pos: 45889974
Seconds_Behind_Master: 0
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates

The values of Read_Master_Log_Pos and Exec_Master_Log_Pos must be identical, Seconds_Behind_Master must be 0, and the state must be 'Slave has read all relay log'. Make sure that all slaves have processed any statements remaining in their relay logs; otherwise you risk that new queries will conflict with transactions from the relay log, triggering all sorts of problems (for example, an application may remove rows that are still being accessed by transactions from the relay log).
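To check every slave in one pass before cutting over, a small shell loop helps. This is a sketch; the slave addresses are illustrative, and the user you connect with needs the REPLICATION CLIENT privilege:

$ for host in 192.168.0.41 192.168.0.43 192.168.0.44; do
    echo "== $host"
    mysql -h "$host" -uroot -p -e 'SHOW SLAVE STATUS\G' | grep -E 'Read_Master_Log_Pos|Exec_Master_Log_Pos|Seconds_Behind_Master'
  done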

On A, stop the replication and use the RESET SLAVE ALL statement to remove all replication-related configuration, then disable read-only mode:

mysql> STOP SLAVE;
mysql> RESET SLAVE ALL;
mysql> SET GLOBAL read_only = 'OFF';
mysql> SET GLOBAL super_read_only = 'OFF';

At this point, A is ready to accept writes (read_only=OFF), however slaves are not connected to it, as illustrated below:

For ClusterControl users, promoting A can be done with the "Promote Slave" feature under Node Actions. ClusterControl will automatically demote the active master B, promote slave A as master, and repoint C and D to replicate from A. B will be put aside, and the user has to explicitly choose "Change Replication Master" at a later stage to rejoin B as a slave replicating from A.

Slave Repointing

It's now safe to change the master on related slaves to replicate from A (192.168.0.41). On all slaves except E, configure the following:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST = '192.168.0.41', MASTER_USER = 'rpl_user', MASTER_PASSWORD = 'p4ss', MASTER_AUTO_POSITION = 1;
mysql> START SLAVE;
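On each repointed slave, a quick look at the slave status should confirm that A is now the master and that both replication threads are running:

mysql> SHOW SLAVE STATUS\G
...
Master_Host: 192.168.0.41
Slave_IO_Running: Yes
Slave_SQL_Running: Yes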

If you are a ClusterControl user, you may skip this step, as repointing is performed automatically when you promote A as described previously.

We can then start our application to write on A. At this point, our architecture is looking something like this:

From the ClusterControl topology view, we have restored our replication cluster to its original architecture, which looks like this:

Take note that the failback exercise is much less risky compared to failover. Still, it's important to schedule this exercise during off-peak hours to minimize the impact on your business.

Final Thoughts

Failover and failback operations must be performed carefully. The operation is fairly simple if you have a small number of nodes, but for many nodes with a complex replication chain, it can be a risky and error-prone exercise. We also showed how ClusterControl can be used to simplify complex operations by performing them through the UI, plus its topology view is visualized in real time, so you can see the replication topology you want to build.

Tags:  MySQL MariaDB replication failback failover

Use Artificial Intelligence to Suggest 1-5 Star Ratings

Many products rely on reviews and star ratings for product feedback. When customers are impressed or dissatisfied with our product, they come back to where it was purchased looking for a way to leave feedback. Apps and e-commerce products are examples of scenarios where we would love to hear what the customer thinks.

Here's what we'll build. Notice how the stars are generated based on our text!

TLDR: How does it work!?

We'll create a serverless function that accepts text. We'll analyze the text using Cognitive Services by Azure. We'll display the stars using React.

Review Flow (1-5 Stars)

The common flow for requesting feedback on our products might start with asking for a rating (1-5 stars, for example), followed by a request for more personal observations to give context to the rating. This additional context gives future customers more information about whether they might also like the product being reviewed, based on common context.

The other strategy is to present the customer with both the form to write their review and the star rating control at the same time.

What if we take the user experience up a level and suggest stars for them based on the sentiment of their review text?

It would look like this: the customer types, and we automatically rate the product in real time.

  • For each keystroke or set of keystrokes, we will analyze what the customer is typing, generate a score based on the sentiment of the customer’s input, then star the product.
  • The customer can also click on the stars to rate directly, either before writing the feedback or if they don't agree with the suggestion.

In this article, I will walk you through all the steps to make this possible. You can use it as an idea for a related project, or just read on to get more excited about Artificial Intelligence.

The project is divided into 2 sections:

  1. A serverless function that handles the requests and analysis
  2. A React app for the UI
Serverless function for our server

One of the reasons serverless functions are shining these days is that it takes only 5 minutes to set up a scalable backend for products at any complexity level. I use Azure Functions a lot because it integrates nicely with VS Code, so I end up not having to go to a dashboard in the browser, or even the terminal, to test and deploy servers.

If you have not set up a function before or need a refresher, take a look at my detailed yet quick 5 minute post on how to set up a serverless function using VS Code.

Create a new function called autostar in your function project/folder using HTTPTrigger. You should get the default function’s template in autostar/index.js.
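If you prefer the terminal over the VS Code flow, the same scaffold can be created with Azure Functions Core Tools; this is a sketch, and the project name is arbitrary:

$ func init auto-star-api --worker-runtime node
$ cd auto-star-api
$ func new --name autostar --template "HTTP trigger"
$ func start # serves the function locally at http://localhost:7071/api/autostar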

Getting started with Cognitive Services (CogS)

Cognitive Services is a subset of Artificial Intelligence that gives you trained models. These models help you with things like text analysis, sentiment analysis, facial recognition, speech analysis, and a lot more.

What this means is that you will NOT need to manually use data to train a system before it can be smart enough to answer some questions like:

  1. Which celebrity is in this picture?
  2. How sad is the text I sent?
  3. What are the license numbers of the cars that visited my hotel today?
  4. How happy is my grandmother this morning?
  5. Is this customer old enough to order a beer?
  6. Who said that?

With CogS, you just send an audio, video, image, or text file to an endpoint, then get back some analysis and answers, enough to cover the questions we listed above.

Get a CogS endpoint and key

You need an API endpoint to send your requests to and a key to authenticate that it’s really you.

  1. Go to Try Cognitive Service.
  2. Choose the Language APIs tab
  3. Click Get API Key.
  4. Pick one of the signup options that works for you. You can pick something very temporary, just for learning and fun's sake.
  5. Copy your Endpoint and Keys. You use only one of the keys for a given request.
Requesting text (sentiment) analysis

Sentiment analysis is basically analyzing what someone said and weighing how positive or negative the content is. Is it sad? Is it happy? Is it hateful? This is what we need in order to determine how many stars a piece of feedback is worth.

This is what the lifecycle of our App will look like when trying to analyze feedback:

The React app will send the feedback content to the serverless endpoint. The serverless endpoint validates the request and sends a payload to the CogS endpoint for analysis. After analyzing, the CogS endpoint will send the sentiment payload back to the serverless function and the serverless function forwards this response to the React app for the stars.

Delete everything in autostar/index.js and add the following:

const fetch = require('node-fetch');

module.exports = async function(context, req) {
  // Cognitive Services credentials and endpoint
  const accessKey = '<KEY>';
  const baseUri = 'https://<center>.api.cognitive.microsoft.com';
  const sentimentPath = '/text/analytics/v2.0/sentiment';

  // Accept the feedback from either the query string or the request body
  const feedback = req.query.feedback || (req.body && req.body.feedback);

  if (feedback) {
    const documents = { documents: [{ id: 1, language: 'en', text: feedback }] };
    const params = {
      method: 'post',
      body: JSON.stringify(documents),
      headers: {
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': accessKey
      }
    };
    // Forward the feedback to CogS and relay the sentiment payload back
    const res = await fetch(baseUri + sentimentPath, params);
    const json = await res.json();
    context.res = { body: json };
  } else {
    context.res = {
      status: 400,
      body: {
        error: 'Please pass a feedback text on the query string or in the request body'
      }
    };
  }
};
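Note that the function requires the node-fetch package. If it isn't already in your function project (an assumption about your setup), install it from the project folder:

$ npm install --save node-fetch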

When a request hits this function, it validates that a feedback text can be found in the query string or the body. If so, it assembles the feedback into a documents array and sends the request to CogS. Notice how we pass the key using the Ocp-Apim-Subscription-Key header.
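For reference, a successful response from the v2.0 sentiment endpoint has roughly this shape; the score value here is illustrative:

{
  "documents": [
    { "id": "1", "score": 0.92 }
  ],
  "errors": []
}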

Rating and submitting feedback with React

We have the endpoint for our UI to push feedback to; now let's start sending those requests. Create a new React app in CodeSandbox or with the CLI tool:

npx create-react-app auto-star-ui

We need an external library for the rating control, plus Lodash's throttle. The throttle function will help us avoid spamming our server with too many requests during the customer's keystrokes:

npm install --save react-rater lodash.throttle

Create a FeedbackForm.js file in the src folder. This will be the component that holds and handles the feedback form. Import the following:

import React from 'react';
import Rater from 'react-rater';
import throttle from 'lodash.throttle';
import 'react-rater/lib/react-rater.css';
import './App.css';

I am also importing the App.css file because we will update it with styles that are related to this component.

Create the actual functional component skeleton:

export const FeedbackForm = ({
  handleFormSubmit,
  handleFeedbackChange,
  setRating,
  curRating = 0
}) => {
  // ...
};

The component will receive

  1. handleFormSubmit to be called when the form is submitted.
  2. handleFeedbackChange, which is where the real work happens. It's called on keystrokes (as the customer types) and sends the request to our serverless function.
  3. setRating is used to update the stars.
  4. curRating is the current rating at any given time.

We need an internal state to hold the feedback text that the user is typing:

export const FeedbackForm = ({
  //...
}) => {
  const [feedback, setFeedback] = React.useState('');
};

If we send a request at every keystroke, we would flood our endpoint with meaningless or wasteful content. A quick strategy to fix this is to throttle the requests. What this means is that we only send a request every x period of time; in our case, every 1.5 seconds. Lodash’s throttle will help us with this.

const [feedback, setFeedback] = React.useState('');

const throttledChangeHandler = React.useRef(
  throttle(feedback => handleFeedbackChange(feedback), 1500)
);

React.useEffect(() => {
  if (!!feedback) {
    return throttledChangeHandler.current(feedback);
  }
}, [feedback]);

We are wrapping the throttle in a useRef because on every render we would otherwise get a new throttle function, which resets everything, so the function would not remember the user's last inputs. I explained this in more detail in an article I wrote previously.

Now we can render the elements, pass in some states and props to them, and handle events:

export const FeedbackForm = ({
  // ...
}) => {
  //...
  return (
    <form>
      <div className="label-stars">
        <label htmlFor="feedback">What do you think?</label>
        <Rater
          total={5}
          rating={curRating}
          onRate={({ rating }) => {
            setRating(rating);
          }}
        />
      </div>
      <textarea
        value={feedback}
        onKeyPress={e => {
          if (e.which === 13 && !e.shiftKey) {
            e.preventDefault();
            handleFormSubmit(feedback, curRating);
            setFeedback('');
          }
        }}
        onChange={e => {
          setFeedback(e.target.value);
        }}
        placeholder="Love or hate? No hard feelings!"
      />
    </form>
  );
};

When we type in the textarea, both onChange() and onKeyPress() are called:

  1. onChange calls setFeedback which updates the state. When the state is updated, the component re-renders and calls useEffect which we saw previously. When useEffect is called, it runs throttledChangeHandler and then handleFeedbackChange is called.
  2. onKeyPress just checks if we hit Enter without holding Shift. If we held Shift, it would move to a new line. If we didn't, it would submit the feedback.

I am also updating the rater through the onRate prop and setting the current rating with the rating prop.

Sending requests from React

We are just passing down handleFeedbackChange, handleFormSubmit, and setRating but we don’t know what they look like. We are now going to render FeedbackForm in App and see how those props get passed down and what those props look like.

Delete the content of ./src/App.js and update with the following:

import React, { Component } from 'react';
import { FeedbackForm } from './FeedbackForm';
import './App.css';

class App extends Component {
  state = {
    curRating: 0
  };

  // ...

  render() {
    return (
      <div className="App">
        <FeedbackForm
          curRating={this.state.curRating}
          handleFeedbackChange={this.handleFeedbackChange}
          handleFormSubmit={this.handleFormSubmit}
          setRating={this.setRating}
        />
      </div>
    );
  }
}

export default App;

Next, add the setRating and handleFeedbackChange instance methods to the class:

setRating = rating => {
  this.setState({ curRating: rating });
};

handleFeedbackChange = feedback => {
  fetch('http://localhost:7071/api/autoStar', {
    method: 'post',
    body: JSON.stringify({ feedback }),
    headers: { 'Content-Type': 'application/json' }
  })
    .then(res => res.json())
    .then(json => {
      const sentimentScore =
        json.documents && json.documents[0] ? json.documents[0].score : 0;
      this.setState({ curRating: Math.round((sentimentScore * 10) / 2) });
    });
};

setRating simply updates the curRating state item which is also passed as a prop to FeedbackForm.

handleFeedbackChange sends a request to our serverless function with the user's feedback attached to the body as the payload. When a response returns, we expect a sentiment score if no error occurred. Lastly, in the response callback, we update the curRating state with a value between 0 and 5. We multiply by 10 because CogS returns its score as a number between 0 and 1; dividing by 2 then maps it onto the 5-star scale.
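To make the mapping concrete, here is the arithmetic for one illustrative score:

// e.g. CogS returns a sentiment score of 0.87 for a fairly positive review
// (0.87 * 10) / 2 = 4.35
// Math.round(4.35) = 4, so the UI suggests 4 stars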

What’s Next?

You might have noticed that I did not implement what happens when you submit. I left that one to you; it's less difficult than what we just covered. You need to set up a database of your choice (maybe Cosmos DB, since it integrates well with Functions) and create one more function to send the create request to; a starting point is sketched below.
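As a starting point, the submit handler could mirror handleFeedbackChange and POST both the text and the stars to a second function. This is a minimal sketch; the saveFeedback endpoint is hypothetical and would be backed by the database function described above:

handleFormSubmit = (feedback, rating) => {
  // POST the final feedback and star rating to a hypothetical save endpoint
  fetch('http://localhost:7071/api/saveFeedback', {
    method: 'post',
    body: JSON.stringify({ feedback, rating }),
    headers: { 'Content-Type': 'application/json' }
  });
};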

If you get stuck, you can take a look at the GitHub repo, which has the completed project. You can also have a look at the final demo to play with it and see how things are expected to work.

I strongly suggest you take a look at the Mojifier project. It's a project that uses CogS to suggest emojis based on facial expressions, much like what you learned here but with images and faces.

Conferences with MySQL this week!

As a reminder, here are the shows where you can find MySQL this week:

  1. GIDS - Great Indian Developer Summit, Bangalore, India, Apr 22-26, 2019 - with MySQL at Oracle's booth and a MySQL talk given by Sanjay Manwani on "MySQLaaS & Rapid" in the Data track.
  2. LinuxFest Northwest, Bellingham, US, April 26-28, 2019 - with a MySQL booth in the expo area as well as a talk given by David Stokes on "7 Database Mistakes You Are Making and MySQL 8 Features for Developers".

We are looking forward to seeing and talking to you at the above shows this week!



MySQL Community Contributor Award Program 2019

We would like to announce the MySQL Community Team's 2019 MySQL Contributor Award Program winners. This award started in 2015 to identify and reward the most active community members from the MySQL Community world. The MySQL Community Team had a limited edition of "MySQL Community Contributor" t-shirts printed, which have already been distributed to almost all winners.

As in 2015, the specific criteria for the nominations were set as follows:

  1. The most active code contributor
  2. Bug reporter
  3. The most active MySQL blogger
  4. People who play a very active role in translation/documentation of MySQL articles
  5. People who provide feedback on DMR releases, Labs releases, or change proposals
  6. Anyone in the community who worked their butt off or did really useful work that deserves to be thanked publicly

We are happy to announce that the people below successfully met one or more of the criteria above, were nominated by the MySQL group, and were approved by our committee. Please join us in congratulating them!

  • Laurynas Biveinis for a lot of code contributions & bug submissions
  • René Cannaò for ProxySQL & for his continuous support of it
  • Vin Chen @Tencent Games CROS DBA Team for the Instant Add Columns as a main contribution to MySQL 8.0
  • Daniel Van Eeden, the MySQL ACE for a lot of code contributions in a lot of areas (connectors, the server, etc.), and the detailed feedback for the bugs or issues he found
  • Gabriela D'Ávila Ferrara for interesting blogs about MySQL
  • Bill Karwin for his active role on many MySQL forums (MySQL, Stack Overflow, Quora, &c.), as well as for multiple speeches focusing on all things SQL
  • Alexey Kopytov for the new Sysbench code (the #1 tool for MySQL Performance evaluation)
  • Felix Liang @Tencent Games CROS DBA Team for his active role in ACMUG (Chinese MySQL User Group) and contributions to the Instant Add Column in MySQL 8.0 
  • Tomita Masahiro for supporting MySQL community and for the MySQL Parameters tool
  • Giuseppe Maxia for the amazing feedback provided to us, for his blog posts & for dbdeployer
  • Simon Mudd for his active role in the Madrid MySQL UG, for a number of bugs submitted, for contributing the original logging to syslog, and for other tools such as ps-top
  • Shlomi Noach for delivering open source solutions that solve cases around MySQL, for multiple MySQL talks, and for tooling within the MySQL space such as orchestrator and gh-ost
  • Zsolt Parragi for a lot of code contributions & bug submissions
  • Alex Rubin for extensive testing of our new DD code and for bug submissions
  • Sveta Smirnova for popularizing MySQL Performance Schema and general MySQL troubleshooting, and for promoting new MySQL features
  • Yura Sorokin for a lot of code contributions & bug submissions
  • Sugu Sougoumarane for running Vitess.io and helping the users on multiple places
  • Peter Zaitsev for his active role in MySQL world as well as for his contributions to MySQL Community

Please join us in congratulating everyone above!

Percona Monitoring and Management (PMM) 2.0.0-alpha1 Is Now Available

We are pleased to announce the launch of PMM 2.0.0-alpha1, Percona’s first Alpha release of our long-awaited PMM 2 project! We focused exclusively on MySQL support in our first Alpha (because we wanted to show progress sooner rather than later), and you’ll find updated MySQL dashboards along with enhancements to Query Analytics. We’ve also added better visibility regarding which services are registered with PMM Server, the client-side addition of a new agent called pmm-agent, and finally PMM Server is now exposing an API!

  • Query Analytics
    • Support for large environments – the default view shows all queries from all instances
    • Filtering – display only the results matching filters – MySQL schema name, MySQL server instance
    • Sorting and more columns – now sort by any visible column. Add a column for any field exposed by the data source, for example add rows_examined, lock_time to your Overview
    • Queries source – MySQL PERFORMANCE SCHEMA (slow log coming in our next alpha around May 1st, 2019)
  • Labels – Prometheus now supports auto-discovered and custom labels
  • Inventory Overview Dashboard – Displays the agents, services, and nodes which are registered with PMM Server
  • API – View versions and list hosts using the API
  • pmm-agent – Provides secure remote management of the exporter processes and data collectors on the client

PMM 2 is still a work in progress – expect to see bugs and missing features! We are aware of a number of issues, but please report any and all that you find to Percona's JIRA.

Query Analytics Dashboard

The Query Analytics Dashboard now defaults to displaying all queries on each of the systems configured for MySQL PERFORMANCE_SCHEMA, Slow Log, and MongoDB Profiler (this release includes support for MySQL PERFORMANCE SCHEMA only), and includes comprehensive filtering capabilities.

Query Analytics Overview

You’ll recognize some of the common elements in PMM 2 Query Analytics such as the Load, Count, and Latency columns, however there are new elements such as the filter box and more arrows on the columns which will be described further down:

Query Detail

Query Analytics continues to deliver detailed information regarding individual query performance:

Filter and Search By

Use the filtering panel on the left, or the Search By bar, to set filters using key:value syntax. For example, if I'm interested in just the queries executed in MySQL schema db3, I can type d_schema:db3:

Sort by any column

This is a much-requested feature from PMM Query Analytics, and we're glad to announce that you can sort by any column! Just click the small arrow to the right of the column name and the view will re-sort.

Add extra columns

Now you can add a column for each additional field which is exposed by the data source. For example you can add Rows Examined by clicking the + sign and typing or selecting from the available list of fields:

Labels

An important concept we’re introducing in PMM 2 is that when a label is assigned it is persisted in both the Metrics (Prometheus) and Query Analytics (Clickhouse) databases.  So when you browse a target in Prometheus you’ll notice many more labels appear – particularly the auto-discovered (replication_set, environment, node_name, etc.) and (soon to be released) custom labels via custom_label.

Inventory Dashboard

We’ve introduced a new dashboard with several tabs so that users are better able to understand which nodes, agents, and services are registered against PMM Server.  We have an established hierarchy with Node at the top, then Service and Agents assigned to a Node.

  • Nodes – Where the service and agents will run. Assigned a node_id, associated with a machine_id (from /etc/machine-id)
    • Examples: bare metal, virtualized, container
  • Services – Individual service names and where they run, against which agents will be assigned. Each instance of a service gets a service_id value that is related to a node_id
    • Examples: MySQL, Amazon Aurora MySQL
    • You can also use this feature to support multiple mysqld instances on a single node, for example: mysql1-3306, mysql1-3307
  • Agents – Each binary (exporter, agent) running on a client will get an agent_id value
    • pmm-agent is the top of the tree, assigned to a node_id
    • node_exporter is assigned to pmm-agent agent_id
    • mysqld_exporter & QAN MySQL Perfschema are assigned to a service_id
    • Examples: pmm-agent, node_exporter, mysqld_exporter, QAN MySQL Perfschema

You can now see which services, agents, and nodes are registered with PMM Server.

Nodes

In this example I have PMM Server (docker) running on the same virtualized compute instance as my Percona Server 5.7 instance, so PMM treats this as two different nodes.

Services

This example has two MySQL services configured:

Agents

For a monitored Percona Server instance, you’ll see an agent for each of:

  1. pmm-agent
  2. node_exporter
  3. mysqld_exporter
  4. QAN Perfschema


Query Analytics Filters

Query Analytics now provides you with the opportunity to filter based on labels. We are beginning with labels sourced from MySQL Performance Schema, but eventually this will include all fields from the MySQL Slow Log, MongoDB Profiler, and PostgreSQL views. We'll also be offering the ability to set custom key:value pairs, which you'll use when setting up a new service or instance with pmm-admin during the add ... routine.

Available Filters

We're exposing four new filters in this release; the table below shows where we source them from and what they mean:

Filter name     Source                            Notes
d_client_host   MySQL Slow Log                    MySQL PERFORMANCE_SCHEMA doesn't include client host, so this field will be empty
d_username      MySQL Slow Log                    MySQL PERFORMANCE_SCHEMA doesn't include username, so this field will be empty
d_schema        MySQL Slow Log, MySQL Perfschema  MySQL schema name
d_server        MySQL Slow Log, MySQL Perfschema  MySQL server instance


API

We are exposing an API for PMM Server! You can view versions, list hosts, and more!

The API is not guaranteed to work until we get to our GA release – so be prepared for breaking changes during our Alpha and Beta releases.

Browse the API using Swagger at /swagger
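For example, you can confirm that the endpoint responds from the command line; the server address is a placeholder, and -k skips TLS verification for the self-signed certificate:

$ curl -k https://<pmm-server-address>/swagger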


Installation and configuration

Install PMM Server with docker

The easiest way to install PMM Server is to deploy it with Docker. You can run a PMM 2 Docker container with PMM Server by using the following commands (note the version tag of 2.0.0-alpha1):

docker create -v /srv --name pmm-data-2-0-0-alpha1 perconalab/pmm-server:2.0.0-alpha1 /bin/true
docker run -d -p 80:80 -p 443:443 --volumes-from pmm-data-2-0-0-alpha1 --name pmm-server-2.0.0-alpha1 --restart always perconalab/pmm-server:2.0.0-alpha1

Install PMM Client

Since PMM 2 is still not GA, you’ll need to leverage our experimental release of the Percona repository. You’ll need to download and install the official percona-release package from Percona, and use it to enable the Percona experimental component of the original repository. Specific instructions for a Debian system are as follows:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb

Now enable the correct repo:

sudo percona-release disable all
sudo percona-release enable original experimental

Now install the pmm2-client package:

apt-get update
apt-get install pmm2-client

See percona-release official documentation for details.

Please note that having experimental packages enabled may lead to installing further package versions that are not ready for production. To avoid this, disable the component with the following commands:

sudo percona-release disable original experimental sudo apt-get update

Configure PMM

Once PMM Client is installed, run the pmm-agent setup command with your PMM Server IP address to register your Node with the Server:

# pmm-agent setup --server-insecure-tls --server-address=<IP Address>:443

We will be moving this functionality back to pmm-admin config in a subsequent Alpha release.

You should see the following:

Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.

You then add MySQL services (Metrics and Query Analytics) with the following command:

# pmm-admin add mysql --use-perfschema --username=pmm --password=pmm

where the username and password are the credentials for accessing the monitored MySQL instance; they will be used locally on the database host.

After this you can view MySQL metrics or examine the added node on the new PMM Inventory Dashboard:

You can then check your MySQL dashboards and Query Analytics in order to view your server’s performance information!

We hope you enjoy this release, and we welcome your comments on the blog!

About PMM

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL®, MongoDB®, and PostgreSQL performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL®, MongoDB®, and PostgreSQL® servers to ensure that your data works as efficiently as possible.

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Getting started with the MySQL Operator for Kubernetes

Kubernetes Operators are amazing, and they are already playing an important role for those who manage large-scale applications. I personally think that, eventually, we will manage all applications using Operators.

In this tutorial I will show you how to set up a MySQL cluster using the MySQL Operator on Kubernetes.

Prerequisites:

  1. Install kubectl
  2. Install Helm
  3. Kubernetes Cluster: Use minikube locally, or launch your own Kubernetes Cluster in 5–10 minutes on Oracle Cloud

Let’s start with cloning the MySQL repository which contains the Operator.

$ git clone git@github.com:oracle/mysql-operator.git

Now enter the directory.

$ cd mysql-operator

Make sure your Helm repos are up to date.

$ helm repo update

Let’s create a namespace where the operator will be installed.

$ kubectl create ns mysql-operator

Install the mysql operator.

$ helm install --name mysql-operator mysql-operator

You should see an output similar to this:

NAME: mysql-operator
LAST DEPLOYED: Tue Apr 23 15:48:53 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/CustomResourceDefinition
NAME                                    AGE
mysqlbackupschedules.mysql.oracle.com   4s
mysqlclusters.mysql.oracle.com          4s
mysqlbackups.mysql.oracle.com           4s
mysqlrestores.mysql.oracle.com          4s

==> v1beta1/ClusterRole
mysql-operator  4s
mysql-agent     4s

==> v1beta1/ClusterRoleBinding
NAME            AGE
mysql-operator  3s
mysql-agent     3s

==> v1beta1/Deployment
NAME            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mysql-operator  1        1        1           0          3s

==> v1/Pod(related)
NAME                           READY  STATUS             RESTARTS  AGE
mysql-operator-d99c84c9-sldb7  0/1    ContainerCreating  0         3s

==> v1/ServiceAccount
NAME            SECRETS  AGE
mysql-agent     1        5s
mysql-operator  1        4s

NOTES:
Thanks for installing the MySQL Operator. Check if the operator is running with kubectl -n mysql-operator get po

As the output states, we can check the status of the operator. Notice that because we installed it in the namespace called mysql-operator, we are using the -n flag.

$ kubectl -n mysql-operator get po
NAME                            READY   STATUS    RESTARTS   AGE
mysql-operator-d99c84c9-sldb7   1/1     Running   0          2m

Our Operator is up and running.

Let’s create a new namespace where we will install the MySQL Cluster:

$ kubectl create ns mysql-cluster

Since we are creating the MySQL Cluster in a new namespace, which we have named "mysql-cluster", we need to create the mysql-agent ServiceAccount and RoleBinding in that namespace.

If you decide to name your namespace something else make sure to change it below also.

Let's create the mysql-agent ServiceAccount and RoleBinding:

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysql-agent
  namespace: mysql-cluster
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: mysql-agent
  namespace: mysql-cluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mysql-agent
subjects:
- kind: ServiceAccount
  name: mysql-agent
  namespace: mysql-cluster
EOF

You should see an output similar to this:

serviceaccount "mysql-agent" created
rolebinding "mysql-agent" created

Create a cluster.yaml file with a database name and namespace. I have named mine “my-first-db” and I’m using the namespace “mysql-cluster”.

apiVersion: mysql.oracle.com/v1alpha1
kind: Cluster
metadata:
  name: my-first-db
  namespace: mysql-cluster

Now let’s create the MySQL Cluster.

$ kubectl apply -f cluster.yaml
mysqlcluster "my-first-db" created

Notice below that, because we have installed the mysql-operator, Kubernetes now has a new resource called "mysqlclusters". Similar to how we write kubectl get pods, we can now write kubectl get mysqlclusters.

We are still using the -n flag because we installed the mysql cluster in the namespace called “mysql-cluster”. I should perhaps have picked a different name to avoid confusing it with the resource name.

Let’s check our mysql clusters:

$ kubectl -n mysql-cluster get mysqlclusters
NAME AGE
my-first-db 32s

We can see that we have one mysql-cluster running.

Let’s check how many pods are running:

$ kubectl -n mysql-cluster get pods
NAME READY STATUS RESTARTS AGE
my-first-db-0 2/2 Running 0 5m
my-first-db-1 2/2 Running 0 5m
my-first-db-2 2/2 Running 0 5m

As you can see, our cluster consists of three pods. By default the cluster has three members; you can specify otherwise by adding a members count under spec in the cluster.yaml file, as sketched below. There are also other options, but I will cover those in a different blog post.
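For example, a cluster.yaml for a five-member cluster might look like this; it is a sketch that simply adds the members field mentioned above to the manifest we used earlier:

apiVersion: mysql.oracle.com/v1alpha1
kind: Cluster
metadata:
  name: my-first-db
  namespace: mysql-cluster
spec:
  members: 5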

Let's get the root password for our new database. Notice that the secrets and configs automatically created by the operator are prefixed with the database name we entered above in the cluster.yaml file.

$ kubectl -n mysql-cluster get secret my-first-db-root-password -o jsonpath="{.data.password}" | base64 --decode
FDlQQxPCVdMZ6lAt

Let’s create a temporary container that contains the MySQL client to test that we can connect to the newly created database.

Notice we are creating the container in the same namespace. If we don't, we won't be able to use the hostname to reach the database we created.

$ kubectl -n mysql-cluster run mysql-client --image=mysql:5.7 -it --rm --restart=Never -- /bin/bash
If you don't see a command prompt, try pressing enter.
root@mysql-client:/#

Replace the password below with the password generated above.

$ mysql -h my-first-db -uroot -pFDlQQxPCVdMZ6lAt -e 'SELECT 1'
mysql: [Warning] Using a password on the command line interface can be insecure.
+---+
| 1 |
+---+
| 1 |
+---+

Congratulations. You have installed the MySQL Operator, created a MySQL Cluster and connected to it.

If you want to give this a try on Oracle Cloud, sign up for a free trial and launch Kubernetes in a few minutes.

In case you run into problems or need advice on setting up a production-grade system, feel free to reach out to me on Twitter or LinkedIn.
