Planet MySQL

MySQL Connector/J 8.0.13 has been released

Dear MySQL users,

Version 8.0.13 is the latest General Availability release of the 8.0
series of MySQL Connector/J. It is suitable for use with MySQL Server
versions 8.0, 5.7, 5.6, and 5.5. It supports the Java Database
Connectivity (JDBC) 4.2 API, and implements the X DevAPI.

This release includes the following new features and changes, also
described in more detail on

https://dev.mysql.com/doc/relnotes/connector-j/8.0/en/news-8-0-13.html

As always, we recommend that you check the “CHANGES” file in the
download archive to be aware of changes in behavior that might affect
your application.

To download MySQL Connector/J 8.0.13 GA, see the “Generally Available
(GA) Releases” tab at http://dev.mysql.com/downloads/connector/j/

Enjoy!

Changes in MySQL Connector/J 8.0.13 (2018-10-22, General Availability)

Functionality Added or Changed

  * Important Change: Connector/J now requires Protocol Buffers 3.6.1 as an external library for using X DevAPI and for building Connector/J from source. See Connector/J Installation (http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-installing.html) for the installation requirements for Connector/J. (Bug #28499094)

  * X DevAPI: X DevAPI now provides a connection pooling feature, which can reduce overhead for applications by allowing idle connections to be reused. Connection pools are managed by the new Client objects, from which sessions can be obtained. See Connecting to a Single MySQL Server Using Connection Pooling in the X DevAPI User Guide (http://dev.mysql.com/doc/x-devapi-userguide/en/) for details.

  * X DevAPI: A new connection property, xdevapi.connect-timeout, now defines the timeout (in milliseconds) for establishing an X-Protocol connection to the server. The default value is 10000 (10s), and a value of 0 disables the timeout, which makes Connector/J wait for the underlying socket to time out instead. See Configuration Properties (http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-configuration-properties.html) for details. Note that if xdevapi.connect-timeout is not set explicitly and connectTimeout is, xdevapi.connect-timeout takes up the value of connectTimeout.

  * The connection property useOldUTF8Behavior is no longer supported. The property never had any meaning for the MySQL Server versions supported by Connector/J 8.0, and actually corrupted data when used with them. (Bug #28444461)

  * Connector/J now translates the legacy value of convertToNull for the connection property zeroDateTimeBehavior to CONVERT_TO_NULL. This allows applications or frameworks that use the legacy value (for example, NetBeans) to work with Connector/J 8.0. (Bug #28246270, Bug #91421)

  * A new connection property, sslMode, has been introduced to replace the connection properties useSSL, requireSSL, and verifyServerCertificate, which are now deprecated. Also, when not explicitly set, the connection properties xdevapi.ssl-mode, xdevapi.ssl-truststore, xdevapi.ssl-truststore-password, and xdevapi.ssl-truststore-type now take up the values of sslMode, trustCertificateKeyStoreUrl, trustCertificateKeyStorePassword, and trustCertificateKeyStoreType, respectively. See Connecting Securely Using SSL (http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-using-ssl.html) and Configuration Properties (http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-configuration-properties.html) for details. Note that for ALL server versions, the default setting of sslMode is PREFERRED, which is equivalent to the legacy settings of useSSL=true, requireSSL=false, and verifyServerCertificate=false; these differ from their default settings for Connector/J 8.0.12 and earlier in some situations. Applications that continue to use the deprecated properties and rely on their old default settings should be reviewed. (Bug #27102307) A hypothetical URL example follows these notes.

  * The value UTF-8 for the connection property characterEncoding now maps to the utf8mb4 character set on the server and, for MySQL Server 5.5.2 and later, characterEncoding=UTF-8 can now be used to set the connection character set to utf8mb4 even if character_set_server has been set to something else on the server. (Before this change, the server had to have character_set_server=utf8mb4 for Connector/J to use that character set.) Also, if the connection property connectionCollation is also set and is incompatible with the value of characterEncoding, characterEncoding is overridden with the encoding corresponding to connectionCollation. See Using Character Sets and Unicode (http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-charsets.html) for details, including how to use the utf8mb3 character set now for connection. (Bug #23227334, Bug #81196)

Bugs Fixed

  * X DevAPI: Connector/J threw a WrongArgumentException when it encountered a JSON number with more than ten digits. This was due to an error in the JSON parser, which has now been fixed. (Bug #28594434, Bug #92264)

  * X DevAPI: Session.getUri() threw a NullPointerException when the default value was null for any of the connection properties contained in the connection URL; and when Session.getUri() returned a URL, the URL contained a comma (",") before its first connection property. (Bug #23045604)

  * X DevAPI: When handling an invalid JSON document, Connector/J threw a NullPointerException. With this fix, a WrongArgumentException is thrown instead in that situation. (Bug #21914769)

  * Setting the connection property characterEncoding to an encoding that maps to the MySQL character set latin1 or utf8mb4 did not result in the corresponding default connection collation (latin1_swedish_ci or utf8mb4_0900_ai_ci, respectively) being used on the server. With this fix, the server default is used in that situation. (Bug #28207422)

  * Calling UpdatableResultSet.updateClob() resulted in an SQLFeatureNotSupportedException. The implementation of the method was missing from Connector/J; it has been added with this fix. (Bug #28207088)

  * When a connection property's value contained an equal sign ("="), an exception ("WrongArgumentException: Malformed database URL") was thrown. This was due to an error in the parser for the connection URL, which has been corrected by this fix. (Bug #28150662)

  * Connector/J threw an SQLSyntaxErrorException when the parameter tableName for DatabaseMetaDataUsingInfoSchema.getTables() had a null argument. (Bug #28034570, Bug #90887)

  * Setting rewriteBatchedStatements=true and useLocalTransactionState=true caused transactions to be left uncommitted for batched UPDATE and DELETE statements. This was due to the intermediate queries for enabling multiquery support on the server resetting the local transaction state as a side effect. With this fix, the local transaction state is preserved when the intermediate queries are executed. (Bug #27658489, Bug #89948)

  * Rewriting prepared INSERT statements in a multiquery batch failed with a BatchUpdateException when the statements did not contain placeholders. This was due to a faulty mechanism for query rewriting, which has been corrected by this fix. (Bug #25501750, Bug #84813)

  * When using batched prepared statements with multiple queries per statement, query rewriting was incorrect, resulting in the wrong queries being sent to the server. (Bug #23098159, Bug #81063)

  * Record updates failed for a scrollable and updatable PreparedStatement when the WHERE clause for the updater or refresher contained fractional timestamp values and the connection property sendFractionalSeconds was set to false. This was because Connector/J did not perform the proper adjustments of the fractional parts in the WHERE clause values according to the length of the field's fractional part as defined in the database. This fix makes Connector/J perform the proper adjustment to the fractional part, so that the WHERE clause value can be properly compared to the value fetched from the database. (Bug #22305979)

  * Some tests in the test suite failed as they could not recognize system time zone values like CEST or WEST, even with the connection property serverTimezone set. This was because the value of serverTimezone in the test suite URLs, after being processed by the test suite, was not actually propagated as a connection property to Connector/J. This fix makes sure the property is in the actual URLs passed to Connector/J. (Bug #21774249)

  * When a Java Date value was bound to a PreparedStatement parameter, attempts to format the value with a proleptic GregorianCalendar failed to make the dates proleptic, so that dates before the Julian-Gregorian cutover (October 15, 1582) were stored wrongly. With this fix, a proleptic calendar is properly used if supplied to the setDate() method. Note that when trying to set or retrieve dates before the Julian-Gregorian cutover with PreparedStatement methods, a proleptic GregorianCalendar should always be explicitly supplied to the setDate() and getDate() methods. For details, see Known Issues and Limitations (http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-usagenotes-known-issues-limitations.html). (Bug #18749544, Bug #72609)
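For illustration only (not part of the release notes): a hypothetical JDBC URL combining several of the properties discussed above. The host, database, truststore path, and passwords are placeholders, and the VERIFY_CA value is an assumption following MySQL's usual ssl-mode naming:

jdbc:mysql://db.example.com:3306/test?sslMode=VERIFY_CA&trustCertificateKeyStoreUrl=file:/path/to/truststore.jks&trustCertificateKeyStorePassword=changeit&characterEncoding=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL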

Enjoy and thanks for the support!

On behalf of the MySQL Release Team,
Nawaz Nazeer Ahamed

MySQL Connector/Python 8.0.13 has been released

Dear MySQL users,

MySQL Connector/Python 8.0.13 is the third GA release version of the
MySQL Connector/Python 8.0 series. This series adds support for Python
3.7. The X DevAPI enables application developers to write code that
combines the strengths of the relational and document models using a
modern, NoSQL-like syntax that does not assume previous experience
writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Python, and its
usage, see http://dev.mysql.com/doc/dev/connector-python.

For general documentation about how to get started using MySQL
as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

To download MySQL Connector/Python 8.0.13, see the “Generally Available
(GA) Releases” tab at http://dev.mysql.com/downloads/connector/python/

Enjoy!

Changes in MySQL Connector/Python 8.0.13 (2018-10-22, General Availability)

Functionality Added or Changed

  * Added Python 3.7 support. (Bug #27081809, Bug #87818)

  * To go with the existing mysqlx.get_session(conn_str) method, a new mysqlx.get_client(conn_str, options) method was added that creates a connection pool handler providing a get_session() method to create and retrieve connections from the pool. The connection pooling options are:

      + enabled: enables or disables connection pooling; boolean, defaults to true.

      + max_size: maximum number of connections available in the pool; positive integer, defaults to 25.

      + max_idle_time: maximum number of milliseconds a connection can be idle in the queue before being closed; integer >= 0, defaults to 0 (infinite).

      + queue_timeout: maximum number of milliseconds a request will wait for a connection to become available; integer >= 0, defaults to 0 (infinite). This is different from connect_timeout, which is used for non-pooling connections. In a pooling scenario there are already connections in the pool, so queue_timeout controls how long to wait for a connection from the pool.

    Example usage:

      client = mysqlx.get_client(
          {'host': 'localhost', 'port': 33060,
           'user': 'mike', 'password': 'password'},
          {'pooling': {'enabled': True, 'max_idle_time': 5000,
                       'max_size': 25, 'queue_timeout': 20000}})

    Closing a session attached to the pool makes the connection available in the pool for subsequent get_session() calls, while closing (destroying) the pool effectively closes all server connections. (A fuller usage sketch follows this changelog.)

  * Added a connect-timeout query parameter. This defines the length of time (in milliseconds) the client waits for a MySQL server to become available at the given network addresses. It was added to both the mysqlx.get_session() (non-pooling sessions) and mysqlx.get_client() (pooling sessions) interfaces. The option defaults to 10000 (10 seconds). The value 0 disables the timeout, so the client will wait until the underlying socket (platform dependent) times out. Example usages:

      mysqlx.get_session("root@localhost?connect-timeout=0")
      mysqlx.get_session("root@[localhost:33060, 127.0.0.1:33060]?connect-timeout=5000")

    In a multi-host scenario, the connect-timeout value applies to each individual host.

Bugs Fixed

  * On Windows, the 32-bit MSI failed to install. The registry key path was updated to allow the CEXT prerequisite check to execute and pass. (Bug #28395599, Bug #28464866)

  * Subsequent collection.add() method calls would leak memory if the C extension was enabled. (Bug #28278352)

  * Missing bind() parameters could cause an unclear error message or an unexpected halt. (Bug #28037275)

  * The username and password fields are now quoted to allow special characters when making X DevAPI connections. (Bug #27528819, Bug #89614)
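For a quick end-to-end illustration of the new pooling API, here is a minimal sketch; the host, port, and credentials are placeholders, and it assumes a server with the X Plugin listening on port 33060:

import mysqlx

# Create a pool of up to 10 sessions; idle connections are closed after 30s.
client = mysqlx.get_client(
    {"host": "localhost", "port": 33060,
     "user": "mike", "password": "secret"},
    {"pooling": {"enabled": True, "max_size": 10,
                 "max_idle_time": 30000, "queue_timeout": 5000}})

session = client.get_session()   # borrow a connection from the pool
print(session.sql("SELECT VERSION()").execute().fetch_all())
session.close()                  # returns the connection to the pool

client.close()                   # destroys the pool and all server connections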

Enjoy and thanks for the support!

On behalf of the MySQL Release Team,
Nawaz Nazeer Ahamed

MySQL Workbench 8.0.13 has been released

Dear MySQL users,

The MySQL developer tools team announces 8.0.13 as the generally available (GA) release for MySQL Workbench 8.0.

For the full list of changes in this revision, visit:
http://dev.mysql.com/doc/relnotes/workbench/en/changes-8-0.html

For discussion, join the MySQL Workbench Forums:
http://forums.mysql.com/index.php?152

The release is now available in source and binary form for a number of platforms from our download pages at:
http://dev.mysql.com/downloads/tools/workbench/

Enjoy!

One Billion Tables in MySQL 8.0 with ZFS

The short version

I created > one billion InnoDB tables in MySQL 8.0 (tables, not rows) just for fun. Here is the proof:

$ mysql -A
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1425329
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select count(*) from information_schema.tables;
+------------+
| count(*)   |
+------------+
| 1011570298 |
+------------+
1 row in set (6 hours 57 min 6.31 sec)

Yes, it took 6 hours and 57 minutes to count them all!

Why does anyone need one billion tables?

In my previous blog post, I created and tested MySQL 8.0 with 40 million tables (that was a real case study). The One Billion Tables project is not a real-world scenario, however. I was challenged by the Billion Tables Project (BTP) in PostgreSQL, and decided to repeat it with MySQL, creating 1 billion InnoDB tables.

As an aside: I think MySQL 8.0 is the first MySQL version where creating 1 billion InnoDB tables is even practically possible.

Challenges with one billion InnoDB tables Disk space

The first, and one of the most important, challenges is disk space. InnoDB allocates data pages on disk when creating .ibd files. Without disk-level compression we need > 25TB of disk space. The good news: we have ZFS, which provides transparent disk compression. Here’s how the disk utilization looks:

Actual data (apparent-size):

# du -sh --apparent-size /mysqldata/
26T     /mysqldata/

Compressed data:

# du -sh /mysqldata/
2.4T    /mysqldata/

Compression ratio:

# zfs get compression,compressratio
...
mysqldata/mysql/data  compressratio  7.14x  -
mysqldata/mysql/data  compression    gzip   inherited from mysqldata/mysql

(The reported compression ratio looks not quite correct: 26T / 2.4T gives ~10.8x, so we expect a ~10x compression ratio rather than the 7.14x shown.)

Too many tiny files

This is usually the big issue with databases that create a file per table. With MySQL 8.0 we can create a shared tablespace and “assign” a table to it. I created a tablespace per database, and created 1000 tables in each database.

The result:

mysql> select count(*) from information_schema.schemata;
+----------+
| count(*) |
+----------+
|  1011575 |
+----------+
1 row in set (1.31 sec)

Creating tables

Another big challenge is creating tables fast enough that the whole exercise does not take months. I used three approaches:

  1. Disabled all possible consistency checks in MySQL, and decreased the InnoDB page size to 4K (these config options are NOT for production use).
  2. Created tables in parallel: as the mutex contention bug in MySQL 8.0 has been fixed, creating tables in parallel works fine.
  3. Used local NVMe cards on top of an AWS EC2 i3.8xlarge instance.

my.cnf config file (I repeat: do not use this in production):

[mysqld]
default-authentication-plugin = mysql_native_password
performance_schema=0
datadir=/mysqldata/mysql/data
socket=/mysqldata/mysql/data/mysql.sock
log-error = /mysqldata/mysql/log/error.log
skip-log-bin=1
innodb_log_group_home_dir = /mysqldata/mysql/log/
innodb_doublewrite = 0
innodb_checksum_algorithm=none
innodb_log_checksums=0
innodb_flush_log_at_trx_commit=0
innodb_log_file_size=2G
innodb_buffer_pool_size=100G
innodb_page_size=4k
innodb_flush_method=nosync
innodb_io_capacity_max=20000
innodb_io_capacity=5000
innodb_buffer_pool_instances=32
innodb_stats_persistent = 0
tablespace_definition_cache = 524288
schema_definition_cache = 524288
table_definition_cache = 524288
table_open_cache=524288
table_open_cache_instances=32
open-files-limit=1000000

ZFS pool:

# zpool status
  pool: mysqldata
 state: ONLINE
  scan: scrub repaired 0B in 1h49m with 0 errors on Sun Oct 14 02:13:17 2018
config:

        NAME        STATE     READ WRITE CKSUM
        mysqldata   ONLINE       0     0     0
          nvme0n1   ONLINE       0     0     0
          nvme1n1   ONLINE       0     0     0
          nvme2n1   ONLINE       0     0     0
          nvme3n1   ONLINE       0     0     0

errors: No known data errors

A simple “deploy” script to create tables in parallel (includes the sysbench table structure):

#!/bin/bash

function do_db {
    db_exist=$(mysql -A -s -Nbe "select 1 from information_schema.schemata where schema_name = '$db'")
    if [ "$db_exist" == "1" ]; then echo "Already exists: $db"; return 0; fi
    tbspace="create database $db; use $db; CREATE TABLESPACE $db ADD DATAFILE '$db.ibd' engine=InnoDB"
    #echo "Tablespace $db.ibd created!"
    tables=""
    for i in {1..1000}
    do
        # Each CREATE TABLE string already ends with ";", so plain
        # concatenation is enough here.
        table="CREATE TABLE sbtest$i (
            id int(10) unsigned NOT NULL AUTO_INCREMENT,
            k int(10) unsigned NOT NULL DEFAULT '0',
            c varchar(120) NOT NULL DEFAULT '',
            pad varchar(60) NOT NULL DEFAULT '',
            PRIMARY KEY (id),
            KEY k_1 (k)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 tablespace $db;"
        tables="$tables $table"
    done
    echo "$tbspace;$tables" | mysql
}

c=0
echo "starting..."
c=$(mysql -A -s -Nbe "select max(cast(SUBSTRING_INDEX(schema_name, '_', -1) as unsigned)) from information_schema.schemata where schema_name like 'sbtest_%'")

for m in {1..100000}
do
    echo "m=$m"
    for i in {1..30}
    do
        let c=$c+1
        echo $c
        db="sbtest_$c"
        do_db &
    done
    wait
done

How fast did we create tables? Here are some stats:

# mysqladmin -i 10 -r ex | grep Com_create_table
...
| Com_create_table | 6497 |
| Com_create_table | 6449 |

So we created ~650 tables per second: the counters above are deltas over 10-second intervals.

Counting the tables

It took > 6 hours to do “count(*) from information_schema.tables”! Here is why:

  1. MySQL 8.0 uses a new data dictionary (this is great as it avoids creating 1 billion frm files). Everything is stored in this file:
    # ls -lah /mysqldata/mysql/data/mysql.ibd
    -rw-r----- 1 mysql mysql 6.1T Oct 18 15:02 /mysqldata/mysql/data/mysql.ibd
  2. information_schema.tables is actually a view:

mysql> show create table information_schema.tables\G
*************************** 1. row ***************************
                View: TABLES
         Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`mysql.infoschema`@`localhost` SQL SECURITY DEFINER VIEW `information_schema`.`TABLES` AS select `cat`.`name` AS `TABLE_CATALOG`,`sch`.`name` AS `TABLE_SCHEMA`,`tbl`.`name` AS `TABLE_NAME`,`tbl`.`type` AS `TABLE_TYPE`,if((`tbl`.`type` = 'BASE TABLE'),`tbl`.`engine`,NULL) AS `ENGINE`,if((`tbl`.`type` = 'VIEW'),NULL,10) AS `VERSION`,`tbl`.`row_format` AS `ROW_FORMAT`,internal_table_rows(`sch`.`name`,`tbl`.`name`,if(isnull(`tbl`.`partition_type`),`tbl`.`engine`,''),`tbl`.`se_private_id`,(`tbl`.`hidden` <> 'Visible'),`ts`.`se_private_data`,coalesce(`stat`.`table_rows`,0),coalesce(cast(`stat`.`cached_time` as unsigned),0)) AS `TABLE_ROWS`,internal_avg_row_length(`sch`.`name`,`tbl`.`name`,if(isnull(`tbl`.`partition_type`),`tbl`.`engine`,''),`tbl`.`se_private_id`,(`tbl`.`hidden` <> 'Visible'),`ts`.`se_private_data`,coalesce(`stat`.`avg_row_length`,0),coalesce(cast(`stat`.`cached_time` as unsigned),0)) AS `AVG_ROW_LENGTH`,internal_data_length(`sch`.`name`,`tbl`.`name`,if(isnull(`tbl`.`partition_type`),`tbl`.`engine`,''),`tbl`.`se_private_id`,(`tbl`.`hidden` <> 'Visible'),`ts`.`se_private_data`,coalesce(`stat`.`data_length`,0),coalesce(cast(`stat`.`cached_time` as unsigned),0)) AS `DATA_LENGTH`,internal_max_data_length(`sch`.`name`,`tbl`.`name`,if(isnull(`tbl`.`partition_type`),`tbl`.`engine`,''),`tbl`.`se_private_id`,(`tbl`.`hidden` <> 'Visible'),`ts`.`se_private_data`,coalesce(`stat`.`max_data_length`,0),coalesce(cast(`stat`.`cached_time` as unsigned),0)) AS `MAX_DATA_LENGTH`,internal_index_length(`sch`.`name`,`tbl`.`name`,if(isnull(`tbl`.`partition_type`),`tbl`.`engine`,''),`tbl`.`se_private_id`,(`tbl`.`hidden` <> 'Visible'),`ts`.`se_private_data`,coalesce(`stat`.`index_length`,0),coalesce(cast(`stat`.`cached_time` as unsigned),0)) AS `INDEX_LENGTH`,internal_data_free(`sch`.`name`,`tbl`.`name`,if(isnull(`tbl`.`partition_type`),`tbl`.`engine`,''),`tbl`.`se_private_id`,(`tbl`.`hidden` <> 'Visible'),`ts`.`se_private_data`,coalesce(`stat`.`data_free`,0),coalesce(cast(`stat`.`cached_time` as unsigned),0)) AS `DATA_FREE`,internal_auto_increment(`sch`.`name`,`tbl`.`name`,if(isnull(`tbl`.`partition_type`),`tbl`.`engine`,''),`tbl`.`se_private_id`,(`tbl`.`hidden` <> 'Visible'),`ts`.`se_private_data`,coalesce(`stat`.`auto_increment`,0),coalesce(cast(`stat`.`cached_time` as unsigned),0),`tbl`.`se_private_data`) AS `AUTO_INCREMENT`,`tbl`.`created` AS `CREATE_TIME`,internal_update_time(`sch`.`name`,`tbl`.`name`,if(isnull(`tbl`.`partition_type`),`tbl`.`engine`,''),`tbl`.`se_private_id`,(`tbl`.`hidden` <> 'Visible'),`ts`.`se_private_data`,coalesce(cast(`stat`.`update_time` as unsigned),0),coalesce(cast(`stat`.`cached_time` as unsigned),0)) AS `UPDATE_TIME`,internal_check_time(`sch`.`name`,`tbl`.`name`,if(isnull(`tbl`.`partition_type`),`tbl`.`engine`,''),`tbl`.`se_private_id`,(`tbl`.`hidden` <> 'Visible'),`ts`.`se_private_data`,coalesce(cast(`stat`.`check_time` as unsigned),0),coalesce(cast(`stat`.`cached_time` as unsigned),0)) AS `CHECK_TIME`,`col`.`name` AS `TABLE_COLLATION`,internal_checksum(`sch`.`name`,`tbl`.`name`,if(isnull(`tbl`.`partition_type`),`tbl`.`engine`,''),`tbl`.`se_private_id`,(`tbl`.`hidden` <> 'Visible'),`ts`.`se_private_data`,coalesce(`stat`.`checksum`,0),coalesce(cast(`stat`.`cached_time` as unsigned),0)) AS `CHECKSUM`,if((`tbl`.`type` = 'VIEW'),NULL,get_dd_create_options(`tbl`.`options`,if((ifnull(`tbl`.`partition_expression`,'NOT_PART_TBL') = 'NOT_PART_TBL'),0,1))) AS `CREATE_OPTIONS`,internal_get_comment_or_error(`sch`.`name`,`tbl`.`name`,`tbl`.`type`,`tbl`.`options`,`tbl`.`comment`) AS `TABLE_COMMENT` from (((((`mysql`.`tables` `tbl` join `mysql`.`schemata` `sch` on((`tbl`.`schema_id` = `sch`.`id`))) join `mysql`.`catalogs` `cat` on((`cat`.`id` = `sch`.`catalog_id`))) left join `mysql`.`collations` `col` on((`tbl`.`collation_id` = `col`.`id`))) left join `mysql`.`tablespaces` `ts` on((`tbl`.`tablespace_id` = `ts`.`id`))) left join `mysql`.`table_stats` `stat` on(((`tbl`.`name` = `stat`.`table_name`) and (`sch`.`name` = `stat`.`schema_name`)))) where (can_access_table(`sch`.`name`,`tbl`.`name`) and is_visible_dd_object(`tbl`.`hidden`))
character_set_client: utf8
collation_connection: utf8_general_ci

and the explain plan looks like this:

mysql> explain select count(*) from information_schema.tables \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: cat
   partitions: NULL
         type: index
possible_keys: PRIMARY
          key: name
      key_len: 194
          ref: NULL
         rows: 1
     filtered: 100.00
        Extra: Using index
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
   partitions: NULL
         type: ALL
possible_keys: schema_id
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 1023387060
     filtered: 100.00
        Extra: Using where; Using join buffer (Block Nested Loop)
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: sch
   partitions: NULL
         type: eq_ref
possible_keys: PRIMARY,catalog_id
          key: PRIMARY
      key_len: 8
          ref: mysql.tbl.schema_id
         rows: 1
     filtered: 11.11
        Extra: Using where
*************************** 4. row ***************************
           id: 1
  select_type: SIMPLE
        table: stat
   partitions: NULL
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 388
          ref: mysql.sch.name,mysql.tbl.name
         rows: 1
     filtered: 100.00
        Extra: Using index
*************************** 5. row ***************************
           id: 1
  select_type: SIMPLE
        table: ts
   partitions: NULL
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 8
          ref: mysql.tbl.tablespace_id
         rows: 1
     filtered: 100.00
        Extra: Using index
*************************** 6. row ***************************
           id: 1
  select_type: SIMPLE
        table: col
   partitions: NULL
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 8
          ref: mysql.tbl.collation_id
         rows: 1
     filtered: 100.00
        Extra: Using index

Conclusions
  1. I have created more than 1 billion real InnoDB tables with indexes in MySQL 8.0, just for fun, and it worked. It took ~2 weeks to create them all.
  2. MySQL 8.0 is probably the first version where creating one billion InnoDB tables is even practically possible.
  3. ZFS compression together with NVMe cards makes it reasonably cheap to do, for example, by using i3.4xlarge or i3.8xlarge instances on AWS.

Percona Live Europe Presents … In Their Own Words

For those who are looking forward to Percona Live Europe in just two weeks' time, and for those yet to make up their minds, some of our presenters have shared some insight into their talks and what they're most looking forward to themselves. Make no mistake: this is one of the most exciting events in the conference calendar for those of us who work with open source databases.

This year, our conference previews are being hosted over on the Percona community blog and the posts have been written by the presenters.

Percona Live Europe presents…

Here are the first six posts in this series of Percona Live Europe presents. There are more to come, so do come back over the next few days to see if any of the writers can help you pinpoint the talks that you are most interested in attending this year:

  • Dinesh Joshi will be taking a look at boosting Apache Cassandra’s performance using Netty
  • Federico Razzoli on why he’s investigating MariaDB system versioned tables
  • Jaime Crespo of Wikimedia Foundation will be presenting an entry-level (but detailed) tutorial on query optimization, and a breakout talk on TLS security; you can find out more in his blog post
  • Tiago Jorge of Oracle on his talk about MySQL 8.0 replication
  • There’s going to be an ElasticSearch 101 tutorial presented by three of the team from ObjectRocket—Antonios Giannopoulos tells you more about that stellar opportunity—while last but by no means least…
  • Arjen Lentz, new CEO of MariaDB Foundation, is keen to share with you the latest information on MariaDB 10.3

Tantalized? Keep meaning to book your seat? There's not long left now, so head straight to the registration page and book your place. Percona Live Europe will be in Frankfurt from November 5-7, 2018.

About the community blog

We’re really pleased that the community blog is gaining some great support. It offers a platform for all to write on the general topic of open source databases. Commercial and non-commercial. Those who are already prolific bloggers, and those who maybe only want to write a blog or two on a topic that they feel strongly about. If you’d like to join us and write for the community blog, please get in touch! You can email me.

Angular 7 Features And Updates

Angular 7 Features And Updates is today's leading topic. Yes, it is true: Angular v7 is here and the wait is finally over, and we should be extra enthusiastic about this one since it's a significant release that implements changes, new features, and improvements throughout the entire platform. It is released together with Angular Material 7 and Angular CLI 7, and it comes with improved application performance. Angular Material 7 and the CDK gain new features like virtual scrolling and drag and drop, while CLI prompts are the new feature in Angular CLI 7.

If you want to learn more about Angular, then check out the Angular 7 - The Complete Guide course.

There are lots of new features in Angular 7, and we will look at them one by one.

A new ng-compiler

The new compiler is capable of an excellent 8-phase rotating ahead-of-time (AOT) compilation. Most Angular applications can expect a massive reduction (95-99%) in bundle sizes. When the actual size of the Angular bundle becomes less than what most languages would take to store the string "Angular", you know that's significant progress.

The ngcc Angular node_module compatibility compiler - ngcc is a tool that “upgrades” node_modules compiled with the non-Ivy ngc into an Ivy-compliant format.

In other words, the Angular compatibility compiler (ngcc) converts node_modules compiled with the legacy ngc into node_modules that appear to have been compiled with the TSC compiler transformer (ngtsc), and this change allows such “legacy” packages to be used by the Ivy rendering engine.

CLI prompts

The CLI will now prompt users when running common commands like ng new or ng add @angular/material to help you discover built-in features like routing or SCSS support. And the great news: it's customizable! Add a schematic.json using the Schematic CLI, and you can tell the Angular CLI which prompts to execute.

Angular DoBootstrap

Angular 7 added a new lifecycle hook called ngDoBootstrap and an interface called DoBootstrap.

// lifecycle hook
class AppModule implements DoBootstrap {
  ngDoBootstrap(appRef: ApplicationRef) {
    appRef.bootstrap(AppComponent);
  }
}

Application performance

The Angular team discovered that many developers included the reflect-metadata polyfill in production builds, even though it is only needed in development. To fix this, part of the update to v7 automatically removes it from your polyfills.ts file and instead includes it as a build step when building your application in JIT mode, lifting this polyfill from production builds by default.

The Ivy rendering engine is a new backward-compatible Angular renderer mainly focused on:

  1. Speed Improvements
  2. Size Reduction
  3. Increased Flexibility

This Ivy rendering feature will reduce the code size and make compilation faster.

The Angular 7 upgrade is faster than for previous versions (less than 10 minutes for many apps, according to the official announcement). The Angular 7 framework is rapid, and the virtual scrolling CDK module detailed below makes apps run with better performance. New projects also now default to using bundle budgets, which notify you when your app is reaching size limits. By default, you get warnings when you reach 2MB and errors at 5MB. And when you need a little more space, jump into your angular.json and adjust as necessary (see the fragment below).
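For reference, here is roughly what the default budget fragment of a generated angular.json looks like; treat this as an assumption about the default layout rather than an exact copy, since the surrounding structure can differ between CLI versions:

  "budgets": [
    {
      "type": "initial",
      "maximumWarning": "2mb",
      "maximumError": "5mb"
    }
  ]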

The Angular Material CDK ScrollingModule

As many mobile frameworks have started to move toward dynamically loading data such as images or long lists, Angular has followed suit by adding the ScrollingModule to allow for virtual scrolling. As elements gain or lose visibility, they are virtually loaded and unloaded from the DOM, so perceived performance increases significantly. Next time you have a potentially large list of items for your users to scroll, stick it in a cdk-virtual-scroll-viewport component and take advantage of the performance boost!

DragDropModule

Now you can remain entirely within the Angular Material module and implement drag and drop support including reordering lists and transferring items between lists. The CDK includes automatic rendering, drag handlers, drop handlers and even the ability to move data. Don’t like the standard drag animation? No sweat. It’s Angular; it’s yours to override.

Ivy Renderer

The next generation ready-when-it’s-ready renderer…still isn’t quite ready. The Angular team won’t commit to a final timeline, but the development is still active and progressing. They note backward compatibility validation is beginning and while no official team member has commented, a few fervent followers of the commits are expecting a complete beta to launch sometime during V7’s lifespan with a possible official release alongside version 8. Follow the progress yourself on the GitHub Ivy Renderer issue under the official Angular repo. The best news? They fully expect that Ivy can be released in a minor release as long as it’s fully tested and validated. So who knows? Maybe we’ll see it in v7 after all.

Dependency Updates

The 7.0.0 release features updated dependencies on major 3rd party projects:

  1. TypeScript 3.1
  2. RxJS 6.3
  3. Node 10 — support for Node 10 added, and support for 8 continues.

And that wraps up Angular 7 Features And Updates.

The post Angular 7 Features And Updates appeared first on AppDividend.

Percona Live Europe Presents: Need for speed – Boosting Apache Cassandra’s performance using Netty

My talk is titled Need for speed: Boosting Apache Cassandra's performance using Netty. Over the years that I have worked in the software industry, making code run fast has fascinated me. So, naturally, when I first started contributing to Apache Cassandra, I began looking for opportunities to improve its performance. My talk takes us through some interesting challenges within a distributed system like Apache Cassandra and various techniques to significantly improve its performance. Talking about performance is incredibly exciting because you can easily quantify and see the results. Making improvements to the database's performance not only improves the user experience but also reflects positively on the organization's bottom line. It also has the added benefit of pushing the boundaries of scale. Furthermore, my talk spans beyond Apache Cassandra and is generally applicable to writing performant networking applications in Java.

Who’d benefit most from the presentation?

My talk is oriented primarily towards developers and operators. Although Apache Cassandra is written in Java and we talk about Netty, there is plenty in the talk that is generic, and the lessons learned could be applied to any distributed system. I think developers with various experience levels would benefit from the talk; however, intermediate developers would benefit the most.

What I’m most looking forward to at PLE ’18…

There are many interesting sessions at the conference. Here are some of the ones that caught my eye:

Performance Analyses Technologies for Databases

As I mentioned, I am a big performance geek, and in this talk Peter is going to cover various approaches to data infrastructure performance analysis, including monitoring.

Securing Access to Facebook’s Databases

This is an interesting session from a security standpoint. Andrew is talking about securing access to MySQL. As most people know, Facebook has a huge MySQL deployment, and as security and privacy have become prime concerns, we see a lot of movement towards encryption. This talk is going to be particularly interesting because Facebook is using x509 client certs to authenticate. This is a non-trivial challenge for anybody at scale.

TLS for MySQL at large scale

This talk from Wikipedia is along similar lines to the previous one. It just goes to emphasize the importance of security in today's climate. What's interesting is that Wikipedia and Facebook are both talking about it! I am curious to find out what sort of privacy challenges Wikipedia is solving.

Advanced MySQL Data at Rest Encryption in Percona Server

Another security-related talk! This one's about encryption at rest. This is interesting in and of itself, as we tend to talk a lot about security in transit and less often about security of data at rest. I hope to learn more about the cost of implementing encryption at rest and its impact on database performance, operations, and security.

Artificial Intelligence Database Performance Tuning

I think this is an exciting time for the database industry, as we've not only seen a large increase in data volumes but user expectations around performance have also gone up. So, can AI help us tune our databases? Performance tuning has traditionally been the domain of an experienced DBA, but I think AI can help us deliver better performance. This talk is about using genetic algorithms to tune database performance. I am curious to find out how these algorithms are applied to tune databases.

The post Percona Live Europe Presents: Need for speed – Boosting Apache Cassandra’s performance using Netty appeared first on Percona Community Blog.

MySQL Shell 8.0.13 – What’s New?

The MySQL Development team is proud to announce a new version of the MySQL Shell which, in addition to the usual bug fixes and enhancements to the existing components, offers new features we expect to be quite useful in your day-to-day work…

On Some Recent MySQL Optimizer Bugs

Yet another topic I missed in my blog post on problematic MySQL features back in July is the MySQL optimizer. Unlike with XA transactions, that was done on purpose, as the known bugs, limitations and deficiencies of the MySQL optimizer are a topic for a series of blog posts, if not a separate blog. At the moment the list of known active bug reports in the optimizer category consists of 380(!) items (mostly "Verified"), aside from feature requests and bugs not considered "production" ones by the current "rules" of the MySQL public bugs database. I often check optimizer bugs in my posts, and I reported many of them, but I am still not ready to cover this topic entirely.

What I can do within the frame of one blog post is a quick review of some "Verified" optimizer bugs reported over the last year. I'll present them one by one in a list, with some comments (mostly related to my checks of the same test cases with MariaDB 10.3.7, which I have at hand) and, hopefully, some conclusions about the current state of the MySQL optimizer.

I'll try to shed some light on the current state of the MySQL optimizer, but it's a huge and dark area, with many details hidden... So, here is the list, starting from the most recently reported bugs:
  • Bug #92654 - "GROUP BY fails for queries when a temporary table is involved". This bug affects recent MySQL 8.0.12 and 5.7.23, but does not affect MariaDB 10.3, for example, from what I see:
    MariaDB [test]> insert into domain_tree values (1), (2), (3);
    Query OK, 3 rows affected (0.080 sec)
    Records: 3  Duplicates: 0  Warnings: 0

    MariaDB [test]> insert into host_connection_info values (1), (3);
    Query OK, 2 rows affected (0.054 sec)
    Records: 2  Duplicates: 0  Warnings: 0

    MariaDB [test]> SELECT
        ->   COUNT(1),
        ->   host_connection_status.connection_time
        -> FROM
        ->   (SELECT id
        ->    FROM domain_tree) AS hosts_with_status
        ->   LEFT OUTER JOIN
        ->   (SELECT
        ->      domain_id,
        ->      'recent' AS connection_time
        ->    FROM
        ->      host_connection_info) AS host_connection_status
        ->     ON hosts_with_status.id = host_connection_status.domain_id
        -> GROUP BY host_connection_status.connection_time;
    +----------+-----------------+
    | COUNT(1) | connection_time |
    +----------+-----------------+
    |        1 | NULL            |
    |        2 | recent          |
    +----------+-----------------+
    2 rows in set (0.003 sec)
  • Bug #92524 - "Left join with datetime join condition produces wrong results". The bug was reported by Wei Zhao, who contributed a patch. Again, MariaDB 10.3 is not affected:
    MariaDB [test]> select B.* from h1 left join g B on h1.a=B.a where B.d=str_to_date('99991231',"%Y%m%d") and h1.a=1;
    +---+---------------------+
    | a | d                   |
    +---+---------------------+
    | 1 | 9999-12-31 00:00:00 |
    +---+---------------------+
    1 row in set (0.151 sec)

    MariaDB [test]> select B.* from h1 left join g B on h1.a=B.a and B.d=str_to_date
    ('99991231',"%Y%m%d") where h1.a=1;
    +---+---------------------+
    | a | d                   |
    +---+---------------------+
    | 1 | 9999-12-31 00:00:00 |
    +---+---------------------+
    1 row in set (0.002 sec)
  • Bug #92466 - "Case function error on randomly generated values". See also the related older Bug #86624 - "Subquery's RAND() column re-evaluated at every reference". These are either regressions compared to MySQL 5.6 (and MariaDB), or an unclear and weird change in behavior that can be worked around with some tricks (suggested by Oracle developers) to force materialization of the derived table. Essentially, the result depends on the execution plan - what else could we dream about?
  • Bug #92421 - "Queries with views and operations over local variables don't use indexes". Yet another case where MySQL 5.6 worked differently. As Roy Lyseng explained in comments:
    "... this is due to a deliberate choice that was taken when rewriting derived tables and views in 5.7: When a user variable was assigned a value in a query block, merging of derived tables was disabled.
    ...
    In 8.0, you can override this with a merge hint: /*+ merge(v_test) */, but this is unfortunately not implemented in 5.7.
    "
  • Bug #92209 - "AVG(YEAR(datetime_field)) makes an error result because of overflow". All recent MySQL versions and MariaDB 10.3.7 are affected.
  • Bug #92020 - "Introduce new SQL mode rejecting queries with results depending on query plan". A great feature request by Sveta Smirnova that properly illustrates the current state of optimizer development. We need a way for MySQL to stop accepting queries that may return different results depending on the execution plan - so current MySQL considers it normal to return different results when different execution plans are used! Sveta refers to her Bug #91878 - "Wrong results with optimizer_switch='derived_merge=ON';" as an example. MariaDB 10.3 is NOT affected by that bug.
  • Bug #91418 - "derived_merge causing incorrect results with distinct subquery and uuid()". From what I see in my tests, MariaDB 10.3.7 produces wrong results with derived_merge both ON and OFF, unfortunately.
  • Bug #91139 - "use index dives less often". In MySQL 5.7+ the default value of eq_range_index_dive_limit increased from 10 to 200, and this may negatively affect performance. As Mark Callaghan noted, when only one possible index exists, the optimizer doesn't need index dives to figure out how to evaluate the query.
  • Bug #90847 - "Query returns wrong data if order by is present". This is definitely a corner case, but still. MariaDB 10.3 returns correct result in my tests.
  • Bug #90398 - "Duplicate entry for key '<group_key>' error". I cannot reproduce the last public test case on MariaDB 10.3.
  • Bug #89419 - "Incorrect use of std::max". It was reported based on code analysis by Zsolt Parragi. See also Bug #90853 - "warning: taking the max of a value and unsigned zero is always equal to the other value [-Wmax-unsigned-zero]". A proper compiler detects this.
  • Bug #89410 - "delete from ...where not exists with table alias creates an ERROR 1064 (42000)". MariaDB 10.3 is also affected. Both Oracle and PostgreSQL accept the syntax, while in MySQL and MariaDB we can use a multi-table delete syntax-based workaround, as suggested by Roy Lyseng.
  • Bug #89367 - "Storing result in a variable(UDV) causes query on a view to use derived tables", was reported by Jaime Sicam. This is a kind of regression in MySQL 5.7. MariaDB 10.3 and MySQL 8.0 are not affected. Let me quote a comment by Roy Lyseng:
    "In 5.7, we added a heuristic so that queries that assign user variables are by default materialized and not merged. However, we should have let the ALGORITHM=MERGE override this decision. This is a bug."
  • Bug #89182 - "Optimizer unable to ignore index part entering less optimal query plan". A nice report from Przemyslaw Malkowski. One of many cases where the "ref" vs "range" decision seems to be wrong based on costs. It looks like the optimizer still has parts that are heuristics/rules-based and/or do not take costs into account properly.
  • Bug #89149 - "SELECT DISTINCT on multiple TEXT columns is slow". Yet another regression in MySQL 5.7+.
That's all the optimizer bugs reported in 2018 and still "Verified" that I wanted to discuss.

From the list above I can conclude the following:
  1. There are many simple enough cases when queries return wrong results or get suboptimal execution plans in MySQL. For many of them, MariaDB's optimizer does a better job.
  2. The behavior of the optimizer for some popular use cases changed after MySQL 5.6, so take extra care to check queries and their results after an upgrade to MySQL 5.7+.
  3. The derived_merge optimization seems to cause a lot of problems for users in MySQL 5.7 and 8.0.
  4. It seems optimizer developers care enough to comment on bugs, suggest workarounds and explain decisions made.

Combining tiered and leveled compaction

There are simple optimization problems for LSM tuning: for example, use leveled compaction to minimize space amplification and tiered to minimize write amplification. But there are interesting problems that are harder to solve:
  1. maximize throughput given a constraint on write and/or space amplification
  2. minimize space and/or write amplification given a constraint on read amplification
To solve the first problem, use leveled compaction if it can satisfy the write amp constraint, else use tiered compaction if it can satisfy the space amp constraint; otherwise there is no solution (a sketch of this decision procedure follows below). The lack of a solution might mean the constraints are unreasonable, but it can also mean we need to enhance LSM implementations to support more diversity in LSM tree shapes. Even when there is a solution using leveled or tiered compaction, there are configurations that would do much better were an LSM to support more varieties of tiered+leveled and leveled-N.
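Here is a minimal sketch of that decision procedure in Python. The amplification estimates are crude placeholders of my own (leveled write amp ~ levels * fanout, tiered space amp ~ runs in the largest level), not numbers from this post:

def choose_compaction(write_amp_limit, space_amp_limit, levels, fanout, runs):
    # Placeholder estimates: leveled rewrites each level about `fanout` times;
    # tiered keeps up to `runs` sorted runs in the largest level.
    leveled_write_amp = levels * fanout
    tiered_space_amp = runs
    if leveled_write_amp <= write_amp_limit:
        return "leveled"  # satisfies the write amp constraint
    if tiered_space_amp <= space_amp_limit:
        return "tiered"   # satisfies the space amp constraint
    return None           # no solution with a pure strategy; a tiered+leveled
                          # or leveled-N shape might do better

# Example: a 3-level tree with fanout 8 and up to 4 runs per level.
print(choose_compaction(write_amp_limit=20, space_amp_limit=3,
                        levels=3, fanout=8, runs=4))  # -> None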
When I say solved above, I leave out that there is more work to find a solution even when tiered or leveled compaction is used. For both there are decisions about the number of levels and per-level fanout. If minimizing write amp is the goal, then that is a solved problem. But there are usually more things to consider.
Tiered+leveled
I defined tiered+leveled and leveled-N in a previous post. They occupy the middle ground between tiered and leveled compaction, with better read efficiency than tiered and better write efficiency than leveled. They are not supported today by popular LSM implementations, but I think they can and should be supported.
While we tend to explain compaction as a property of an LSM tree (all tiered or all leveled) it is really a property of a level of an LSM tree and RocksDB already supports hybrids, combinations of tiered and leveled. For tiered compaction in RocksDB all levels except the largest use tiered. The largest level is usually configured to use leveled to reduce space amp. For leveled compaction in RocksDB all levels except the smallest use leveled and the smallest (L0) uses tiered.
So tiered+leveled isn't new, but I think we need more flexibility. When a string of T and L is created from the per-level compaction choices, the regex for the strings that RocksDB supports is T+L or TL+. I want to support T+L+. I don't want to support cases where leveled is used for a smaller level and tiered for a larger level. So I like TTLL but not LTTL. My reasons for not supporting LTTL are:
  1. The benefit from tiered is less write amp and is independent of the level on which it is used. The reduction in write amp is the same whether tiered is used for L1, L2 or L3.
  2. The cost from tiered is more read and space amp and that is dependent on the level on which it is used. The cost is larger for larger levels. When space amp is 2 more space is wasted on larger levels than smaller levels. More IO read amp is worse for larger levels because they have a lower hit rate than smaller levels and more IO will be done. More IO implies more CPU cost from decompression and the CPU overhead of performing IO.
From above the benefit from using T is the same for all levels but the cost increases for larger levels so when T and L are both used then T (tiered) should be used on the smaller levels and L (leveled) on the larger levels.
Leveled-N
I defined leveled-N in a previous post. Since then a co-worker, Maysam Yabandeh, explained to me that a level that uses leveled-N can also be described as two levels where the smaller uses leveled and the larger uses tiered. So leveled-N might be syntactic sugar in the LSM tree configuration language.
For example with an LSM defined using the triple syntax from here as (compaction type, fanout, runs-per-level) then this is valid: (T,1,8) (T,8,2) (L,8,2) (L,8,1) and has total fanout of 512 (8 * 8 * 8). The third level (L,8,2) uses leveled-N with N=2. Assuming we allow LSM trees where T follows L then the leveled-N level can be replaced with two levels: (L,8,1) (T,1,8). Then the LSM tree is defined as (T,1,8) (T,8,2) (L,8,1) (T,1,8) (L,8,1). These LSM trees have the same total fanout and total read/write/space amp. Compaction from (L,8,1) to (T,1,8) is special. It has zero write amp because it is done by a file move rather than merging/writing data so all that must be updated is LSM metadata to record the move.
So in general I don't support T after L but I do support it in the special case. Of course we can pretend the special case doesn't exist if we use the syntactic sugar provided by leveled-N. But I appreciate that Maysam discovered this.
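As a toy check of the claim that both shapes have the same total fanout, here is a short Python sketch; the tuple representation of the triple syntax is mine, not from any LSM implementation:

from functools import reduce

# A level is (compaction_type, fanout, runs_per_level).
tree_a = [("T", 1, 8), ("T", 8, 2), ("L", 8, 2), ("L", 8, 1)]               # uses leveled-N
tree_b = [("T", 1, 8), ("T", 8, 2), ("L", 8, 1), ("T", 1, 8), ("L", 8, 1)]  # desugared form

def total_fanout(tree):
    # Total fanout is the product of the per-level fanouts.
    return reduce(lambda acc, level: acc * level[1], tree, 1)

print(total_fanout(tree_a), total_fanout(tree_b))  # 512 512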

Percona Live Europe Presents: The Latest MySQL Replication Features

Considering the modern world of technology, where distributed systems play a key role, replication in MySQL® is at the very heart of that change. It is very exciting to deliver this presentation and to be able to show everyone the greatest and latest features that MySQL brings, in order to continue the success that it has always been in the past.

The talk is suitable for anyone interested in knowing what Oracle is doing with MySQL replication. Old acquaintances will get familiarized with new features already delivered and being considered, and newcomers to the MySQL ecosystem will see how great MySQL Replication has grown to be and how it fits in their business.

What I’m most looking forward to at Percona Live Europe…

We are always eager to get feedback about the product.

Moreover, MySQL being MySQL has a very large user base and, as such, is deployed and used in many different ways. It is very appealing and useful to continuously learn how our customers and users are making the most out of the product. Especially when it comes to replication, since the MySQL replication infrastructure is an enabler for advanced and complex setups, making it a powerful and indispensable tool in virtually any setup nowadays.

The post Percona Live Europe Presents: The Latest MySQL Replication Features appeared first on Percona Community Blog.

Effective Monitoring of MySQL with SCUMM Dashboards Part 1

We added a number of new dashboards for MySQL in our latest release of ClusterControl 1.7.0 - and in our previous blog, we showed you How to Monitor Your ProxySQL with Prometheus and ClusterControl.

In this blog, we will look at the MySQL Overview dashboard.

So, we have enabled Agent Based Monitoring under the Dashboard tab to start collecting metrics from the nodes. Note that when enabling Agent Based Monitoring, you have the option to set the “Scrape Interval (seconds)” and “Data retention (days)”. The scrape interval is where you set how aggressively Prometheus will harvest data from the target, and data retention is how long you want to keep the data collected by Prometheus before it is deleted.

When enabled, you can identify which cluster has agents and which one has agentless monitoring.

Compared to the agentless approach, the granularity of your data in graphs will be higher with agents.

The MySQL Graphs

The latest version of ClusterControl 1.7.0 (which you can download for free - ClusterControl Community) has the following MySQL dashboards with which you can gather information about your MySQL servers: MySQL Overview, MySQL InnoDB Metrics, MySQL Performance Schema, and MySQL Replication.

We’ll cover in detail the graphs available in the MySQL Overview dashboard.

MySQL Overview Dashboard

This dashboard contains the usual important variables or information regarding the health of your MySQL node. The graphs contained on this dashboard are specific to the node selected upon viewing the dashboards as seen below:

It consists of 26 graphs, but you might not need all of them when diagnosing problems. However, these graphs provide a vital representation of the overall metrics for your MySQL servers. Let’s go over the basic ones, as these are probably the most common things that a DBA will routinely look at.

The first four graphs shown above, along with MySQL’s uptime, queries per second, and buffer pool information, are the most basic pointers we might need. From the graphs displayed above, here is what they represent:

  • MySQL Connections
    This is where you check the total client connections allocated over a specific period of time.
  • MySQL Client Thread Activity
    There are times when your MySQL server can be very busy. For example, it might be expected to receive a surge in traffic at a specific time, and you want to monitor your running threads’ activity. This graph is really important to look at. There can be times when your query performance goes south if, for example, a large update causes other threads to wait to acquire a lock. This leads to an increased number of running threads. The cache miss rate is calculated as Threads_created/Connections (a small computation sketch follows this list).
  • MySQL Questions
    These are the queries running in a specific period of time. A thread might be a transaction composed of multiple queries and this can be a good graph to look at.
  • MySQL Thread Cache
    This graph shows the thread_cache_size value, threads that are cached (threads that are reused), and threads that are created (new threads). You can use this graph to spot cases where you need to tune your read queries, such as when you notice a high number of incoming connections while the number of threads created increases rapidly. For example, if Threads_running / thread_cache_size > 2, then increasing your thread_cache_size may give a performance boost to your server. Take note that the creation and destruction of threads are expensive. However, in recent versions of MySQL (>= 5.6.8), this variable is autosized by default, so you might consider leaving it untouched.
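As a quick illustration of the Threads_created/Connections miss-rate formula mentioned above, here is a minimal sketch using the mysql.connector driver; the host and credentials are placeholders, so point them at your own server:

import mysql.connector

# Placeholder connection parameters.
cnx = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = cnx.cursor()

# Fetch the two status counters used by the formula.
counters = {}
for var in ("Threads_created", "Connections"):
    cur.execute("SHOW GLOBAL STATUS LIKE %s", (var,))
    name, value = cur.fetchone()
    counters[name] = int(value)

# Thread cache miss rate = Threads_created / Connections
miss_rate = counters["Threads_created"] / counters["Connections"]
print(f"thread cache miss rate: {miss_rate:.2%}")

cnx.close()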

The next four graphs are MySQL Temporary Objects, MySQL Select Types, MySQL Sorts, and MySQL Slow Queries. These graphs are related to each other, especially if you are diagnosing long-running queries and large queries that need optimization.

  • MySQL Temporary Objects
    This graph is a good source to rely upon if you want to monitor long-running queries that end up using disk for temporary tables or files instead of keeping them in memory. It’s a good place to start looking for periodic occurrences of queries that could add up to create disk space issues, especially at odd times.
  • MySQL Select Types
    One source of bad performance is queries that use full joins, table scans, or range selects that do not use any indexes. This graph shows how your queries perform and which among full joins, full range joins, range selects, and table scans has the highest trend.
  • MySQL Sorts
    For diagnosing queries that use sorting, and the ones that take a long time to finish.
  • MySQL Slow Queries
    Trends of your slow queries are collected in this graph. This is very useful, especially for diagnosing how often your queries are slow. What might need to be tuned? It could be a too-small buffer pool, tables that lack indexes and go for a full-table scan, logical backups running on an unexpected schedule, etc. Using our Query Monitor in ClusterControl along with this graph is beneficial, as it helps determine slow queries.

The next set of graphs covers network activity, table locks, and the internal memory that MySQL consumes during its activity.

  • MySQL Aborted Connections
    The number of aborted connections is rendered on this graph. It covers aborted clients, such as connections where the network was closed abruptly or the client’s connection was interrupted, and it also records aborted connection attempts caused by, for example, wrong passwords or bad packets while establishing a connection.
  • MySQL Table Locks
    This graph trends table lock requests that were granted immediately and those that could not be granted immediately. For example, with table-level locks on MyISAM tables, concurrent incoming requests for the same table cannot all be granted immediately.
  • MySQL Network Traffic
    This graph shows the trends of inbound and outbound network activity on the MySQL server. “Inbound” is the data received by the MySQL server, while “Outbound” is the data sent by it. This graph is best checked when you want to monitor your network traffic, especially when diagnosing why a moderate amount of traffic produces a very high volume of outbound data, for example when transferring BLOB data.
  • MySQL Network Usage Hourly
    Like the network traffic graph, this shows received and sent data. Take note that it is aggregated per hour and labeled ‘last day’, so it will not follow the period of time you selected in the date picker.
  • MySQL Internal Memory Overview
    This graph is familiar to any seasoned MySQL DBA. Each legend in the bar graph is important if you want to monitor your memory usage, your buffer pool usage, or your adaptive hash index size. A few of these figures can be cross-checked directly, as sketched below.
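
A minimal sketch, assuming only a client connection to the server:

  mysql> SELECT @@innodb_buffer_pool_size/1024/1024 AS buffer_pool_mb;
  mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
  mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index';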

The following graphs show counters that a DBA can rely upon, such as statistics for selects, inserts, and updates, the number of SHOW MASTER STATUS and SHOW VARIABLES statements executed, and the Handler_read_* counters, which reveal bad queries doing table scans or not using indexes.


  • Top Command Counters (Hourly)
    These are the graphs to check whenever you want statistics for your inserts, deletes, and updates, and for executed commands such as gathering the processlist, slave status, or show status (the health statistics of the MySQL server). This is a good place to check which MySQL command counters rank highest and whether some performance tuning or query optimization is needed. It can also help you identify commands that run aggressively without needing to.
  • MySQL Handlers
    Oftentimes, a DBA will go over these handlers to check how queries are performing on the MySQL server. Basically, this graph covers counters from the Handler API, MySQL’s storage engine API. The most common handler counters a DBA watches are Handler_read_first, Handler_read_key, Handler_read_last, Handler_read_next, Handler_read_prev, Handler_read_rnd, and Handler_read_rnd_next. There are many more MySQL handlers to check on; you can read about them in the documentation here.
  • MySQL Transaction Handlers
    If your MySQL server uses XA transactions or SAVEPOINT and ROLLBACK TO SAVEPOINT statements, this graph is a good reference. You can also use it to monitor all of your server’s internal commits. Note that Handler_commit increments even for SELECT statements, unlike insert/update/delete statements, which are written to the binary log during a COMMIT. You can pull these counters manually as sketched below.
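
A minimal sketch of the raw counters behind both handler graphs:

  mysql> SHOW GLOBAL STATUS LIKE 'Handler_read%';
  mysql> SHOW GLOBAL STATUS WHERE Variable_name IN ('Handler_commit', 'Handler_rollback', 'Handler_savepoint', 'Handler_savepoint_rollback');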

The next graph shows trends of process states and their hourly usage. There are many key points in the bar graph legend that a DBA will check: disk space issues, connection issues, whether the connection pool is working as expected, high disk I/O, network issues, and so on.

  • Process States/Top Process States Hourly
    This graph is where you monitor the top thread states of the queries running in the processlist. It is very informative and helpful for DBA tasks, since you can examine any outstanding states that need resolution. For example, if the “Opening tables” state is very high and its minimum value is close to its maximum, you may need to adjust table_open_cache. If a state is high and you notice your server slowing down, the server could be disk-bound and you might need to consider increasing your buffer pool. If you see a high number of “Creating tmp table” states, check your slow log and optimize the offending queries. You can check the manual for the complete list of MySQL thread states here. A quick way to eyeball current states is sketched below.
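
A minimal sketch using information_schema.PROCESSLIST (present in MySQL 5.x; later versions also expose the same data via performance_schema.threads):

  mysql> SELECT STATE, COUNT(*) AS threads
      ->   FROM information_schema.PROCESSLIST
      ->  WHERE COMMAND <> 'Sleep'
      ->  GROUP BY STATE ORDER BY threads DESC;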

The next set of graphs covers the query cache, the MySQL table definition cache, and how often MySQL opens files.


  • MySQL Query Cache Memory/Activity
    These graphs are related to each other. If you have query_cache_size <> 0 and query_cache_type <> 0, this graph can be of help. However, in newer versions of MySQL the query cache is deprecated (and it is removed in MySQL 8.0), as it is known to cause performance issues, so you might not need it in the future. MySQL 8.0 instead brings drastic performance improvements, with several new strategies for handling cached information in its memory buffers.
  • MySQL File Openings
    This graph shows the trend of files opened since the MySQL server started, excluding sockets and pipes. It also does not include files opened by the storage engine, since the storage engine has its own counter, Innodb_num_open_files.
  • MySQL Open Files
    This graph is where you check the InnoDB files currently held open, the current number of MySQL open files, and your open_files_limit variable.
  • MySQL Table Open Cache Status
    If table_open_cache is set very low, this graph will show the tables that miss the cache (newly opened tables) or that are evicted due to overflow. If you see a high number of “Opening tables” states in your processlist, this graph will serve as your reference and tell you whether there is a need to increase the table_open_cache variable.
  • MySQL Open Tables
    Related to MySQL Table Open Cache Status, this graph is useful when you want to decide whether to increase table_open_cache, or to lower it if you notice a rapid rise in open tables or in the Open_tables status variable. Note that table_open_cache can take a large amount of memory, so set it with care, especially on production systems.
  • MySQL Table Definition Cache
    If you want to check your Open_table_definitions and Opened_table_definitions status variables, this graph is what you need. For newer versions of MySQL (>=5.6.8), you rarely need to change this variable from its default value, since it autoresizes. The checks below cover these counters too.
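
A minimal sketch of the cache-related counters and variables behind this group of graphs:

  mysql> SHOW GLOBAL STATUS WHERE Variable_name IN ('Open_files', 'Open_tables', 'Opened_tables', 'Open_table_definitions', 'Opened_table_definitions');
  mysql> SHOW GLOBAL VARIABLES WHERE Variable_name IN ('open_files_limit', 'table_open_cache', 'table_definition_cache');
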
Conclusion

The SCUMM addition in the latest version of ClusterControl 1.7.0 provides significant new benefits for a number of key DBA tasks. The new graphs can help easily pinpoint the cause of issues that DBAs or sysadmins would typically have to deal with and help find appropriate solutions faster.

We would love to hear your experience and thoughts on using ClusterControl 1.7.0 with SCUMM (which you can download for free - ClusterControl Community).

In part 2 of this blog, I will discuss Effective Monitoring of MySQL Replication with SCUMM Dashboards.

Tags:  MySQL monitoring dashboards clustercontrol scumm

ProxySQL 1.4.11 and Updated proxysql-admin Tool Now in the Percona Repository

ProxySQL 1.4.11, released by ProxySQL, is now available for download in the Percona Repository along with an updated version of Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and its forks (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

The ProxySQL 1.4.11 source and binary packages available at https://percona.com/downloads/proxysql include ProxySQL Admin – a tool, developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 1.4.11 are available as well: https://hub.docker.com/r/percona/proxysql/. You can download the original ProxySQL from https://github.com/sysown/proxysql/releases. The documentation is hosted on GitHub in the wiki format.

Improvements
  • mysql_query_rules_fast_routing is enabled in ProxySQL Cluster. For more information, see #1674 at GitHub.
  • In this release, the rpmdb checksum error is ignored when building ProxySQL in Docker.
  • By default, the permissions for proxysql.cnf are set to 600 (only the owner of the file can read it or make changes to it).
Bugs Fixed
  • Fixed a bug that could cause ProxySQL to crash when IPv6 listening was enabled. For more information, see #1646 at GitHub.

ProxySQL is available under the GPLv3 open source license.

Where you can find MySQL in October - December 2018 - part 2.

Continuing the previous announcement posted on Oct 16, please find below a list of conferences and events where you can find the MySQL team and/or the MySQL Community during October-December 2018:

November 2018 - cont.

  • PerconaLive, Frankfurt, Germany, November 5-7, 2018
    • We are happy to announce that MySQL is going to be a Silver sponsor of PerconaLive Europe 2018! You will be able to find us at the MySQL booth in the expo hall, as well as attend multiple MySQL talks given by our colleagues. The talks are scheduled as follows:
      • Tutorial on: "MySQL InnoDB Cluster in a Nutshell : The Saga Continues with 8.0" by Frédéric Descamps, the MySQL Community Manager, scheduled for Monday, @9:00am-12:00pm.
      • "MySQL 8.0 Performance: Scalability & Benchmarks" by Dimitri KRAVTCHUK, the Performance Architect, talk is scheduled for Tuesday @12:20-13:10.
      • "MySQL Group Replication: the magic explained" by Frédéric Descamps, the MySQL Community Manager, scheduled for Tuesday @14:20-15:10.
      • "Upgrading to MySQL 8.0 and a More Automated Experience" by Dmitry Lenev, the Senior Software Developer, scheduled for Tuesday @ 15:20-16:10.
      • "More SQL in MySQL 8.0" by Norvald Ryeng, the Senior Developer Manager - Optimizer Team, scheduled for Tuesday 17:25-17:50.
      • "The Latest MySQL Replication Features" by Jorge Tiago, the Senior Software Developer, scheduled for Wednesday @14:20-15:10.
      • "Developing Applications with Node.js and the MySQL Document Store" by Johannes Schlüter, the Principal Software Developer, scheduled for Wednesday @17:00-17:25.
    • Come listen to the MySQL sessions and stop by our booth to talk to our staff at PerconaLive 2018!
  • PHP.RUHR, Dortmund, Germany, November 8, 2018
    • The MySQL team is going to be part of this technical/PHP show. You can come listen to a MySQL talk on the Mainstage: Developer track: "MySQL 8.0 - The new Developer API", given by Mario Beck, the MySQL Sales Consulting Manager for EMEA. There will also be a Q&A session about MySQL in the expo area, where you can come ask questions.
  • Highload++, Moscow, Russia, November 8-9, 2018
    • The MySQL team is going to be part of this technical conference in Moscow. You can find us at our MySQL booth in the expo area, and a MySQL talk may appear in the schedule as well (we have not yet heard about its acceptance). We are looking forward to talking to you @ Highload++ this year!
  • SeaGL, Seattle, US, November 9-10, 2018
    • The MySQL community team supports this GNU/Linux conference in Seattle.
  • PHP[World], Washington DC, US, November 14-15, 2018 
    • MySQL is a Workshop sponsor this year. Unfortunately there is no talk this year, but we are on the waiting list for a lightning talk. Please watch the organizers' website.
  • BGOUG, Pravets, Bulgaria, November 14-15, 2018
    • As is tradition, MySQL is a conference sponsor of this regular Bulgarian Oracle User Group conference, this time with the following talk:
      • "Data Masking in MySQL Enterprise 5.7 and 8" given by Georgi Kodinov, the Senior Software Developer Manager for MySQL. 
    • Please come listen to Georgi's talk and ask him your questions.
  • DOAG, Nuremberg, Germany, November 20-23, 2018
    • As in previous years, we are going to be part of this conference organized by the German Oracle User Group. You can find MySQL staff at the Oracle booth (Place 320) and attend several MySQL talks. You can find the talks here.
  • Feira do Conhecimento, Fortaleza, Brazil, November 21-24, 2018
    • The Science, Technology and Higher Education Secretariat (Secitece) will hold the second edition of the Knowledge Fair - Science, Technology, Innovation and Business at the Ceará Event Center (East Pavilion), and we are happy to be part of this event! A local MySQL sales representative will be there to answer all MySQL-related questions, and a MySQL talk on "Innovation, Business & Technology" has been approved. Please watch the organizers' website for further updates.
  • PyCon HK, Hong Kong, November 23-24, 2018
    • MySQL is a Bronze sponsor of this Python show, again without a booth this year, but with a MySQL talk on "NoSQL Development for MySQL Document Store using Python" by Ivan Ma, the MySQL Principal Sales Consultant.

December 2018

  • Tech18, UKOUG, Liverpool, UK, December 3-5, 2018
    • As is tradition, MySQL will be part of this Oracle User Group conference in the UK. You will be able to find our staff at the Oracle booth in the expo area.
  • IT.Tage 2018, Frankfurt, Germany, December 10-13, 2018
    • It is our pleasure to announce that MySQL is again part of the IT.Tage event this year, together with the Oracle Developer & Linux teams. You will have an opportunity to find us all at the shared Oracle booth, as well as attend several Oracle talks.
    • For MySQL, do not miss the opportunity to hear what is new in MySQL 8 during the following session:
      • "MySQL 8 - MySQL as a Document Store Database" given by Carsten Thalheimer, the MySQL Principal Sales Consultant, scheduled for December 12, 2018 @ 14:30-15:15 in the Database track. 
    • We are looking forward to talking to you there!
  • OpenSource Conference Fukuoka, Fukuoka, Japan (December 8, 2018)
    • MySQL is a Gold sponsor here. You will be able to find us at the MySQL booth in the expo area, and there is a MySQL talk on "State of Dolphin" given by Yoshiaki Yamasaki, the MySQL Senior Sales Consultant for the Asia Pacific and Japan region; general product updates will be covered during the talk.
  • OpenSource Enterprise, Tokyo, Japan, (December 14, 2018)
    • Again, MySQL is a Gold sponsor, with a MySQL booth and a talk on the same topic, "State of Dolphin", by Yoshiaki Yamasaki.

We will continue updating you about upcoming events and conferences where you can find the MySQL team.


Picking a Deployment Method: Staging versus INI

Continuent Clustering is an extraordinarily flexible tool, with options at every layer of operation.

In this blog post, we will describe and discuss the two different methods for installing, updating and upgrading Continuent Clustering software.

When first designing a deployment, the question of installation methodology is answered by inspecting the environment and reviewing the customer’s specific needs.

Staging Deployment Methodology

All for One and One for All

Staging deployments were the original method of installing Continuent Clustering, and relied upon command-line tools to configure and install all cluster nodes at once from a central location called the staging server.

This staging server (which could be one of the cluster nodes) requires SSH access to all database nodes.

The simple steps include:

  1. cd to the top-level software staging directory. The default directory for extracting new tarballs is under /opt/continuent/software
  2. extract the software tarball
  3. cd into the newly-extracted directory
  4. configure with tools/tpm configure
  5. install using the tools/tpm install command

During execution, the tools/tpm install command uses SSH to install the Continuent software on all nodes in the cluster in parallel. For example, you could install a 4-site composite cluster with 3 nodes per site across multiple global regions with a single command, and all 12 nodes would be handled at once. A typical session is sketched below.
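
For illustration only: the tarball version below is a placeholder, and the --master/--members options are just examples of the configure step (consult the tpm reference for the options your topology needs):

  shell> cd /opt/continuent/software
  shell> tar xzf tungsten-clustering-5.3.4-xx.tar.gz
  shell> cd tungsten-clustering-5.3.4-xx
  shell> ./tools/tpm configure alpha --master=db1 --members=db1,db2,db3
  shell> ./tools/tpm install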

Future configuration updates would be handled the same way (i.e. via tools/tpm update from the extracted software directory), and normally would affect all the cluster nodes at once.

The --no-connectors option may also be used to allow for the graceful cycling of the Connector connectivity layer.
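
For example, a sketch of an update that leaves the connectors untouched:

  shell> ./tools/tpm update --no-connectors

The connectors can then be restarted one at a time afterwards, so the connectivity layer is cycled gracefully instead of all at once.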

INI Deployment Methodology

The Strength of Individuality

The INI methodology was developed with two situations in mind – no SSH access, and the ability to support per-node configuration files for automation tools like Chef, Puppet and Ansible.

In this deployment model, an INI file is placed on every host associated with the cluster first (/etc/tungsten/tungsten.ini by default). This is the equivalent of running the tools/tpm configure step from the staging method; an illustrative example follows below.
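
A minimal, purely illustrative tungsten.ini sketch; the host names, credentials, and dataservice name are invented, and the full option list is in the Continuent documentation:

  [defaults]
  user=tungsten
  replication-user=tungsten
  replication-password=secret
  application-user=app_user
  application-password=secret

  [alpha]
  topology=clustered
  master=db1
  members=db1,db2,db3
  connectors=db1,db2,db3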

In the INI world, any tpm install or tpm update reads the configuration for that one node and executes the install or update ON THAT NODE ONLY. The presence of an INI file tells tpm to limit install and update actions to the current host.

In Continuent Clustering v4.0 and later, tpm will automatically search all INI files within the /etc/tungsten and current directories.

Once the INI file is populated on all cluster nodes, the procedure is similar to the staging method, except for the fact that it must occur on EVERY node individually.

The simple steps include:

  1. cd to the top-level software staging directory. The default directory for extracting new tarballs is under /opt/continuent/software
  2. extract the software tarball
  3. cd into the newly-extracted directory
  4. install using the tools/tpm install command

Note that the tpm configure step is gone since the configuration is now stored in the INI file.
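
Putting it together, the per-node session is a sketch like the following (placeholder version number again):

  shell> cd /opt/continuent/software
  shell> tar xzf tungsten-clustering-5.3.4-xx.tar.gz
  shell> cd tungsten-clustering-5.3.4-xx
  shell> ./tools/tpm install

tpm finds /etc/tungsten/tungsten.ini and installs or updates this node only.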

Staging versus INI – Differences

What’s the difference between the two methods of deployment?

Here are some of the key differences between Staging and INI deployments:

Staging

  • there is just one staging directory, residing on a single host (which may be a member cluster node or a separate host)
  • configuration is performed centrally in the staging directory on a single host
  • requires passwordless SSH
  • installs and updates multiple cluster nodes
  • provides a one-command cluster upgrade path

INI

  • every node has a locally-extracted staging directory
  • configuration is performed locally on each node via the /etc/tungsten/tungsten.ini file
  • does NOT require passwordless SSH
  • installs and updates one single cluster node at a time
  • installs and updates must be performed on every node one-by-one
  • there is NO one-command cluster upgrade path
  • ideal for automated environments
  • possible to have multiple configuration files in /etc/tungsten

Click here to read more about “Comparing Staging and INI tpm Methods”.

Staging versus INI – Decision

We always recommend the INI deployment for many reasons:

– INI deployments have a very clear standard configuration location
– INI deployments are very easy to tweak on a per-host basis
– INI deployments are cloud and devops-friendly
– INI is required for certain topologies (e.g. Composite Multimaster)

Deployment Type – How Do I Know?

Determining the deployment type after install

To determine which method your current deployment uses, check the output of:

shell> tpm query deployments | grep deployment_external_configuration_type
"deployment_external_configuration_type": "ini",

If you see the value of "ini", then the INI method was used to deploy your cluster.

~or~

shell> ls -l /etc/tungsten/*
-rw-rw-r-- 1 tungsten tungsten 1347 Oct 5 16:06 /etc/tungsten/tungsten.ini

If you use the ls command and see at least one .ini file, then you are using the INI method to configure your cluster.

Staging Directory – How Do I Know?

Determining the staging directory location

To determine the staging directory location from where your cluster was installed, check the output of:

shell> tpm query staging
# Installed from tungsten@staging-host:/opt/continuent/software/tungsten-clustering-5.0.1-136

When using the INI method of configuration, the staging-host in the above example output would display the host name of the local host.

Continuent Clustering is the most flexible, performant global database layer available today – use it under your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us.

MySQL Enterprise Monitor 8.0.3 has been released

The MySQL Enterprise Tools Development Team is pleased to announce that the maintenance release of MySQL Enterprise Monitor 8.0.3 is now available for download on the My Oracle Support (MOS) web site. It will also be available for download via the Oracle Software Delivery Cloud in a few days.

If you are not familiar with MySQL Enterprise Monitor, it is the best-in-class tool for monitoring and management of your MySQL assets and is part of MySQL Enterprise Edition and MySQL Cluster Carrier Grade Edition subscriptions. You are invited to give it a try using our 30-day free customer trial. Go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact.

To download MySQL Enterprise Monitor, go to My Oracle Support, choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet. You will also find the binaries on the Oracle Software Delivery Cloud soon. Type "MySQL Enterprise Monitor 8.0.3" in the search box.

You can find the highlights of MySQL Enterprise Monitor 8.0 on the "What's New" section of the manual and more information on the contents of the 8.0.3 maintenance release in the change log.

Please open a bug or a ticket on My Oracle Support to report problems, request features, or give us general feedback about how this release meets your needs.

Thanks and Happy Monitoring!

Useful links

What's New in 8.0
Change log
Installation documentation
Product information
Frequently Asked Questions

Advice on pt-stalk for connection saturated servers

In an environment where a high-volume web application is prone to opening many connections to backend resources, using a utility like pt-stalk is important. When performance- or availability-affecting events like InnoDB lock waits or connection saturation occur, pt-stalk helps capture the information you may need to troubleshoot what was happening.

The tendency may be to create multiple pt-stalk instances for various conditions. This can be a poor decision when your server is dealing with both lock contention and high connection counts. When pt-stalk triggers, it opens multiple simultaneous connections to MySQL to collect the full processlist, lock waits, InnoDB transactions, slave status, and other attributes. pt-stalk does sleep a number of seconds after triggering, but once that time expires the trigger may fire again, compounding the issue. Put simply, pt-stalk can absorb the last few remaining connections on your database, particularly if you use extra_port (and run pt-stalk on the extra_port) and have a relatively low extra_max_connections.

Advice: Stick to using only one of the built-in functions (like “processlist”) if triggering when your processlist is large is enough for you. Alternatively, if you need more than one check, write your own trg_plugin() function encompassing the multiple tests that are relevant to your environment.

Unfortunately I cannot share the one I just wrote at this time (I will need to write a more generic one to share later). It checks processlist length, replication lag, innodb_trx wait time, and innodb_lock_waits, so that I could fold four of our more relevant checks into one pt-stalk and avoid the “connection stack-up” when MySQL was having an issue and multiple stalks were firing.
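
To make the idea concrete, here is a minimal sketch of what a combined trigger file could look like. This is not the author's plugin: the limits and the two checks are purely illustrative, and information_schema.INNODB_LOCK_WAITS exists in MySQL 5.5-5.7 but not in 8.0. pt-stalk sources the file given to --function and calls the trg_plugin function it defines, comparing the output against --threshold; $EXT_ARGV carries the MySQL connection options passed to pt-stalk.

  # /usr/local/bin/trg_combined.sh (path is illustrative)
  trg_plugin() {
     local threads lock_waits
     # Current number of client threads in the processlist.
     threads=$(mysql $EXT_ARGV -NBe "SELECT COUNT(*) FROM information_schema.PROCESSLIST")
     # InnoDB lock waits currently in flight (5.5-5.7 only).
     lock_waits=$(mysql $EXT_ARGV -NBe "SELECT COUNT(*) FROM information_schema.INNODB_LOCK_WAITS")
     # Print 1 (above --threshold=0) when either illustrative limit is crossed.
     if [ "${threads:-0}" -gt 500 ] || [ "${lock_waits:-0}" -gt 10 ]; then
        echo 1
     else
        echo 0
     fi
  }

  shell> pt-stalk --function=/usr/local/bin/trg_combined.sh --threshold=0 --daemonize

Because the whole test runs in one function, only one pt-stalk instance is needed, which avoids the connection stack-up described above.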

Percona Live Europe Presents: MariaDB System-Versioned Tables

System-versioned tables, or temporal tables, are a typical feature of proprietary database management systems like DB2, Oracle and SQL Server. They also appeared at some point in PostgreSQL, but only as an extension; and also in CockroachDB, but in a somewhat limited fashion.

The MariaDB® implementation is the first appearance of temporal tables in the MySQL ecosystem, and the most complete implementation in the open source world.

My presentation will be useful for analysts, and some managers, who will definitely benefit from learning how to use temporal tables. Statistics about how data evolves over time are an important part of their job. This feature will allow them to query data as it was at a certain point in time, or to query how data changed over a period, including rows that were added, deleted or modified. A brief sketch of the syntax follows below.
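
The examples below are illustrative (the table and the timestamps are invented), using the MariaDB 10.3 syntax:

  CREATE TABLE accounts (
      id      INT PRIMARY KEY,
      balance DECIMAL(10,2)
  ) WITH SYSTEM VERSIONING;

  -- Data as it was at a given point in time
  SELECT * FROM accounts
  FOR SYSTEM_TIME AS OF TIMESTAMP '2018-10-01 00:00:00';

  -- Every row version that existed during a period, including deleted and modified rows
  SELECT * FROM accounts
  FOR SYSTEM_TIME BETWEEN TIMESTAMP '2018-10-01 00:00:00'
                      AND TIMESTAMP '2018-10-31 23:59:59';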

Developers will also find this feature useful, if they deal with data versioning or auditing. Recording the evolution of data into a database is not easy – several solutions are possible, but none is perfect. Streaming data changes to some event-based technology is also complex, and sometimes it’s simply a waste of resources. System-versioned tables are a good solution for many use cases.

And of course, DBAs. They will need to know what this feature is about, suggest it when appropriate, and maintain it in production systems.

More generally, many people are interested in understanding MariaDB’s unique features, as well as the ones it shares with MySQL. This approach allows them to choose “the right tool for the right purpose”.

What I’m looking forward to…

I am excited about the Percona Live agenda. A session that I definitely want to attend is MySQL Replication Crash Safety. I find talks about technology limitations and flaws extremely useful and interesting. Jean-François has a long series of writings on MySQL replication and crash safety, and I have questions for him.

I also like the way PMM and its components have evolved over the years. I want to understand how best to use them in my new job, so I am glad to see that there will be several sessions on the topic. I plan to attend some sessions about PMM and Prometheus.

Performance Analyses Technologies for Databases makes me think of the cases where I saw a technology evaluated in an inappropriate way, and of conversations with people impressed by blog posts showing spectacular benchmarks they didn’t fully understand. I will definitely attend.

And finally, I plan to learn something about ClickHouse, MyRocks and TiDB.

See you there!

The post Percona Live Europe Presents: MariaDB System-Versioned Tables appeared first on Percona Community Blog.

Continuent Clustering 5.3.3/5.3.4 and Tungsten Replicator 5.3.3/5.3.4 Released

Continuent is pleased to announce that Continuent Clustering 5.3.4 and Tungsten Replicator 5.3.4 are now available!

Release 5.3.4 was released shortly after 5.3.3 due to a specific bug in our reporting tool tpm diag. All of the major changes except this one fix are in the 5.3.3 release.

Our 5.3.3/5.3.4 releases fix a number of bugs and improve stability in certain parts of the products.

Highlights common to both products:

  • Fixed an issue with LOAD DATA INFILE
  • The Replicator now outputs the filename of the underlying file when using thl to show events
  • tpm diag has been improved in the way we extract MySQL information, and it now supports Net::SSH options

Highlights in the clustering product:

  • Tungsten Manager stability has been improved by identifying and fixing some memory leaks.
  • A number of Tungsten Connector bugs have been fixed relating to bridge mode, connectivity to hosts, and memory usage on the manager service
  • tpm has been improved to provide backup information

Highlights for the replicator product only:

  • Fixed: trepctl logging did not include the right information when multiple services were configured

Release notes:

https://docs.continuent.com/tungsten-clustering-5.3/release-notes-5-3-4.html

https://docs.continuent.com/tungsten-clustering-5.3/release-notes-5-3-3.html

https://docs.continuent.com/tungsten-replicator-5.3/release-notes-5-3-3.html

https://docs.continuent.com/tungsten-replicator-5.3/release-notes-5-3-4.html
