Planet MySQL

XFS and EXT4 Testing Redux

In the post concluding my testing, I declared EXT4 the winner over XFS for my scenario. My coworker, @keyurdg, was unwilling to let XFS lose and made a few observations:

  • XFS wasn’t *really* being formatted optimally for the RAID stripe size (see the formatting sketch after this list)
  • XFS wasn’t being mounted with the inode64 option which means that all of the inodes are kept in the first 2TB. (Side note: inode64 option is default in newer kernels but not on CentOS 6’s 2.6.32)
  • Single-threaded testing isn’t entirely accurate: although replication is single-threaded, InnoDB collects the writes and then flushes them to disk using multiple threads governed by innodb_write_io_threads.
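
For reference, a minimal sketch of the kind of formatting and mounting this implies. The su/sw values are illustrative assumptions (derive them from your controller's stripe unit and number of data disks), and the device and mount point are hypothetical:

# align XFS to the RAID10 stripe, then mount with inode64
mkfs.xfs -d su=256k,sw=6 /dev/sdb1
mount -o noatime,nodiratime,nobarrier,inode64 /dev/sdb1 /var/lib/mysql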

Armed with new data, I have – for real – the last round of testing.

To keep things a bit simpler, I will be comparing each file system on 2TB and 27TB, with 4 threads, which matches the default value for innodb_write_io_threads in MySQL 5.5.

FS    RAID  Size  Mount Options                          Transfer/s    Requests/s  Avg/Request  95%/Request
xfs   10    2T    noatime,nodiratime,nobarrier,inode64   62.588Mb/sec  4005.66     0.88ms       0.03ms
ext4  10    2T    noatime,nodiratime,nobarrier           58.667Mb/sec  3754.66     0.87ms       0.19ms
xfs   10    27T   noatime,nodiratime,nobarrier,inode64   64.47Mb/sec   4126.06     0.84ms       0.02ms
ext4  10    27T   noatime,nodiratime,nobarrier           49.379Mb/sec  3160.26     1.06ms       0.24ms

XFS finally wins out clearly over EXT4. XFS being dramatically slower on 27T in the earlier tests shows just how much worse inode32 performs than inode64, and explains why XFS looked so much better on 2T. Fixing the formatting options pushed XFS over the top easily.

All that’s left to do is set up multiple instances until replication can’t keep up anymore.




MySQL: The most popular open source database for WWW

(Note: This is an article from last year, when MySQL 5.6 was released.)

While database technology is one of the oldest branches of computer science, it remains a fundamental technology that continues to attract new research. The current focus of database technology is on adapting to hot new trends like multi-core chips, solid-state devices, NoSQL and the cloud. So what does a contemporary internet developer look for in a database for the internet era? And why does MySQL remain the most popular database for the web?

 

For a database to be useful when developing products for the Web, the most important requirements are that it should be quick and easy to download, quick to set up, powerful enough to get the job done, fast and flexible to use, and finally scalable on the newest hardware. Compatibility with the latest technologies like the cloud also remains foremost in developers’ minds, since they need their products to be future-proof.

You may notice that some of these requirements are common to any successful web product.

Let us go into the details of each of the above requirements, discuss their importance and look at how MySQL addresses them.

Quick set-up

In the instant web world of today, with so many technologies competing for attention, you should either be available instantly on the cloud or be quick and intuitive to download and set up. While 4-5 years ago someone may have given an application 10 minutes to download and set up, today users expect technologies to be ready for use within three minutes. The download process needs to be fast and the setup needs to be intuitive. A person using a technology for the first time does not want to be bothered with arcane questions during setup. The software should have reasonable defaults which can be tweaked later, once the application is found suitable and deemed worthy of some user time. It has been our endeavor to ensure that MySQL can indeed be set up, and the first query run, within three minutes.

Many users and companies are looking to use the database as a service provided by a cloud provider. MySQL is the leading database in the cloud, offered by the vast majority of cloud service providers. Almost all cloud service providers, like Amazon, Rackspace, Eucalyptus and Google, provide MySQL as a service. It is part of many cloud stacks, including OpenStack and CloudStack, which are available as a service. In many cases MySQL is the underlying database for cloud services provisioning and implementation. Being the ubiquitous cloud database means that MySQL is the best fit for the cloud and also the nimblest database in keeping up with advances in cloud technology.

This assures a new user that MySQL is an easy database to use. It gives them confidence that there will be a return on the time spent adapting their software and infrastructure to MySQL’s requirements.

Basic features

When we talk about software being powerful there are many aspects that need to be covered.

The first requirement for being considered powerful is that all the basics must be well implemented. This creates a strong foundation on which the power is built. For databases this means that all aspects of ACID (atomicity, consistency, isolation and durability) are implemented. ACID is provided in most databases by strong transactional capabilities, usually achieved by implementing the two-phase commit protocol. As an oversimplification, we can say the transaction is done in two phases: first it is written to a log file, and then this log file is flushed to disk.

This logging needs to happen at the lowest layer in a database called the Storage Layer.

Besides the logging, there are many crucial database-level algorithms that need to be implemented in a balanced way to ensure that a Web database can shoulder the load that large websites generate. A good transactional storage engine may need to implement its own threading, memory management and disk management.

Storage layer and engines

In MySQL the storage layer of the database is abstracted. See “Storage Engines Layer” in the MySQL Architecture figure. Though the default storage engine is InnoDB, any number of storage engines can be plugged in at this layer.

InnoDB is an extremely powerful transactional storage engine that has withstood the test of time, handling databases with terabytes of data and millions of users.

Since MySQL has an abstract layer on top of the real storage engine, logging is done at two levels: first at the abstract storage engine layer, and then per storage engine, based on the capability of each engine. At the abstract storage engine layer the log is called the binary log. At the InnoDB engine level, it can (oversimplifying) be called the redo log.

The binary log can be used to provide some level of atomicity, consistency and durability between engines, but since engines are so varied, that is a complex topic beyond the scope of this article. Later, we will cover another very important feature that is implemented using the binary log. Observant readers might wonder about the missing I: isolation. In a multi-user, multi-threaded database there are thousands, maybe millions, of transactions happening per minute. Isolation is a way to ensure that these transactions are isolated from each other: the database system produces output as if the transactions were happening serially and not in parallel. There are two high-level approaches that provide isolation: locks, or multi-version concurrency control (MVCC). Locking restricts or manages the modification of the same data by multiple transactions; this can delay access to data locked by a long-running transaction. MVCC produces a new version of the data for each new transaction initiated on that data. If a transaction commits, all the other transactions holding an older version of the data may need to abort or take the new data into consideration. The InnoDB engine provides isolation by implementing MVCC.
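
To make this concrete, here is a minimal sketch of InnoDB's MVCC snapshot behavior at its default REPEATABLE READ isolation level; the accounts table and the two-session layout are hypothetical:

-- Session A
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;  -- establishes a consistent snapshot

-- Session B, concurrently
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;

-- Session A again
SELECT balance FROM accounts WHERE id = 1;  -- still sees the pre-update value from its snapshot
COMMIT;  -- a transaction started after this point sees Session B's change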

Besides these fundamentals, the database engine should provide common database features like stored procedures and triggers and a reasonable adherence to the SQL standards.

Development and management tools

Any software is as powerful as the tools that are provided to make the software more usable.

For databases it is important that users can visually model their database using entity-relationship (E-R) diagrams, manage the database objects, manage users, and have an easy visual way to modify their existing database. For large installations it is important to manage not only a single instance of the database but multiple database installations. DBAs (database administrators) today need to know exactly what is happening with their databases in real time. Is there an installation that is stalled? Is there a user, process or query that is hogging time? An answer to such questions in minutes rather than milliseconds may translate to downtime for a database, involving loss of reputation and money. If a large website goes down, it is front-page news.

However well the data and database are managed, database users need insurance against disaster in the form of backups of their data.

MySQL is shipped with a modern visual tool called MySQL Workbench, which allows users to model and manage their databases as well as their users. For large installations MySQL provides a paid tool called MySQL Enterprise Monitor to manage multiple installations and look at moving graphs of ongoing database activity in the data-center. There are also other tools available from multiple vendors to manipulate and manage MySQL installations. There are multiple free and paid tools available for backing up and restoring a MySQL database.

Some of the tools mentioned above are built on a strong foundation called Performance Schema. Performance Schema (PS) is a framework that holds metadata about everything happening in the database, along with constructs that allow a user to view and filter this data. Performance Schema captures information about users and queries, down to threads, mutexes and their corresponding wait and process times. It contains constructs that allow this fine-grained information to be extracted using events and filters. DBAs with a strong knowledge of PS can find out whether anything out of the ordinary is happening with their MySQL database. Users can set up the amount of monitoring they need, and the performance schema will populate only the requested data. Since PS can generate a huge amount of data, users can limit the total data stored; the data is transiently stored in cyclic tables in a limited amount of memory.
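
As a small illustration of the kind of question PS can answer, the following sketch against the standard 5.6 performance_schema tables lists the five wait events the server has spent the most time on:

SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_waits_summary_global_by_event_name
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 5;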

As computer technology progresses, some things change while others remain the same. The number of computer languages available to a programmer is among the things that have changed, while the popularity of old stalwarts C, C++ and Java remains almost the same. Programmers need to be able to access a database using the language of their choice, and the latest enhancements to the database should be made available in the language they are using. We are fortunate that the hottest new languages want to adopt MySQL: MySQL has excellent drivers for Node.js, Ruby and GoLang. Maintaining hundreds of such language connectors would be a huge drain on any engineering organization. MySQL is made powerful by the number of computer languages that programmers can use to connect to it. The topmost layer in the MySQL Architecture shows the client layer where the connectors live. Whichever language you write your program in, chances are there will be a connector (or client API) that allows you to connect to MySQL and exchange data with it. The MySQL community is a great source of strength in this area: a huge number of language connectors have been written, and continue to be maintained, by the community. MySQL is very lucky to have great community-developed drivers for Ruby and GoLang, while officially maintaining the ODBC, C++, Java, .Net and (most recently) Python connectors.

The community

Any Web product benefits from user and community attention. Attention generates and maintains the cycle of adoption, growth and stability. In today’s connected, socially networked world, users expect quick answers to their problems. A large user base also means that any problem a user experiences may have already been encountered and resolved. Our user community is a huge blessing for MySQL. An open and well-informed MySQL user community is ready to answer any questions users may have about their MySQL installation. There are a multitude of forums recording myriad discussions about users’ difficulties, along with the solutions to those problems. The community generates ideas and sometimes code to improve MySQL, files bugs, and most importantly is the sounding board that encourages MySQL developers to do better and compliments them for a job well done.

Redundancy, replication and sharding

As a database grows popular, larger and more highly trafficked websites rely on it. For large and distributed websites, high availability is extremely important. Redundancy needs to be built into all layers of Web architecture, including the database layer. MySQL provides this via MySQL Replication, a way to define and manage a series of master-slave setups. For some large installations where the read load is much higher than the write load, it makes sense to distribute this read load to many slave replicas of the master server, sometimes hundreds if not thousands. MySQL replication has many options for configuring these master-slave setups, including ensuring that a slave automatically takes over as master if the master fails. Replication leverages the binary log to ensure that database events are transported from the master to the slave database instances. MySQL provides replication that is row- or statement-based, synchronous or asynchronous, instant or time-delayed. The capabilities are powerful, while administration is easy using tools like Workbench and MySQL Utilities. Database replication is a vast and complex topic, and a huge amount of current database research is focused on it. It’s no wonder, then, that the latest version of MySQL has a number of enhancements for MySQL Replication, like checksums, global transaction IDs and crash-safe slaves.

A different, sometimes complementary, approach to handling large database implementations is to partition or shard the data. A single large table can be broken up into smaller portions depending on the commonality of the data. For example, a user table could have separate partitions or shards based on the nationality of the user. This is especially useful when the most frequent queries require data from a single partition. MySQL has extensive support for table partitioning at the InnoDB storage engine level. The latest labs releases demonstrate how MySQL can be sharded across multiple machines; the complexity of identifying the shard on which the required data resides is handled entirely by the new sharding infrastructure called MySQL Fabric.
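
A sketch of the nationality example above as an InnoDB partitioned table; the table and column names are hypothetical (cross-machine sharding with MySQL Fabric is configured separately):

CREATE TABLE users (
  id           INT NOT NULL,
  name         VARCHAR(64),
  country_code CHAR(2) NOT NULL,
  PRIMARY KEY (id, country_code)
) ENGINE=InnoDB
PARTITION BY LIST COLUMNS (country_code) (
  PARTITION p_us VALUES IN ('US'),
  PARTITION p_in VALUES IN ('IN'),
  PARTITION p_eu VALUES IN ('DE','FR','IT')
);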

Security

No real-world software is complete without adequate attention to security. A database must ensure that it is secure. In the world of multi-tenant databases, special care needs to be taken to ensure that a user has sufficient privileges before any access is given to the data. MySQL implements user privileges at the database, table and column levels. There are also privileges for objects like views, procedures, etc. Besides the normal create, modify and delete privileges, DBAs can also restrict the quantum of a privilege given to a user, for example by limiting the number of queries the user can run per hour or the number of connections the user may open.
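
A hedged sketch of those resource limits in action; the account name, schema and limits are hypothetical:

GRANT SELECT ON reports.* TO 'report_user'@'%'
  IDENTIFIED BY 'a_strong_password'
  WITH MAX_QUERIES_PER_HOUR 500
       MAX_USER_CONNECTIONS 10;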

MySQL also provides integration with third-party authentication modules like PAM. Recent releases give the MySQL DBA extensive password control to ensure adequate strength of passwords and expiration rules.

Speed

Let us now talk about the speed and efficiency required from a database system for the Web. The easiest way to increase the speed of access to data is to add indexes. Indexes should be created not only on simple columns with integers and strings but also on complex columns with large text fields (called full-text or CLOB). Indexing needs to understand the sorting rules for the different world languages and character sets. Users with large databases need the option to create sub-indexes or secondary indexes on the primary index. Indexes should be quick to create and quick to update when new data is added. MySQL provides easy index creation and modification, with some special techniques for fast index creation. The latest version of MySQL provides options to create a full-text index for all language collations, and also offers on-the-fly creation and deletion of indexes.
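
A few of the index variants mentioned above, sketched against a hypothetical articles table (FULLTEXT indexes on InnoDB require MySQL 5.6 or later):

ALTER TABLE articles ADD INDEX idx_author (author_id);        -- plain secondary index
ALTER TABLE articles ADD INDEX idx_title_prefix (title(20));  -- prefix (sub-)index on a string column
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);       -- full-text index on a large text field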

The speed of a query depends on the amount of data that needs to be fetched; for complex queries, the sequence in which the data is fetched may also matter. The optimizer uses statistics about the data contained in the database to determine the most efficient sequence for fetching data. Optimizers are complex and constantly changing because of the complex filtering and the increasing size and wider distribution of the data that needs to be gathered for a query. MySQL's speed is also enhanced by its versatile optimizer, which continues to evolve; the latest version provides exponential speed improvements for several classes of queries.

When databases were first created, hard disks were in vogue, and many database algorithms were based on the rotation and buffering attributes of hard disks. Databases today need to adapt to the new world of solid-state devices (SSDs). Though these devices are currently expensive, a limited use of them can result in huge speed gains; databases for the web need to be SSD-aware. The newest MySQL is adapting to use SSDs: it is now possible to relocate the more frequently accessed portions of the database to a different path. A user can now place the log files on a path that points to an SSD. The same flexibility is also provided for portions (read: partitions) of tables, complete tables, or tablespaces.
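
A sketch of what that relocation can look like; the paths are hypothetical, and the undo directory must be chosen when the instance is initialized:

# my.cnf fragment: put the redo logs and undo tablespaces on an SSD mount
innodb_log_group_home_dir = /ssd/mysql/redo
innodb_undo_directory     = /ssd/mysql/undo

-- per-table placement (requires innodb_file_per_table)
CREATE TABLE hot_sessions (id INT PRIMARY KEY, payload VARBINARY(255))
  ENGINE=InnoDB DATA DIRECTORY = '/ssd/mysql/tables';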

On the hardware side, there is progress not only in SSDs: the nature of the microprocessor itself has changed. Microprocessor technology continues to increase the number of processors on a chip and cores on each processor. This means that a server has multiple threads, and these threads can run uninterrupted on their own cores. This has huge implications for finely tuned software like a database server: the thread bottlenecks and points of contention are now very different from those on single- or dual-core chips. The database therefore needs to be re-architected to take advantage of, say, 64 threads running at the same time. Users expect that if they spend money to upgrade from a 32-core machine to a 64-core machine, the throughput of the database should also double. The MySQL team has had to work hard on the architecture to ensure that the latest MySQL scales almost linearly up to 64 cores and half a million queries per second.

Read DimitriK's blog at http://dimitrik.free.fr/blog/index.html for MySQL benchmark numbers.


Flexibility—online schema changes

Since I already talked about scalability earlier, let me devote the final paragraphs to flexibility.

The current trend of NoSQL was initiated because database users felt constrained by having to define a fixed schema before developing their application. With today’s agile models, schema changes happen more frequently and need to happen dynamically on 24×7 sites. These sites are business-critical, and downtime for such changes is very expensive for an Internet business. Paradoxically, increasing competition means that new changes need to be delivered at the speed of the Internet, without disruption to the current production environment. Traditionally, changing a database schema was a huge investment of time for a database that did not allow dynamic schema changes. This business requirement has led to the new online schema change model, which the latest MySQL 5.6 also supports. You can add and modify columns in a table on the fly. New indexes can also be created and dropped online. Relationships between tables in the form of foreign keys can also be altered while the server is running.
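
A minimal sketch of these online operations in MySQL 5.6 syntax; the orders table and column names are hypothetical:

ALTER TABLE orders ADD COLUMN note VARCHAR(255), ALGORITHM=INPLACE, LOCK=NONE;
ALTER TABLE orders ADD INDEX idx_created (created_at), ALGORITHM=INPLACE, LOCK=NONE;
ALTER TABLE orders DROP INDEX idx_created, ALGORITHM=INPLACE, LOCK=NONE;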

This is the Internet age, and trends rise and get established very quickly. One way for any software to provide flexibility to its users is to embrace upcoming trends. MySQL adapted to the NoSQL trend by giving users the option to access data using either SQL or NoSQL.

The user gets the full advantage of a strong, ACID-compliant underlying layer while having the flexibility of a schema-less data architecture. The protocol chosen to expose for NoSQL is the memcached protocol, so if you have a running application using memcached, you can choose to use it with MySQL. Since the NoSQL interface connects to the lowest layer of MySQL (see MySQL Architecture), there is no SQL processing overhead from the parser/optimizer and query-execution layer.
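
Enabling this interface is a one-time setup. A sketch, assuming the InnoDB memcached plugin that ships with MySQL 5.6 (the script path varies by installation):

mysql> source /usr/share/mysql/innodb_memcached_config.sql
mysql> INSTALL PLUGIN daemon_memcached SONAME 'libmemcached.so';
-- any memcached client can now get/set InnoDB rows, by default on port 11211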

MySQL has also embraced the big data trend with Hadoop integration via the MySQL Applier for Hadoop, which allows real-time transfer of data from MySQL to Hadoop. Big data is being used to crunch the huge volumes of data being created on the web, generated from multiple sources including devices connected to the Web. And since the underlying database of the web is MySQL, it is important that Hadoop and MySQL talk to each other in real time.

There are multiple MySQL success stories which you can read on the web. I would encourage you to read the blogs from Twitter (new-tweets-per-second-record-and-how) and Facebook (Mark Callaghan).

Staying at the forefront and remaining the most popular database is a complex and interesting challenge for the MySQL development team. We thank you dear reader for your support and attention.




High Performance Drupal with MariaDB

Mon, 2014-10-20 09:31, by maria-luisaraviol

I am back from the 2014 Amsterdam DrupalCon, where MariaDB Corporation was present as a sponsor. It was my first time there, and I must say I was really impressed by the number of people attending the conference (around 2300) and the interest they showed in MariaDB.

We had many conversations with all kinds of engineers, developers and providers; for only a few of them was MariaDB something new to discover. The great majority either were already using it or were planning to, but had not yet managed to find the “free” time to do so.

What impressed me was that almost all of the happy MariaDB users had simply replaced their previous database server installation (MySQL or Percona) with MariaDB, using the same approach they might take for a standard database server upgrade: switch off, back up, install, switch on, done!

Well, that’s great, and we all know that MariaDB is a drop-in replacement for MySQL; with just this simple migration, all of them experienced a faster website.

What all of them clearly were not aware of is that they can achieve even better performance by taking advantage of some of MariaDB’s specific features.

MariaDB Optimizer

We all know that a huge amount of optimizer work has been done in MariaDB since version 5.3, and the list of features is really impressive (check the Optimizer Feature Comparison Matrix). I am sure the enthusiastic comments about how much faster Drupal websites run after installing MariaDB rest in large part on a mix of JOIN, subquery and disk-access optimizations.

XtraDB Storage Engine

We all know how XtraDB has improved on InnoDB and how it has been designed to work better in environments where high performance is crucial. It scales much better on multi-core architectures and uses memory more efficiently.

XtraDB is the default MariaDB storage engine so new users will take advantage of it with no specific user action needed.

Sphinx Storage Engine

A lot of Drupal users are Sphinx users as well. I can easily say that no one I talked to was aware of MariaDB’s SphinxSE, which allows users to do full-text search with the Sphinx search engine and still use the regular database.

This means that Drupal users (and not only them!) can let Sphinx do what it has been designed to do at its best (indexing, searching, filtering, sorting) and JOIN with other MariaDB tables in a transparent way.

Thread Pool

MySQL traditionally assigns a thread to every client connection; of course, as the number of concurrent users grows, this model shows performance drops. As far as I know there are issues using persistent connections with PDO, the interface for accessing databases in PHP. On the other hand, if you have many concurrent clients or many users accessing a website, each of them will in theory open one thread per connection; we know that many active threads are performance killers, and it is sometimes not easy to set the max_connections limit properly.

MariaDB allows Drupal users to create a pool of threads that can be reused. Moreover, the MariaDB pool is adaptive: it takes care of creating new threads in times of high demand and kills threads that are idle.

Why are thread pools a good option for Drupal users? Because thread pools are most efficient when queries are relatively short, which often happens when users hit a website and the requested page does not hit the cache. So if you are a Drupal user with many concurrent clients, or many users that each potentially use a thread, simply turn the thread pool on in your my.cnf, as sketched below.
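
A sketch of the my.cnf change; the pool size is optional tuning, shown with a placeholder value:

[mysqld]
thread_handling = pool-of-threads
# thread_pool_size = 16   # optional; defaults to the number of CPUs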

Further reading

https://mariadb.com/kb/en/mariadb/documentation/optimization-and-tuning/...
https://mariadb.com/kb/en/mariadb/what-is-the-difference-between-mysql-a...
https://mariadb.com/kb/en/mariadb/documentation/storage-engines/xtradb-a...

Tags: CMS

Making UUID() and RAND() replication safe

MySQL's UUID() and RAND() functions both produce nondeterministic results, and UUID()'s result is moreover bound to the host on which it executes. For this reason, both are unsafe to replicate with the STATEMENT binlog format. As an example, consider:

master> create table test.uuid_test (id int, u varchar(64));
master> insert into test.uuid_test values (1, UUID());
Query OK, 1 row affected, 1 warning (0.03 sec)

master> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e3596d8-56ac-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

The warning we got on the insert directly relates to the following inconsistency on a slave:

slave1> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e379d63-56ac-11e4-8477-3c970ea31ea8 |
+------+--------------------------------------+

The data on the slave is clearly inconsistent with the master's. The slave, replicating via STATEMENT binlog format, re-executes the INSERT command and gets a different UUID value.

External

One solution to the above is to generate the UUID value from your application. By the time MySQL gets the INSERT statement, the UUID value is a constant string, as far as MySQL is concerned.

Internal

However there's a way to do it from within MySQL, by decoupling the UUID() function from the INSERT statement. It takes a session variable. Consider:

master> set @safe_uuid := UUID();
Query OK, 0 rows affected (0.00 sec)

master> insert into test.uuid_test values (2, @safe_uuid);
Query OK, 1 row affected (0.02 sec)

master> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e3596d8-56ac-11e4-b284-3c970ea31ea8 |
|    2 | 29c51fb9-56ad-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

And on a slave:

slave1> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e379d63-56ac-11e4-8477-3c970ea31ea8 |
|    2 | 29c51fb9-56ad-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

The reason this succeeds is that MySQL stores in the binary log the values of session variables used by DML queries. It just so happened that @safe_uuid was assigned the UUID() value, but it could just as well have been assigned a constant or another computation. MySQL stored the resulting value in the binary log, where it is forced upon the slave. Check out this binary log snippet:

# at 14251
#141018 12:57:35 server id 1  end_log_pos 14319         Query   thread_id=2     exec_time=0     error_code=0
SET TIMESTAMP=1413626255/*!*/;
SET @@session.sql_auto_is_null=0/*!*/;
BEGIN
/*!*/;
# at 14319
#141018 12:57:35 server id 1  end_log_pos 14397         User_var
SET @`safe_uuid`:=_utf8 0x32396335316662392D353661642D313165342D623238342D336339373065613331656138 COLLATE `utf8_general_ci`/*!*/;
# at 14397
#141018 12:57:35 server id 1  end_log_pos 14509         Query   thread_id=2     exec_time=0     error_code=0
SET TIMESTAMP=1413626255/*!*/;
insert into test.uuid_test values (2, @safe_uuid)
/*!*/;
# at 14509
#141018 12:57:35 server id 1  end_log_pos 14536         Xid = 145
COMMIT/*!*/;

The same trick can be applied to RAND(). The funny thing about RAND() is that it is already taken care of by the binary log via SET @@RAND_SEED1 and SET @@RAND_SEED2 statements (i.e. it works), though the documentation clearly states it is unsafe.
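
For completeness, the decoupling sketch applied to RAND(); the test.rand_test table is hypothetical:

master> set @safe_rand := RAND();
master> insert into test.rand_test values (1, @safe_rand);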

With Row Based Replication (RBR) the problem never arises in the first place since the binlog contains the values of the new/updated rows.



Improved Fault Diagnosis UI

In our efforts to improve MySQL monitoring, we recently enhanced our fault diagnosis UI. Adaptive Fault Detection has been an integral part of our suite, and we are excited about the UI updates that will help you better manage your databases.

The new release provides a more compact view, allowing you to quickly assess potential problems before they become bigger. Notice how a tiny, tiny server stall was caught by our algorithm. Fault detection has allowed us to get remarkable results from our weak EC2 boxes by keeping them running really cleanly.

We have also added more sections showing metrics such as top processes, network sockets, and network ports. These sections have quick-links to navigate to each of the various tools in the exact time range of the selected fault.

Further, you can click the zoom buttons on sparklines to pop them out and make them bigger for closer inspection.

The new UI allows you to view metrics at a high level or drill down to the details, depending on your need. We’re always improving our app and appreciate feedback. Customer suggestions influence our product greatly.



MariaDB 10.1 Brings Compound Statements

A very old post of mine from 2009, “MySQL’s stored procedure language could be so much more useful”, suggested it would be nice if MySQL could be adapted to accept compound statements directly from the command line, in a similar way to the language used for stored procedures. I’ve just seen that this seems to be possible now in MariaDB 10.1. See the release notes.

I now need to look at this. So thanks: it looks like this feature request is now implemented.



MariaDB 10.1.1: default roles

As you all know, MariaDB has supported roles since release 10.0.5. They were implemented almost exactly as specified in the SQL Standard 2003, features T331 “Basic roles” and T332 “Extended roles”.

But we often heard complaints: users were not satisfied with the purely standard set of features. In particular, the standard specifies that one has to run

SET ROLE foobar;

to be able to use the privileges granted to the role foobar. This was not always convenient, and sometimes not even possible (imagine you need to grant role privileges to an account used by a closed-source application). There had to be some way to enable a given role automatically when a user connects.

To solve this issue we have introduced the concept of a default role. A default role for a given user is automatically enabled when that user connects. Problem solved!

To set foobar as a default role you use, quite logically,

SET DEFAULT ROLE foobar;

This stores your default role in the mysql.user table, and the next time you connect the role foobar will be enabled for you automatically.

To remove a default role, use

SET DEFAULT ROLE NONE;

This works similarly to the standard SET ROLE statement.

You can also set a default role for another user (remember that use case with a closed-source application?):

SET DEFAULT ROLE foobar FOR user@host;

Privilege-wise, if you can enable a role (using the SET ROLE statement), you can make it your default (using the SET DEFAULT ROLE statement). But to change a default role for someone else, you need the INSERT privilege on the mysql.user table, the same as when you change a password for someone else.
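
Putting the pieces together, a minimal end-to-end sketch; the role, database and account names are hypothetical:

CREATE ROLE foobar;
GRANT SELECT ON appdb.* TO foobar;
GRANT foobar TO user@host;
SET DEFAULT ROLE foobar FOR user@host;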

And don’t forget to run mysql_upgrade before using default roles: as they are stored in the privilege tables, these tables have to be updated to the latest version to include the necessary columns. Otherwise the SET DEFAULT ROLE statement will fail.

The implementation for this feature was contributed by Vicenţiu Ciorbaru.



Three new MariaDB Releases

The MariaDB project is pleased to announce the immediate availability of the following:



MariaDB 10.1.1 is an Alpha release.

Download MariaDB 10.1.1

Release Notes Changelog What is MariaDB 10.1?

MariaDB APT and YUM Repository Configuration Generator

Don’t use alpha releases on production systems!

See the Release Notes and Changelog for detailed information on this release and the What is MariaDB 10.1? page in the MariaDB Knowledge Base for general information about the MariaDB 10.1 series.

MariaDB Galera Cluster 5.5.40 is a Stable (GA) release.

Download MariaDB Galera Cluster 5.5.40

Release Notes Changelog What is MariaDB Galera Cluster?

MariaDB APT and YUM Repository Configuration Generator

See the Release Notes and Changelog for detailed information on this release and the What is MariaDB Galera Cluster? page in the MariaDB Knowledge Base for general information about MariaDB Galera Cluster.

MariaDB Galera Cluster 10.0.14 is a Stable (GA) release.

Download MariaDB Galera Cluster 10.0.14

Release Notes Changelog What is MariaDB Galera Cluster?

MariaDB APT and YUM Repository Configuration Generator

See the Release Notes and Changelog for detailed information on this release and the What is MariaDB Galera Cluster? page in the MariaDB Knowledge Base for general information about MariaDB Galera Cluster.

Thanks, and enjoy MariaDB!



[ERROR] COLLATION 'utf8_general_ci' is not valid for CHARACTER SET 'latin1'

Recently I came across a problem where the MySQL server stopped running and refused to start with the error:

The server quit without updating PID file (/var/run/mysqld/mysqld.pid)

I checked the MySQL error log and found that invalid usage of a character set with a collation was causing the problem.

error log:
141017 12:20:41 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data
15088 [ERROR] COLLATION 'utf8_general_ci' is not valid for CHARACTER SET 'latin1'
15088 [ERROR] Aborting

In this case mysqld was trying to start with character-set-server = latin1 and collation-server = utf8_general_ci, which is not a valid combination.
The following is the correct charset value for the collation 'utf8_general_ci':

node1 [localhost] {msandbox} ((none)) > SHOW COLLATION LIKE 'utf8_general_ci';
+-----------------+---------+----+---------+----------+---------+
| Collation       | Charset | Id | Default | Compiled | Sortlen |
+-----------------+---------+----+---------+----------+---------+
| utf8_general_ci | utf8    | 33 | Yes     | Yes      |       1 |
+-----------------+---------+----+---------+----------+---------+

Solution:

Add the following options to my.cnf:

character-set-server = utf8
collation-server = utf8_general_ci
character-set-client-handshake = false

Then start the MySQL server. It will start without any error.
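
To confirm the server picked up the new settings, you can check the running values:

mysql> SHOW VARIABLES LIKE 'character_set_server';
mysql> SHOW VARIABLES LIKE 'collation_server';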

ALL SET.




Innodb transaction history often hides dangerous ‘debt’

In many write-intensive workloads on the InnoDB/XtraDB storage engines you may see hidden and dangerous “debt” being accumulated: unpurged transaction “history” which, if not kept in check, will over time cause severe performance regressions or consume all free space and cause an outage. Let’s talk about where it comes from and what you can do to avoid running into trouble.

Technical Background: InnoDB is an MVCC engine, which means it keeps multiple versions of rows in the database; when rows are deleted or updated they are not immediately removed but kept for some time, until they can be removed. For the majority of OLTP workloads they can be removed seconds after the change actually took place. In some cases, though, they might need to be kept for a long period of time, for example if old transactions running in the system might still need to look at an old database state. As of MySQL 5.6, InnoDB has one or several “purge threads” which remove the old data that can be removed, though they might not do it fast enough for workloads with very intensive writes.

Does it really happen? I started looking into this problem based on some customer concerns, and to my surprise I could very easily get the history to grow rapidly using a basic sysbench “update” workload. It is especially easy with the default innodb_purge_threads=1 setting, but even with innodb_purge_threads=8 it grows rather rapidly.

If we take a look at the purging speed (which comes from the INNODB_METRICS table) we can see that purge is very much starved by the active concurrent sysbench process, and that it speeds up greatly when that process is finished:

Now, to be frank, this is not an easy situation to get into with the majority of workloads with short transactions: while the undo space is kept in memory, purge is able to keep up. If the undo space happens to be evicted from the buffer pool, however, the purge speed can slow down drastically and the system might not be able to keep up anymore. How could that happen? There are two common variants.

Long-Running Transaction: If you have a long-running transaction, for example mysqldump on a larger table, purging has to pause while that transaction is running, and a lot of history will accumulate. If there is enough IO pressure, a portion of the undo space will be evicted from the buffer pool.

MySQL Restart: Even with a modest history length, restarting MySQL will wash the undo space away from memory and cause purge to be IO-bound. This is of course only if you’re not using InnoDB buffer pool save and reload.

How do you check whether your undo space is well cached? In Percona Server I can use these commands:

mysql> select sum(curr_size)*16/1024 undo_space_MB from XTRADB_RSEG;
+---------------+
| undo_space_MB |
+---------------+
|     1688.4531 |
+---------------+
1 row in set (0.00 sec)

mysql> select count(*) cnt, count(*)*16/1024 size_MB, page_type from INNODB_BUFFER_PAGE group by page_type;
+--------+-----------+-------------------+
| cnt    | size_MB   | page_type         |
+--------+-----------+-------------------+
|     55 |    0.8594 | EXTENT_DESCRIPTOR |
|      2 |    0.0313 | FILE_SPACE_HEADER |
|    108 |    1.6875 | IBUF_BITMAP       |
|  17186 |  268.5313 | IBUF_INDEX        |
| 352671 | 5510.4844 | INDEX             |
|     69 |    1.0781 | INODE             |
|    128 |    2.0000 | SYSTEM            |
|      1 |    0.0156 | TRX_SYSTEM        |
|   6029 |   94.2031 | UNDO_LOG          |
|  16959 |  264.9844 | UNKNOWN           |
+--------+-----------+-------------------+
10 rows in set (1.65 sec)

This shows that the total undo space size is now 1.7GB, with less than 100MB of it cached in the buffer pool.

Here are a few graphs from running a heavy concurrent query during a lighter workload where purging could keep up. In this case I used the “injection” benchmark in sysbench, setting --trx-rate to 50% of what the system showed as peak.

mysql> select count(distinct k + length(pad)) from sbtest1;
+--------------------------------+
| count(distinct k+ length(pad)) |
+--------------------------------+
|                       30916851 |
+--------------------------------+
1 row in set (28 min 32.38 sec)

What we can see from those graphs is that InnoDB purging initially progresses at a speed fast enough to keep up with the inflow of transactions; however, as we kick off the complicated query, purging stops, and when the query is done the purge speed settles at a new, much lower level where it cannot keep up with the workload anymore.

Now, there is recognition of this problem, and the innodb_max_purge_lag and innodb_max_purge_lag_delay options set a maximum length for the history, after which a delay of up to the specified number of microseconds is injected into DML statements.

Unfortunately this is not designed very well for use with real applications. The problems I see with its design are twofold.

Looking at Total History: If you think about it, there are two kinds of records within the history: records that can be purged, and records that cannot be purged because they are needed by some active transaction. It is perfectly fine to have a lot of records in the history while some long transaction is running; that is not the cause of the problem or of overload, whereas we expect the “purgeable history” to stay low most of the time.

Looking at the Size rather than the Rate of Change: Even worse, the history blowout prevention looks only at the current value to decide whether to inject a delay, not at whether the history is growing or already shrinking.

Together these mean that long-running transactions concurrent with OLTP workloads are handled very poorly: as soon as the history reaches the specified maximum, the system kicks into overdrive, delaying all statements to the maximum extent possible until the history falls back below the threshold. Here is how it looks on the graphs:

As you can see on the last graph, purge_dml_delay_usec spikes to 10000us (the max I set) even though no purging can be done (the blue line is at zero). It only actually starts working on the history when the heavy query completes, and it only releases the brakes when the purge is complete. In this case the throughput of the system dropped more than fivefold while the delay was active, which would not work for most real-world systems.

Design Thoughts: So what would I change in the purging design and configuration? I would like to see a better default configuration that includes multiple purge threads and an improved purge delay. I would find some way to measure not just the history size but the purgeable history size, and base the purge delay on that. I would also base it on the rate of change rather than on a threshold: inject just enough delay that the history is gradually shrinking. Basing it on the undo space size instead of the number of transactions (which can vary in size) might also be more practical and easier to auto-tune. We can probably also do better with undo space caching: similar to the insert buffer, I’d like to keep it in memory up to, say, 10% of the buffer pool size, as evicting from the cache something you know you will need very soon is bad business. It is also worth considering whether some form of read-ahead could pre-read the undo space that is needed; right now I’ve tested and neither linear nor random read-ahead seems to help pick it up from disk with less random IO.

Practical Thoughts: Whatever improvements come to purging, we have MySQL and Percona Server 5.6 systems to run for some years to come. So what practical steps can we take to manage purge history better?

Monitor: Make sure you are monitoring and graphing innodb_history_list_length. If you use large transactions, set alerts pretty high, but do not leave it unchecked. Two ways to read it are sketched below.
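
The status variable route is available in Percona Server, while the INNODB_METRICS counter also works on stock MySQL 5.6:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_history_list_length';
mysql> SELECT name, count FROM information_schema.INNODB_METRICS
    ->  WHERE name = 'trx_rseg_history_len';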

Configure: Set innodb_purge_threads=8 (or some other value) if you have a write-intensive workload. Consider playing with innodb_max_purge_lag and innodb_max_purge_lag_delay, but be careful: as currently designed, they can really bring the server to its knees. You may consider using them interactively instead, changing them as run-time options if you spot history list growth going unchecked, balancing current workload demands against the resources allocated to purging.

Let it purge before shutdown: In many cases I find purge performance much worse after restarting MySQL Server, because of caching. So a good approach might be to remove the workload from the MySQL server before shutting it down, let the purge of outstanding history complete, and only then shut it down. If the server has crashed, you might consider letting it complete purging before routing traffic back to it.

Use InnoDB Buffer Pool Preload: Use innodb_buffer_pool_dump_at_shutdown=on and innodb_buffer_pool_load_at_startup=on to ensure the undo space is preloaded back into the buffer pool on startup. A my.cnf sketch combining this with the purge setting follows.
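
The values here are illustrative, not prescriptive:

[mysqld]
innodb_purge_threads                = 8
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup  = ON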

P.S. If you wonder where the graphs I used came from: they are from our Percona Cloud Tools, a very convenient tool for analyses like these, allowing access to all MySQL status variables, InnoDB metrics, tons of OS metrics and more.

The post Innodb transaction history often hides dangerous ‘debt’ appeared first on MySQL Performance Blog.



Integrating ClusterControl with FreeIPA and Windows Active Directory for Authentication

October 17, 2014 By Severalnines

Integrating ClusterControl with a corporate LDAP directory is a common task for many IT organizations. In an earlier blog, we showed you how to integrate ClusterControl with OpenLDAP. In this post, we will show you how to integrate with FreeIPA and Windows Active Directory. 

 

How ClusterControl Performs LDAP Authentication

 

ClusterControl supports up to the LDAPv3 protocol, based on RFC 2307. More details on this are in the documentation.

 

When authenticating, ClusterControl first binds to the directory tree server (LDAP Host) using the specified Login DN user and password, then checks whether the username you entered exists as the uid or cn of the User DN. If it exists, ClusterControl uses that username to bind against the LDAP server and checks whether it has the group configured as LDAP Group Name in ClusterControl. If it does, ClusterControl maps the user to the appropriate ClusterControl role and grants access to the UI. You can verify the directory side of this flow before touching ClusterControl; see the sketch below.
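
A hedged ldapsearch sketch for checking the Login DN bind and the user lookup; the host, DNs, password and user are all hypothetical:

$ ldapsearch -x -H ldap://ldap.example.com \
    -D "cn=admin,dc=example,dc=com" -w secret \
    -b "ou=People,dc=example,dc=com" "(uid=alice)"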

 

The following flowchart summarizes the workflow:

 

FreeIPA

 

FreeIPA is a Red Hat-sponsored open source project which aims to provide an easily managed Identity, Policy and Audit (IPA) suite primarily targeted at networks of Linux and Unix computers. It is easy to install and configure, and is an integrated security information management solution combining Linux (Fedora), 389 Directory Server, MIT Kerberos, NTP, DNS and Dogtag (Certificate System).

 




MySQL 5.7.5- More variables in replication performance_schema tables

At MySQL, replication usability is of the utmost importance to us. Replication information has long been part of the SHOW commands, with SHOW SLAVE STATUS occupying a major chunk of it. The other sources of replication information are:

SHOW MASTER STATUS,
SHOW BINLOG EVENTS,
SHOW RELAYLOG EVENTS,
SHOW VARIABLES,
SHOW STATUS,
ERROR LOGS etc.

As the replication module grows, there is a lot more monitoring information, so much that the present interfaces seem too rigid to accommodate all the information we would like to present, and we need to organize it in a more structured manner. In MySQL 5.7.2, we introduced replication performance_schema (P_S) tables providing an SQL interface to monitor replication configuration and status, partitioned into different tables, each table grouping logically related information. You can read more about the contents of these tables in the official MySQL documentation.

In MySQL 5.7.5 we added some more status variables to these performance_schema tables, to enable monitoring of the latest replication features, viz. multi-source replication in labs. Multi-source replication allows a MySQL slave to replicate from multiple sources (masters) directly. Speaking of multi-source, one needs replication information per source, so we added to the replication performance_schema tables the global variables that are useful to extend to per-source scope, to help monitor multi-source replication. Note that these variables still work for single-source replication and can still be accessed as:

SHOW STATUS LIKE 'Slave_running';
SHOW STATUS LIKE 'Slave_retried_transactions';
SHOW STATUS LIKE 'Slave_last_heartbeat';
SHOW STATUS LIKE 'Slave_received_heartbeats';
SHOW STATUS LIKE 'Slave_heartbeat_period';

Note, though, that these status variables are now mostly useful in single-source mode ONLY. If more sources are added, the status variables still apply only to the first source. For other replication sources (masters), the only way to access these variables is through the replication performance_schema tables named in the table below. Here is how the names of the server variables map to the names in the replication performance_schema tables:

Variable name               | P_S Table Name                        | P_S field name
SLAVE_HEARTBEAT_PERIOD      | replication_connection_configuration  | HEARTBEAT_INTERVAL
SLAVE_RECEIVED_HEARTBEATS   | replication_connection_status         | COUNT_RECEIVED_HEARTBEATS
SLAVE_LAST_HEARTBEAT        | replication_connection_status         | LAST_HEARTBEAT_TIMESTAMP
SLAVE_RETRIED_TRANSACTIONS  | replication_execute_status            | COUNT_TRANSACTIONS_RETRIES

The variable ‘slave_running’ reports whether the slave is running or not. This can be found by inspecting the two replication components (receiver and applier) separately: check whether the receiver module is running by executing

SELECT SERVICE_STATE FROM performance_schema.replication_connection_status;

And check whether the execute module is running by executing

SELECT SERVICE_STATE FROM performance_schema.replication_execute_status;

Please try out multi-source replication and our new monitoring interface in the form of replication performance_schema tables. As always, your feedback is very valuable to us.



MySQL 5.7.5-labs: Multi-source Replication

Multi-source replication for MySQL has been released as part of the 5.7.5-labs-preview, downloadable from labs.mysql.com. It is one of several features cooking in the replication technologies at MySQL. (For a bird's-eye view of all replication features introduced in 5.7 and labs, look at the blog posts here and here.)

Previously, we introduced a preliminary multi-source feature in a labs preview. Based on the feedback from that labs release, we have now released a feature preview based on the 5.7.5 DMR which:

  1.  has a better user interface;
  2. is well tested;
  3. and has better integration with other features such as the multi-threaded slave (MTS), global transaction IDs (GTIDs) and the replication performance_schema tables.

In this blog, I describe the what and how of multi-source replication.

Motivation

Multi-source replication allows a replication slave to connect to several sources (aka masters) simultaneously and aggregate the data from these sources. This setup has use cases such as backing up several servers to a single place, merging table shards, or making any replication topology possible. Note that multi-source replication doesn’t do any conflict detection and resolution; those tasks are left to the user’s application. If you are looking for multi-master update-everywhere with conflict detection and automatic resolution, look at the post here.

Replication Channel 

In my previous blog post, the concept of a replication channel was introduced. To recap, a replication channel encompasses the path of binary log events from the master through the IO thread to the relay log files and on to the applier threads (the SQL thread, or coordinator and worker threads). A multi-sourced slave creates one replication channel per master. Each channel has a unique name, and configuration parameters can be specified per channel (described below).

Preconditions for setting up multi-source replication

Replication repositories are stored in either FILE or TABLE based repositories. The TABLE based repository is crash-safe, and multi-source replication is usable only with TABLE based repositories. To set up multi-source replication, start mysqld with
--master-info-repository=TABLE and --relay-log-info-repository=TABLE.
(A better way is to put these options in the .cnf files.) If you are using single-source replication with a FILE repository and want to set up multi-source replication, convert the repositories to TABLE dynamically in the following way:
STOP SLAVE;
SET GLOBAL master_info_repository = 'TABLE';
SET GLOBAL relay_log_info_repository = 'TABLE';

Commands for operations on a single channel

To make replication commands act per channel, we have introduced a new clause, FOR CHANNEL "<channel_name>", to the replication commands, so that a user can manage a channel independently of other channels.
The following commands are channel-specific (a usage sketch follows the list):

  • CHANGE MASTER… FOR CHANNEL “<channel_name>”
    - Creates a channel if it doesn’t exist.
  • START SLAVE … FOR CHANNEL “<channel_name>”
  • STOP SLAVE… FOR CHANNEL “<channel_name>”
  • SHOW RELAYLOG EVENTS …FOR CHANNEL “<channel_name>”
  • FLUSH RELAY LOGS FOR CHANNEL “<channel_name>”
  • SHOW SLAVE STATUS FOR CHANNEL “<channel_name>”
  • RESET SLAVE [ALL] FOR CHANNEL “<channel_name>”
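
To illustrate, a sketch of configuring and starting two channels; the hosts and credentials are hypothetical, and MASTER_AUTO_POSITION=1 assumes GTIDs are enabled:

CHANGE MASTER TO MASTER_HOST='master1.example.com', MASTER_USER='repl',
  MASTER_PASSWORD='repl_pass', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master-1';
CHANGE MASTER TO MASTER_HOST='master2.example.com', MASTER_USER='repl',
  MASTER_PASSWORD='repl_pass', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master-2';
START SLAVE FOR CHANNEL 'master-1';
START SLAVE FOR CHANNEL 'master-2';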

An extra parameter has been introduced for the following functions:

  • MASTER_POS_WAIT(binlog_file, binlog_pos, time_out, channel_name);
  • WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS(gtid_set, time_out, channel_name);

Compatibility with single source replication

The server automatically creates a channel whose name is the empty string (""). We call this the default channel. Replication commands with no FOR CHANNEL clause act on this channel if no named channel is configured (a named channel is one whose name is not "").
This means that when only the default channel exists, the user can still use the same commands to interact with replication. As such, there is no need in that case to use the FOR CHANNEL clause at all, since the behavior matches the one before multi-source replication. So with a single channel configured, we get backwards compatibility command-wise.

Commands for operations on all channels

Since a slave can have several replication channels, if the FOR CHANNEL clause is not provided, replication commands act on all channels wherever that is semantically valid. (Note that this behavior is different from the previous labs release.)
For example:

  • START SLAVE [IO_THREAD|SQL_THREAD] starts replication threads for all the channels.
  • STOP SLAVE [IO_THREAD|SQL_THREAD] stops replication threads for all the channels.
  • SHOW SLAVE STATUS reports the status for all the channels.
  • RESET SLAVE [ALL] resets the slave for all channels.
    - To preserve backward compatibility though, the server purges all the relay log files of named channels but initializes them again only for the default channel.
    - The ALL option deletes the internal memory structures, and thereby all the information and configuration associated with a particular channel.
  • FLUSH RELAY LOGS flushes the relay logs of all channels.

However, not all commands make sense for all channels. The following commands generate error 1964 (“Multiple channels exist on the slave. Please provide channel name as an argument.”) when a channel is not provided and the number of channels is greater than 1. (Note that a default channel always exists.)

  • SHOW RELAYLOG EVENTS;
  • CHANGE MASTER..;
  • MASTER_POS_WAIT();
  • WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS();

Multi-source monitoring

To monitor the status of all slave channels, a user has two options:

  1. performance_schema tables: 5.7.2 introduced replication performance_schema tables to monitor replication status. Multi-source replication extends these tables, adding Channel_Name as their first column. This enables users to write complex queries that use Channel_Name as a key.
  2. SHOW SLAVE STATUS [FOR CHANNEL "<channel_name>"]: By default this command shows the slave status for all channels, with one row per channel. The identifier channel_name is added as a column in the result set. If a FOR CHANNEL clause is provided, the status of only that replication channel is shown.
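
As a sketch, a per-channel health check could combine both options; the query below assumes the replication_connection_status table and the Channel_Name column described above (the exact column set may differ in the labs release):

SELECT Channel_Name, Service_State, Last_Error_Message
  FROM performance_schema.replication_connection_status;

SHOW SLAVE STATUS FOR CHANNEL "master-1";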

(NOTE: SHOW VARIABLES does not work with multi-source replication. The
information that was available through these variables has been migrated to the
replication performance_schema tables. For more information, see Shiv’s post here.
The current SHOW VARIABLES shows the status of only the default channel.)

User session and server log error messages

Since a multi-sourced slave can have multiple channels, we introduced explicit
error messages pertaining to a channel. To make this consistent across all
channels, we introduced new error codes and messages.
For example, the notes that used to be emitted in previous versions of MySQL,
“Slave is already running” and “Slave is already stopped”, are replaced with
“Replication thread(s) for channel “channel_name” are already running” and
“Replication thread(s) for channel “channel_name” are already stopped” respectively.
We also changed server log messages to indicate the channel on which the note/warning/error was generated. This makes debugging and/or tracing easier.

Integration with other features

This feature is well integrated with Global Transaction Identifiers and the
Multi-Threaded Slave. Currently, replication filters are global, apply
to all channels, and cannot be set per channel. Semi-synchronous replication only works when
all masters and the slave are semi-sync enabled. We are looking at further extending
the multi-source framework for better support of these features.

Conclusion

Multi-source replication has been released as part of 5.7.5-labs-preview. Please try out this
new labs release, which can be downloaded at labs.mysql.com, and tell us what more
you need to make this feature even better.



Abdel-Mawla Gharieb: How to install multiple MySQL instances on a single host using MyEnv?

We have been asked several times by MySQL users how to install multiple MySQL instances on a single host.
Typically, this is required when testing different MySQL versions or servers (MySQL Server, Percona Server, and MariaDB Server) while no spare hardware is available.
Sometimes, it is even required to install multiple MySQL instances on a single production server.

In this article, I'll go through the steps needed to install multiple MySQL instances on a single host (using the tarball binaries) and show how our popular tool MyEnv makes this process easy.

Prepare MySQL environment

[root@centos-temp ~]# groupadd mysql
[root@centos-temp ~]# useradd -g mysql mysql
[root@centos-temp ~]# su - mysql
[mysql@centos-temp:~]$ mkdir ~/product
[mysql@centos-temp:~]$ mkdir ~/data
[mysql@centos-temp ~]$ cd /downloads
[mysql@centos-temp ~]$ wget http://dev.mysql.com/get/Downloads/MySQL-5.6/mysql-5.6.21-linux-glibc2.5-x86_64.tar.gz
[mysql@centos-temp ~]$ cd ~/product
[mysql@centos-temp ~]$ tar xf /downloads/mysql-5.6.21-linux-glibc2.5-x86_64.tar.gz
[mysql@centos-temp ~]$ ln -s mysql-5.6.21-linux-glibc2.5-x86_64 mysql-5.6.21

Install MyEnv

MyEnv can be downloaded from here and the installation steps are listed here.

Install the first instance (named master)
  • Prepare the first instance:

    [mysql@centos-temp ~]$ mkdir ~/data/master
    [mysql@centos-temp ~]$ vi ~/data/master/my.cnf

    [mysqld]
    port    = 3306
    basedir = /home/mysql/product/mysql-5.6.21
    datadir = /home/mysql/data/master
    socket  = /home/mysql/data/master/master.sock
    user    = mysql

  • Launch the MyEnv installer:

    [mysql@centos-temp ~]$ ~/product/myenv/bin/installMyEnv.sh
    PHP is installed on /usr/bin/php
    Starting MyEnv installer: /home/mysql/product/myenv-1.1.2/bin/installMyEnv.php
    Configuration file /etc/myenv/myenv.conf does NOT exist.
    Copy from template or abort (T, a):

  • Since this is the first instance, the MyEnv config file does not exist yet; we take the template (t):

    Copy from template or abort (T, a): t
    Copy /home/mysql/product/myenv-1.1.2/etc/myenv.conf.template to /etc/myenv/myenv.conf

  • MyEnv will detect that no instances exist yet; we choose the first option (a) to add a new instance:

    No instance exists yet. An instance is the same as a mysqld process.
    What do you want to do next?
      o Add a new instance,
      o change an existing instance,
      o delete an existing instance,
      o save configuration and exit or
      o quit without saving (A/c/d/s/q)? a

  • MyEnv will ask for the new instance name (master):

    You have to enter a new instance name. An instance name can consist of
    0-9, a-z, A-Z, - and _. Examples: test, prod, dev, [mysqld1] mysqld-3306
    Enter a new instance name: master

  • MyEnv will ask for the MySQL basedir and datadir:

    Changing instance master:
    Set basedir. The basedir is the directory where your MySQL binaries are located.
    Example: /home/mysql/product/myenv
             /home/mysql/product/myenv-1.1.2
             /home/mysql/product/mysql-5.6.21
             /home/mysql/product/mysql-5.6.21-linux-glibc2.5-x86_64
             /usr/local/mysql
             /opt/mysql
    [/usr]
    basedir = /home/mysql/product/mysql-5.6.21
    /home/mysql/product/mysql-5.6.21 seems to exist and has already some files in it.
    Omitting installation of binary tar ball.
    Set datadir. The datadir is the directory where your MySQL data are located.
    Example: /usr/local/mysql/data, /opt/mysql/data or /var/lib/mysql or
             /home/mysql/product/mysql-5.6.21/data or [/home/mysql/data/master] or
             /home/mysql/data/master
    datadir = /home/mysql/data/master
    WARNING: directory /home/mysql/data/master already exist.

  • MyEnv will detect that no mysql database is created yet in the datadir and ask whether it should install one for us (using the normal mysql_install_db script):

    Shall I install a mysql database under /home/mysql/data/master (Y/n)? y
    Installing MySQL instance. Please be patient, this can take a while...
    /home/mysql/product/mysql-5.6.21/scripts/mysql_install_db --datadir=/home/mysql/data/master --basedir=/home/mysql/product/mysql-5.6.21

  • For the new instance, we should specify the port, the socket, and the location of my.cnf (all of these must be specific per instance):

    Set port. Example: 3307 or [3306]
    port = 3306
    Set socket. Example: /tmp/mysql-3306.sock /tmp/mysql-master.sock
                         [/tmp/mysql.sock] /var/run/mysqld/mysql.sock
    socket = /home/mysql/data/master/master.sock
    Choose location of my.cnf: Example: /etc/my.cnf /etc/mysql/my.cnf
                         /home/mysql/data/master/my-3306.cnf [/home/mysql/data/master/my.cnf]
    my.cnf = /home/mysql/data/master/my.cnf

  • MyEnv will list the just-added instance and ask what the next step should be; we save and exit (s):

    The following instances are available: master
    An instance is the same as a mysqld process.
    What do you want to do next?
      o Add a new instance,
      o change an existing instance,
      o delete an existing instance,
      o save configuration and exit or
      o quit without saving (a/c/d/S/q)? s
    Writing the configuration file...
    Rename /etc/myenv/myenv.conf to /etc/myenv/myenv.conf.2014-10-15-05-50-12
    Write /etc/myenv/myenv.conf

  • MyEnv will propose to add the following lines to the .bash_profile file; we confirm (y):

    Do you want to add the following code to your ~/.bash_profile file?
    . /etc/myenv/MYENV_BASE
    MYENV_PWD=`pwd`
    cd $MYENV_BASE/bin
    . myenv.profile
    cd $MYENV_BASE; $MYENV_BASE/bin/showMyEnvStatus.php; cd - > /dev/null
    cd $MYENV_PWD
    (Y/n)? y
    Writing /etc/myenv/MYENV_BASE

  • We should copy the MyEnv startup script as shown here into the appropriate path (the provided commands should be executed manually by root or another privileged user afterwards):

    Please copy yourself manually the MyEnv start/stop script to the following location:
    shell> sudo cp /home/mysql/product/myenv-1.1.2/bin/myenv.server /etc/init.d/myenv
    done (Y)? y
    and link it to your O/S runlevels:
    RedHat/CentOS: shell> sudo chkconfig --add myenv; chkconfig myenv on
    Ubuntu/Debian: shell> sudo update-rc.d myenv defaults
    SLES/OpenSuSE: shell> sudo chkconfig --add myenv
    done (Y)? y
    Now source your profile as follows: . ~/.bash_profile

  • Now we log out and log in again so that the .bash_profile code is executed (we can copy the startup script in the meantime):

    [mysql@centos-temp ~]$ exit
    [root@centos-temp ~]# cp /home/mysql/product/myenv-1.1.2/bin/myenv.server /etc/init.d/myenv
    [root@centos-temp ~]# chkconfig --add myenv
    [root@centos-temp ~]# su - mysql
    Up   :
    Down : master (5.6.21)
    master ( 3306) : test
    mysql@centos-temp:~ [master, 3306]>

  • Now we can start the just-installed instance and check that it started correctly:

    mysql@centos-temp:~ [master, 3306]> start
    . SUCCESS!
    mysql@centos-temp:~ [master, 3306]> up
    Up   : master (5.6.21)
    Down :
    master ( 3306) : test
    mysql@centos-temp:~/data/master [master, 3306]>

Cool! We have the first instance installed. Let's install the second one ...

Install the second instance (named slave1)
  • Prepare the second instance:

    mysql@centos-temp:~ [master, 3306]> mkdir ~/data/slave1
    mysql@centos-temp:~ [master, 3306]> vi ~/data/slave1/my.cnf

    [mysqld]
    port    = 3307
    basedir = /home/mysql/product/mysql-5.6.21
    datadir = /home/mysql/data/slave1
    socket  = /home/mysql/data/slave1/slave1.sock
    user    = mysql

  • Launch the MyEnv installer and add the new instance:

    mysql@centos-temp:~ [master, 3306]> product/myenv/bin/installMyEnv.sh
    PHP is installed on /usr/bin/php
    Starting MyEnv installer: /home/mysql/product/myenv-1.1.2/bin/installMyEnv.php
    Configuration file /etc/myenv/myenv.conf already exists.
    Use this, overwrite with template or abort (U,t,a): u
    The following instances are available: master
    An instance is the same as a mysqld process.
    What do you want to do next?
      o Add a new instance,
      o change an existing instance,
      o delete an existing instance,
      o save configuration and exit or
      o quit without saving (A/c/d/s/q)? a
    You have to enter a new instance name. An instance name can consist of
    0-9, a-z, A-Z, - and _. Examples: test, prod, dev, [mysqld1] mysqld-3306
    Enter a new instance name: slave1
    Changing instance slave1:
    Set basedir. The basedir is the directory where your MySQL binaries are located.
    Example: /home/mysql/product/myenv
             /home/mysql/product/myenv-1.1.2
             /home/mysql/product/mysql-5.6.21
             /home/mysql/product/mysql-5.6.21-linux-glibc2.5-x86_64
             /usr/local/mysql
             /opt/mysql
    [/usr]
    basedir = /home/mysql/product/mysql-5.6.21
    /home/mysql/product/mysql-5.6.21 seems to exist and has already some files in it.
    Omitting installation of binary tar ball.
    Set datadir. The datadir is the directory where your MySQL data are located.
    Example: /usr/local/mysql/data, /opt/mysql/data or /var/lib/mysql or
             /home/mysql/product/mysql-5.6.21/data or [/home/mysql/data/slave1] or
             /home/mysql/data/slave1
    datadir = /home/mysql/data/slave1
    WARNING: directory /home/mysql/data/slave1 already exist.
    Shall I install a mysql database under /home/mysql/data/slave1 (Y/n)? y
    Installing MySQL instance. Please be patient, this can take a while...
    /home/mysql/product/mysql-5.6.21/scripts/mysql_install_db --datadir=/home/mysql/data/slave1 --basedir=/home/mysql/product/mysql-5.6.21
    Set port. Example: 3307 or [3306]
    port = 3307
    Set socket. Example: /tmp/mysql-3307.sock /tmp/mysql-slave1.sock
                         [/tmp/mysql.sock] /var/run/mysqld/mysql.sock
    socket = /home/mysql/data/slave1/slave1.sock
    Choose location of my.cnf: Example: /etc/my.cnf /etc/mysql/my.cnf
                         /home/mysql/data/slave1/my-3307.cnf [/home/mysql/data/slave1/my.cnf]
    my.cnf = /home/mysql/data/slave1/my.cnf
    The following instances are available: master slave1
    An instance is the same as a mysqld process.
    What do you want to do next?
      o Add a new instance,
      o change an existing instance,
      o delete an existing instance,
      o save configuration and exit or
      o quit without saving (a/c/d/S/q)? s
    Writing the configuration file...
    Rename /etc/myenv/myenv.conf to /etc/myenv/myenv.conf.2014-10-15-06-44-04
    Write /etc/myenv/myenv.conf
    Now source your profile as follows: . ~/.bash_profile

  • Then we log out and log in again to load the new configuration:

    mysql@centos-temp:~ [master, 3306]> exit
    logout
    [root@centos-temp product]# su - mysql
    Up   : master (5.6.21)
    Down : slave1 (5.6.21)
    master ( 3306) : test
    slave1 ( 3307) : test
    mysql@centos-temp:~ [master, 3306]>

MyEnv says that we have two instances (master and slave1); master is up and running while slave1 is stopped. Let's start it then ...

  • First, since all commands we execute are sent to the master instance, we should switch to the slave1 instance:

    mysql@centos-temp:~ [master, 3306]> slave1
    mysql@centos-temp:~ [slave1, 3307]> start
    . SUCCESS!
    mysql@centos-temp:~ [slave1, 3307]>

  • Both instances are now up and running:

    mysql@centos-temp:~ [slave1, 3307]> up
    Up   : master (5.6.21) slave1 (5.6.21)
    Down :
    master ( 3306) : test
    slave1 ( 3307) : test
    mysql@centos-temp:~ [slave1, 3307]>
Instance navigation and MySQL connections

Having installed the two instances, we need to know how to navigate between them and open MySQL connections.

  • Open a MySQL connection to the master instance:

    mysql@centos-temp:~ [master, 3306]> mysql -u root
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 7
    Server version: 5.6.21 MySQL Community Server (GPL)
    Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
    Other names may be trademarks of their respective owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    root@localhost master [(none)] SQL>

  • While the instance name is already printed in the terminal, we double-check with a MySQL command (the port in use) and exit afterwards:

    root@localhost master [(none)] SQL> SHOW GLOBAL VARIABLES LIKE 'port';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | port          | 3306  |
    +---------------+-------+
    1 row in set (0.00 sec)
    root@localhost master [(none)] SQL> exit
    Bye
    mysql@centos-temp:~ [master, 3306]>

  • Open a MySQL connection to the slave1 instance and check the port in use:

    mysql@centos-temp:~ [master, 3306]> slave1
    mysql@centos-temp:~ [slave1, 3307]> mysql -u root
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 5
    Server version: 5.6.21 MySQL Community Server (GPL)
    Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
    Other names may be trademarks of their respective owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    root@localhost slave1 [(none)] SQL> SHOW GLOBAL VARIABLES LIKE 'port';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | port          | 3307  |
    +---------------+-------+
    1 row in set (0.00 sec)
    root@localhost slave1 [(none)] SQL>

  • Navigate back to the master instance:

    mysql@centos-temp:~ [slave1, 3307]> master
    mysql@centos-temp:~ [master, 3306]>

More information about MyEnv commands can be found here.

Notes:
  • Using the same procedure, we can add more instances using either the same binaries or different ones.
  • We used a minimal configuration in the my.cnf files; you are free to make your own changes.


Percona Toolkit for MySQL with MySQL-SSL Connections

I recently had a client ask me how to use Percona Toolkit tools with an SSL connection to MySQL (MySQL-SSL). SSL connections aren’t widely used in MySQL, since most installations live within an internal network. Still, there are cases where you could be accessing MySQL over the public internet or even over a public “private” network (e.g., a WAN between two colo datacenters). In order to keep packet sniffers at bay, the connection to MySQL should be encrypted.

If you are connecting to Amazon RDS from home or office (i.e., not within the AWS network), you had better be encrypted!

As there is already a MySQL Performance Blog post on how to set up MySQL SSL connections, we can skip that and dive right in.

As you probably know, the mysql client can read multiple configuration files, the primary one being /etc/my.cnf. You probably also know that the client reads a config file in your $HOME directory: .my.cnf (that’s dot-my-dot-cnf). It is inside this file that we can set parameters for our shell-user account when connecting to MySQL hosts.

Percona Toolkit uses Perl’s DBD::mysql driver (via DBI) to make connections to MySQL hosts. This driver is linked to the libmysqlclient C library, which is responsible for reading and parsing the global config file as well as your $HOME config file. Let’s set some options here that are not directly available in the toolkit scripts. Using $MY_FAVORITE_EDITOR, edit your $HOME/.my.cnf as such:

[client]
user     = myuser
password = foobar
ssl-ca   = /Users/drmac/ca-cert.pem

You must use the absolute path to the CA file. Relative paths won’t cut it:

ERROR 2026 (HY000): SSL connection error: SSL_CTX_set_default_verify_paths failed

Test your connection first using the mysql client:

asura:~ drmac$ mysql -h 74.13.19.17 -e "SHOW STATUS LIKE 'Ssl_cipher'"
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+

Excellent! Now we can use any Percona Toolkit script and connect via SSL:

asura:~ drmac$ pt-table-checksum -h 74.13.19.17 -d foo -t zipcodes
            TS ERRORS  DIFFS   ROWS CHUNKS SKIPPED   TIME TABLE
10-13T14:10:02      0      0  45358      7       0  5.959 foo.myzipcodes

Sweet!

Unfortunately, Percona Toolkit scripts are hard-coded to read the [client] section of your .my.cnf. If you don’t want to overwrite any existing configuration that may be present, you can create a new configuration file and pass it to any toolkit script with -F. Again, relative paths won’t work here; use the absolute path, even if you are in the same directory.

asura:~ drmac$ cp .my.cnf mytestconfig.cnf
asura:~ drmac$ rm .my.cnf
asura:~ drmac$ pt-table-checksum -h 74.13.19.17 -d foo -t zipcodes -F /Users/drmac/mytestconfig.cnf

Now you can continue using our awesome tools in a secure manner.

Cheers!
-Matthew

The post Percona Toolkit for MySQL with MySQL-SSL Connections appeared first on MySQL Performance Blog.



Benchmarking Presentation at Percona Live London 2014

In a few weeks I’m presenting “Performance Benchmarking: Tips, Tricks, and Lessons Learned” at Percona Live London 2014 (November 3-4). I continue to learn lessons and improve my benchmarking capabilities, so the content is a full upgrade from my presentation at Percona Live Santa Clara in April 2013. Anyone interested in achieving and sustaining the best performance out of their software/hardware/application should attend.

Also, Tokutek is sponsoring, so we’ll be available in the expo hall throughout the show.

If you are attending or in the area and want to learn more about TokuDB or TokuMX, please email me at tim@tokutek.com to coordinate.

Hope to see you at the show!


The post Benchmarking Presentation at Percona Live London 2014 appeared first on Tokutek.



InnoDB: Supporting Page Sizes of 32k and 64k

In the new InnoDB lab release we support page sizes of 32k and 64k. This gives users even more choice of page size, allowing them to further customize InnoDB for their particular workload.

There are some things worthy of note related to this new feature (a short demonstration follows the list):

  1. The extent size changes when innodb_page_size is set to 32k or 64k.
    The extent size is 2M for 32k pages and 4M for 64k pages (it is 1M for 4k, 8k, and 16k pages). If we did not enlarge the extent size, we would have too many extent headers on the allocation bitmap page, and the bitmap page would overflow.
  2. The innodb_log_buffer_size default value changes when innodb_page_size is set to 32k or 64k: we change the default from 8M to 16M in this case. If we did not increase innodb_log_buffer_size, we could suffer from a lack of free log pages because fewer log buffer pages would be available.
  3. We do not support compression when innodb_page_size is set to 32k or 64k.
    When innodb_page_size is set to 32k or 64k, you will get the following warning if you include a ROW_FORMAT=COMPRESSED or a KEY_BLOCK_SIZE clause in your table DDL:
    Warning 1478 InnoDB: Cannot create a COMPRESSED table when innodb_page_size > 16k
    If innodb_strict_mode is ON, you will instead get the following error:
    Error 1031 Table storage engine for 't1' doesn't have this option
  4. The maximum record size is 16k when innodb_page_size is set to 64k.
    In theory we can store a record whose size is about 50% of the page size, but in ROW_FORMAT=REDUNDANT the record header contains 14-bit pointers to the end of each field, limiting the maximum record size to 16k. For simplicity, we enforce the 16k record size limit on all row formats when innodb_page_size is 64k.
  5. The innodb_page_size setting is global for all tablespaces within a single MySQL server instance.
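
As a quick sketch of points 3 and 4 above (the table name t1 is illustrative; innodb_page_size must be set before the data directory is initialized and cannot be changed afterwards):

mysql> SHOW VARIABLES LIKE 'innodb_page_size';  -- 65536 on a 64k server
mysql> SET SESSION innodb_strict_mode = ON;
mysql> CREATE TABLE t1 (a INT) ROW_FORMAT=COMPRESSED;
ERROR 1031 (HY000): Table storage engine for 't1' doesn't have this option
mysql> SET SESSION innodb_strict_mode = OFF;
mysql> CREATE TABLE t1 (a INT) ROW_FORMAT=COMPRESSED;  -- created, but uncompressed
mysql> SHOW WARNINGS;                                  -- shows Warning 1478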

We look forward to your feedback on this new feature! Please let us know if you encounter any issues, or if you have any more general feedback.

Thank you for using MySQL!



SSL and RSA Configuration Made Easy!

What?

MySQL 5.7.5 Enterprise Server now makes it easy to set up SSL and RSA artifacts and to enable mysqld to use them. Two new read-only global options are introduced for this task:

  • --auto-generate-certs: Enables automatic generation and detection of SSL artifacts at server start-up.
  • --sha256-password-auto-generate-rsa-keys: Enables automatic generation of RSA key pair.

These options govern automatic generation and detection of SSL artifacts and RSA key pairs respectively. Auto-generated files are placed inside the data directory, as it is under the control of the MySQL server. Both options default to ON.

For the sha256_password authentication plugin, the private and public key files already default to the data directory, so automatic detection of these files is already in place. Thus, the function of --sha256-password-auto-generate-rsa-keys is limited to automatic key generation.

Why?

Using SSL connections to communicate with the server protects one’s data from the eyes of malicious entities, especially when the server and clients are connected through open and/or insecure networks. While MySQL does provide a definitive guide to help users set up SSL certificates and keys, one still needs to do the following in order to enable SSL within the MySQL server (a sketch of the resulting configuration follows the list):

  • Use the steps provided in the documentation to generate the certificates
  • Move these certificates and keys to a secure location
  • Update the MySQL server configuration file to specify the location of these certificates
  • Start the MySQL server in order to use the new SSL artifacts
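
For reference, the manual configuration being automated here is a my.cnf section like the following sketch (paths are placeholders):

[mysqld]
ssl-ca   = /secure/location/ca.pem
ssl-cert = /secure/location/server-cert.pem
ssl-key  = /secure/location/server-key.pem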

The case is similar with RSA keys. While the documentation helps one generate an RSA key pair, using the newly generated keys still requires steps similar to the ones mentioned above.

Our aim is to make MySQL secure by default, while also making it easy to set up a secure environment with very little user intervention. These options are a step towards this goal. Since the new server options default to ON, in the absence of SSL artifacts and RSA key pairs automatic generation takes place, and the MySQL server starts up with SSL and RSA capabilities enabled. This is convenient for users who wish to create secure connections to the MySQL server without going through the trouble of generating SSL artifacts and RSA key pairs by hand and configuring the server to use them.

Note that the purpose of this functionality is to encourage users to use secure methods when connecting to the server by making the initial secure configuration easy. For better security, it is strongly recommended that users later switch to a valid set of certificates signed by a recognized certificate authority as soon as possible, rather than continuing to use the auto generated certificates indefinitely.

How?

Auto-enabling SSL support

The option --auto-generate-certs kicks in if none of the SSL command line options (except --ssl, of course!) are specified. It works in the following manner:

  • Step 1: Check whether any of the SSL command line options except --ssl are specified. If so, the server will skip automatic generation and try to use the supplied options:
    ... 2014-09-23T06:56:07.353216Z 0 [Note] Skipping generation of SSL certificates as options related to SSL are specified. ...
  • Step 2: Check for existing SSL artifacts in the data directory. If they exist, then the automatic creation process is skipped with a message similar to the following:
    ... 2014-09-23T06:56:45.146238Z 0 [Note] Skipping generation of SSL certificates as certificate files are present in data directory. ...
    Note that we check for the presence of the ca.pem, server-cert.pem, and server-key.pem files, as these three files are essential for enabling SSL support on the MySQL server.
  • Step 3: If the certificate files are not present in the data directory, then the following new certificate files are generated within the data directory:

    File Name        Description
    ca.pem           X509 certificate of the CA – self-signed
    ca-key.pem       Private key for the CA certificate
    server-cert.pem  X509 certificate for the server – signed by the auto-generated CA
    server-key.pem   Private key for the server certificate
    client-cert.pem  X509 certificate for the client – signed by the auto-generated CA
    client-key.pem   Private key for the client certificate

    Upon successful generation, the MySQL server will log a message similar to the following:
    ... 2014-09-23T06:58:03.184170Z 0 [Note] Auto generated SSL certificates are placed in data directory. ...

From this set of generated files, ca.pem, server-cert.pem, and server-key.pem are used for the --ssl-ca, --ssl-cert and --ssl-key options respectively. Using these auto generated files allows SSL support to be automatically enabled within the MySQL server.

mysql> show variables like '%ssl%';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| have_openssl  | YES             |
| have_ssl      | YES             |
| ssl_ca        | ca.pem          |
| ssl_capath    |                 |
| ssl_cert      | server-cert.pem |
| ssl_cipher    |                 |
| ssl_crl       |                 |
| ssl_crlpath   |                 |
| ssl_key       | server-key.pem  |
+---------------+-----------------+
9 rows in set (0.01 sec)

mysql> show status like 'Ssl_cipher';
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+
1 row in set (0.00 sec)

Furthermore, an extra set of X509 certificates and private keys are generated which can be used as the client certificate and key.

Some of the properties of the automatically generated certificates and keys are:

  • The RSA key is 2048 bits.
  • The certificates are signed using the sha256 algorithm.
  • The certificates are valid for 1 year.
  • The subject line of the certificates contain only the common name (CN) field.
  • The naming convention for the generated CN is:
    MySQL_Server_<version>_Auto_Generated_<TYPE>_Certificate
    where <version> is fixed at compile time and TYPE is one of CA, Server, and Client, e.g. CN=MySQL_Server_X.Y.Z_Auto_Generated_Server_Certificate. (These properties can be verified with the openssl sketch after this list.)
  • The new CA certificate is self-signed and other certificates are signed by this new auto generated CA certificate and private key.
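
As a sketch, these properties can be verified with the openssl command line tool; the data directory path is a placeholder:

shell> openssl x509 -in /path/to/datadir/ca.pem -noout -subject -enddate
shell> openssl x509 -in /path/to/datadir/server-cert.pem -noout -subject -issuer
shell> openssl rsa -in /path/to/datadir/server-key.pem -noout -text | head -1
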
Auto-enabling RSA support for sha256_password authentication

Much like the auto-enabling of SSL support, --sha256-password-auto-generate-rsa-keys is responsible for automatic generation of the RSA key pair. When a client connects to the server using the sha256_password authentication plugin, the password is never sent in cleartext. By default, sha256_password tries to use an SSL connection. If MySQL is built with OpenSSL, the additional option of using RSA encryption is available to the client. The MySQL server exposes --sha256_password_private_key_path and --sha256_password_public_key_path, which can be used to point to the RSA private key and public key respectively at server startup.

The new option, --sha256-password-auto-generate-rsa-keys, works in the following manner:

  • Step 1: Check if a non-default value is used for either --sha256_password_private_key_path or --sha256_password_public_key_path. If so, the server will skip automatic generation and try to obtain the keys from the specified locations:
    ... 2014-09-23T06:59:22.776254Z 0 [Note] Skipping generation of RSA key pair as options related to RSA keys are specified. ...
  • Step 2: If the default location is used for both of these options, then check if the private_key.pem and public_key.pem files are present in the data directory. If these files are found, then auto generation is skipped.
    ... 2014-09-23T06:56:45.160448Z 0 [Note] Skipping generation of RSA key pair as key files are present in data directory. ...
  • Step 3: Otherwise we generate the private_key.pem and public_key.pem files with a key length of 2048 bits. These files are placed within the data directory and are picked up automatically by the server.
    ... 2014-09-23T06:58:03.363858Z 0 [Note] Auto generated RSA key files are placed in data directory. ...

These keys are then picked up by the server to enable RSA support for sha256_password authentication.

mysql> show variables like 'sha256_password%';
+-----------------------------------------+-----------------+
| Variable_name                           | Value           |
+-----------------------------------------+-----------------+
| sha256_password_auto_generate_rsa_keys  | ON              |
| sha256_password_private_key_path        | private_key.pem |
| sha256_password_public_key_path         | public_key.pem  |
+-----------------------------------------+-----------------+
3 rows in set (0.00 sec)

mysql> show status like '%public_key%'\G
*************************** 1. row ***************************
Variable_name: Rsa_public_key
        Value: -----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuTlv3K2nKl8+PbutlSxX
mJ9+S9iW9Bz0Y6QWXa+FwNH00e2ZYBTfhemx25JmcLS1nI6yyX/ToV9d+s9yWLEf
9gaa8wpE8rfzucfy/BpyrQidF2coSKNW50SMbPG7nEkkC0p6iCw+ejCZhqNBKUEK
uYajrdUhnj/dNVTpIoCDteAC14oDMN0ZbhhnuNM0loZGW2LQMPNG3r9UxXbs/d31
nKa07jkIA0t89QtYH4FVYTek582EDwdFm/yWDizFGxmllVmOL3A50GsX72YT+8VF
l1hI39t6vQGskMDsoSjDpMOGzQBeNXdeNHy8gl/QG4ZrREFd9lvlC5Won1MRo7TJ
FwIDAQAB
-----END PUBLIC KEY-----
1 row in set (0.00 sec)
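
As a client-side sketch (the user name is a placeholder): a mysql client built with OpenSSL can keep the password protected even over an unencrypted connection by pointing --server-public-key-path at a local copy of the auto-generated public key, which the client then uses to RSA-encrypt the password before sending it:

shell> mysql -u sha256_user -p --server-public-key-path=public_key.pem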

We look forward to your input on these new features! Please let us know if you have any questions, or if you encounter any problems.



