Planet MySQL

Basic Windows MySQL Installation Without Installer

It dawned on me that most folks are using the Installer these days.

As I need quick access to every MySQL version, using an Installer is never an option.
And folks wanting 100% control over their setups may not want an installer doing things for them.

So this shows how to set up an instance manually.
  1. download non-installer .zip version from dev.mysql.com
  2. create a directory c:\mysql and c:\mysql\tmp
  3. unzip the .zip into c:\mysql
  4. move the data directory into c:\mysql for easier future upgrades
  5. create a basic my.ini
  6. install the service
  7. start the service


Here I'll show each step in more detail. I purposely leave out things like post-installation security, to keep it simple.
1.  Create a directory. Decide where you will put the installation and datadir. I use c:\mysql and c:\mysql\data since I truly despise the "windows way" with long paths such as "c:\Program Files\MySQL Server 5.6" ...
C:\>mkdir mysql
C:\>cd mysql
C:\mysql>mkdir tmp
C:\mysql>dir
 Volume in drive C has no label.
 Volume Serial Number is 802E-2730
 Directory of C:\mysql
2014/08/01  09:31    <DIR>          .
2014/08/01  09:31    <DIR>          ..
2014/08/01  09:31    <DIR>          tmp
               0 File(s)              0 bytes
               3 Dir(s)  74 040 700 928 bytes free
2.  Download the non-installer .zip version. Use a browser to download the latest version, for example: http://dev.mysql.com/get/Downloads/MySQL-5.6/mysql-5.6.20-win32.zip

3.   Extract the zip file.
I use 7-Zip or WinRAR, but Windows Explorer can also be used to extract the .zip file right here.
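For example, with 7-Zip a command-line extraction might look like this (the path to 7z.exe is the assumed default install location):

C:\mysql>"C:\Program Files\7-Zip\7z.exe" x mysql-5.6.20-win32.zip -oC:\mysql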

4.  Move the data directory. The non-installer .zip comes with a data directory, which I will use in this installation. As you might want to upgrade the instance later, I prefer to put the datadir in a separate location from the version just downloaded.

C:\mysql>dir
 Volume in drive C has no label.
 Volume Serial Number is 802E-2730
 Directory of C:\mysql
2014/08/01  08:11    <DIR>          .
2014/08/01  08:11    <DIR>          ..
2014/08/01  08:11    <DIR>          data
2014/08/01  08:11    <DIR>          mysql-5.6.20-win32
2014/08/01  07:52       353 970 000 mysql-5.6.20-win32.zip
               1 File(s)    353 970 000 bytes
               4 Dir(s)  73 969 897 472 bytes free
C:\mysql>move mysql-5.6.20-win32\data data
        1 dir(s) moved.


5.  Write the my.ini. I'll keep the my.ini in the datadir, to lessen complexity.
C:\mysql>notepad data\my.ini
C:\mysql>type data\my.ini
[mysqld]
datadir=c:/mysql/data
tmpdir=c:/mysql/tmp
log-error=c:/mysql/data/mysql.err
port=3306
slow-start-timeout=0
log-warnings=2


6.  Install mysqld as a service. This part seems confusing due to the options used. To keep things clear, I use a specific service name for each version, so that I know what it is later. You must be running cmd.exe as an administrative user to do this.

C:\mysql>cd mysql-5.6.20-win32
C:\mysql\mysql-5.6.20-win32>cd bin
C:\mysql\mysql-5.6.20-win32\bin>mysqld.exe --install MySQL_5620 --defaults-file=c:/mysql/data/my.ini --local-service
Service successfully installed.
C:\mysql\mysql-5.6.20-win32\bin>cd..\..\

7.   Start the service.

C:\mysql>sc start MySQL_5620
SERVICE_NAME: MySQL_5620
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 2  START_PENDING
                                (NOT_STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x1
        WAIT_HINT          : 0x1f40
        PID                : 4352
        FLAGS              :
If you have larger InnoDB settings, give it a few seconds or minutes to start, then check it.
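If you want to confirm the service state or inspect the error log before connecting, something like the following works (service name and paths as used above):

C:\mysql>sc query MySQL_5620
C:\mysql>type data\mysql.err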

C:\mysql>mysql-5.6.20-win32\bin\mysql.exe --no-defaults -h127.0.0.1 -uroot -P3306
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.20 MySQL Community Server (GPL)
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> \s
--------------
mysql-5.6.20-win32\bin\mysql.exe  Ver 14.14 Distrib 5.6.20, for Win32 (x86)
Connection id:          1
Current database:
Current user:           root@localhost
SSL:                    Not in use
Using delimiter:        ;
Server version:         5.6.20 MySQL Community Server (GPL)
Protocol version:       10
Connection:             127.0.0.1 via TCP/IP
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    cp850
Conn.  characterset:    cp850
TCP port:               3306
Uptime:                 2 min 37 sec
Threads: 1  Questions: 5  Slow queries: 0  Opens: 67  Flush tables: 1  Open tables: 60  Queries per second avg: 0.031
--------------
The way I personally stay sane with >50 versions on my machine is to follow simple rules:
  • no MySQL products 'installed' on my work machine.
  • no global my.cnf or my.ini lurking around.
  • always use --no-defaults when running mysql programs, in case I broke the previous rule.
  • no mysql programs in the path.

Note: the manual pages cover everything here, and even more verbosely (see the "Starting MySQL as a Windows Service" section of the manual).
The reason I prefer this method is that upgrades are generally easier. You simply download the next version (e.g. 5.6.21), extract it, delete the existing MySQL_5620 service, create a new MySQL_5621 service using the same command, and run mysql_upgrade once it has started.
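For example, a rough sketch of that upgrade, assuming a hypothetical mysql-5.6.21-win32 zip extracted alongside the old version:

C:\mysql>sc stop MySQL_5620
C:\mysql>sc delete MySQL_5620
C:\mysql>mysql-5.6.21-win32\bin\mysqld.exe --install MySQL_5621 --defaults-file=c:/mysql/data/my.ini --local-service
C:\mysql>sc start MySQL_5621
C:\mysql>mysql-5.6.21-win32\bin\mysql_upgrade.exe --no-defaults -h127.0.0.1 -uroot -P3306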



Paris OpenStack Summit Voting – Percona Submits 16 MySQL Talks

MySQL plays a critical role in OpenStack. It serves as the host database supporting most components such as Nova, Glance, and Keystone and is the most mature guest database in Trove. Many OpenStack operators use Percona open source software including the MySQL drop-in compatible Percona Server and Galera-based Percona XtraDB Cluster as well as tools such as Percona XtraBackup and Percona Toolkit. We see a need in the community to understand how to improve MySQL performance in OpenStack. As a result, Percona submitted 16 presentations for the Paris OpenStack Summit.

Paris OpenStack Summit presentations are chosen by OpenStack member voting. Please vote for our talks by clicking the titles below that interest you. You must be an OpenStack Foundation member to vote. If you aren’t a member, sign up here – it’s free and only takes a minute. The deadline to vote is Wednesday, August 6, 2014!

Paris OpenStack Summit MySQL Talks Submitted by Percona

OpenStack Operations

MySQL Database Operations in the OpenStack World
Speaker: Stéphane Combaudon

MySQL High Availability Options for Openstack
Speaker: Stéphane Combaudon

Host and Guest Database Backup and Recovery for OpenStack Ops
Speakers: George Lorch, David Busby

Benchmarking the Different Cinder Storage Backends
Speaker: Peter Boros

MySQL and OpenStack Deep Dive
Speakers: Peter Boros, Jay Pipes (Mirantis)

Trove Performance Tuning for MySQL
Speaker: Alexander Rubin

Schema Management: Versioning and Automation with Puppet and MySQL Utilities
Speaker: Frederic Descamps

Deploying Databases for OpenStack
Speakers: Matt Griffin, Jay Pipes (Mirantis), Amrith Kumar (Tesora), Vinay Joosery (Severalnines)

Related Open Source Software Projects

Introduction to Percona XtraDB Cluster
Speaker: Kenny Gryp

Percona Server Features for OpenStack and Trove Ops
Speakers: George Lorch, Vipul Sabhaya (HP Cloud)

Products, Tools & Services

ClusterControl: Efficient and reliable MySQL Management, Monitoring, and Troubleshooting for OpenStack HA
Speakers: Peter Boros, Vinay Joosery (Severalnines)

Advanced MySQL Performance Monitoring for OpenStack Ops
Speaker: Daniel Nichter

Targeting Apps for OpenStack Clouds

Oars in the Cloud: Virtualization-aware Galera instances
Speaker: Raghavendra Prabhu

ACIDic Clusters: Review of contemporary ACID-compliant databases with synchronous replication
Speaker: Raghavendra Prabhu

Cloud Security

Security: It’s more than just your database you should worry about
Speaker: David Busby

Planning Your OpenStack Project

Infrastructure at Scale
Speaker: Michael Coburn

The Paris OpenStack Summit will offer developers, operators, and service providers valuable insights into OpenStack. The Design Summit sessions will be filled with lively discussions driving OpenStack development, including sessions defining the future of Trove, the DBaaS (database as a service) component near and dear to Percona’s heart. There will also be many valuable presentations in the main Paris OpenStack Summit conference about operating OpenStack, utilizing the latest features, complementary software and services, and real-world case studies.

Thank you for your support. We’re looking forward to seeing many Percona software users at the Paris OpenStack Summit in November.

The post Paris OpenStack Summit Voting – Percona Submits 16 MySQL Talks appeared first on MySQL Performance Blog.



MariaDB: Selective binary logs events

In the first post in a series on MariaDB features we find interesting, we begin with selectively skipping replication of binlog events. This feature is available on MariaDB 5.5 and 10.

By default when using MySQL’s standard replication, all events are logged in the binary log and those binary log events are replicated to all slaves (it’s possible to filter out some schemas). But with this feature, it’s also possible to prevent some events from being applied on the slave(s) even if they are written in the binary log. Having those events in the binary log is still useful for point-in-time recovery.

Indeed, usually when we need to not replicate an event, we set sql_log_bin = 0 and the event is bypassed: neither written into the binlog nor replicated to the slave(s).

So with this new feature, it’s possible to just set a session variable to tag events that will still be written into the binary log but bypassed on demand on some slaves.

And it’s really easy to use, on the master you do:

set skip_replication=1;

and on the slave(s) having replicate_events_marked_for_skip='FILTER_ON_MASTER' or 'FILTER_ON_SLAVE', the events skipped on the master won’t be replicated.

The valid values for replicate_events_marked_for_skip are:

  • REPLICATE (default) : skipped events are replicated on the slave
  • FILTER_ON_SLAVE : events so marked will be skipped on the slave and not replicated
  • FILTER_ON_MASTER : the filtering will be done on the master so the slave won’t even receive it and then save network bandwidth
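
A minimal sketch of the whole setup (the archiving slave is assumed to already be replicating; the table name is hypothetical):

-- on the archiving slave (stopping the slave before changing the variable keeps things safe):
STOP SLAVE;
SET GLOBAL replicate_events_marked_for_skip = 'FILTER_ON_MASTER';
START SLAVE;

-- on the master, around the statements that must not reach the archive:
SET skip_replication = 1;
DELETE FROM history WHERE created < '2013-01-01';
SET skip_replication = 0;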

That’s a cool feature but when this can be very useful?

Use case:

For archiving this can be very interesting. Indeed, most of the time when people are archiving data, they use something like pt-archiver, which deletes the data and copies the removed data to an archive server.

Thanks to this feature, instead of having an archiving server where we copy the deleted data, it’s possible to have a slave where we won’t delete the data. This is much faster (smarter?) and gives us an archiving server that is always up to date. Of course, in this case sql_log_bin = 0 would also have worked (if we ignore point-in-time recovery).
But what about a Galera Cluster? That’s where this feature is really cool: if we had used sql_log_bin = 0 on a Galera Cluster node, all the other nodes would have ignored the delete and the result would be inconsistency between the nodes.

So if you use an asynchronous slave as an archiving server of a Galera Cluster, this feature is really mandatory.

As illustrated below, you can have a MariaDB Galera Cluster node joining a Percona XtraDB Cluster that will be used to delete historical data using pt-archiver:

pt-archiver is started with --set-vars "skip_replication=1"
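
A hedged sketch of such a pt-archiver run (the host, database, and table names are hypothetical):

pt-archiver --source h=cluster-node1,D=mydb,t=history \
  --purge --where "created < NOW() - INTERVAL 1 YEAR" \
  --limit 1000 --set-vars "skip_replication=1"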

The post MariaDB: Selective binary logs events appeared first on MySQL Performance Blog.



Percona Server 5.1.73-14.12 is now available

Percona Server version 5.1.73-14.12

Percona is glad to announce the release of Percona Server 5.1.73-14.12 on July 31st, 2014 (Downloads are available here and from the Percona Software Repositories). Based on MySQL 5.1.73, including all the bug fixes in it, Percona Server 5.1.73-14.12 is now the current stable release in the 5.1 series. All of Percona‘s software is open-source and free; all the details of the release can be found in the 5.1.73-14.12 milestone at Launchpad.

NOTE: Packages for Debian 7.0 (wheezy), Ubuntu 13.10 (saucy) and 14.04 (trusty) are not available for this release due to a conflict with newer packages available in those releases.

Bugs Fixed:

  • Percona Server couldn’t be built with Bison 3.0. Bug fixed #1262439, upstream #71250.
  • The Ignoring Query Cache Comments feature could cause a server crash. Bug fixed #705688.
  • The database administrator password could be seen in plain text when debconf-get-selections was executed. Bug fixed #1018291.
  • If XtraDB variable innodb_dict_size was set, the server could attempt to remove a used index from the in-memory InnoDB data dictionary, resulting in a server crash. Bugs fixed #1250018 and #758788.
  • Ported a fix from MySQL 5.5 for upstream bug #71315 that could cause a server crash if a malformed GROUP_CONCAT function call was followed by another GROUP_CONCAT call. Bug fixed #1266980.
  • MTR tests from binary tarball didn’t work out of the box. Bug fixed #1158036.
  • InnoDB did not handle the cases of asynchronous and synchronous I/O requests completing partially or being interrupted. Bug fixed #1262500 (upstream #54430).
  • Percona Server version was reported incorrectly in Debian/Ubuntu packages. Bug fixed #1319670.
  • Percona Server source files were referencing Maatkit instead of Percona Toolkit. Bug fixed #1174779.
  • The XtraDB version number in univ.i was incorrect. Bug fixed #1277383.

Other bug fixes: #1272732, #1167486, and #1314568.

Release notes for Percona Server 5.1.73-14.12 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.1.73-14.12 is now available appeared first on MySQL Performance Blog.



MySQL: InnoDB transactions, a practical example

Only the MySQL storage engine InnoDB supports transactions. A major benefit of transactions is that it’s possible to roll back changes until a transaction is committed. This example shows you how to write a multi-statement transaction, how to perform a rollback in case of errors, and how to commit a transaction.

Starting transactions

In autocommit mode (the InnoDB default behavior), MySQL does a commit after each SQL statement that was executed without errors. That makes it impossible to roll back a committed statement. When we start a new transaction, the autocommit mode is disabled implicitly by MySQL and enabled again once we commit the transaction:

START TRANSACTION;
-- ...
-- add all insert & update statements here
-- to save the received order
-- ...
-- on success:
COMMIT;
-- in case of failure:
ROLLBACK;

Adding savepoints for partial rollbacks

InnoDB supports a feature called savepoints. A savepoint makes partial rollbacks of a transaction possible:

START TRANSACTION;
--
-- add all insert & update statements here
-- to save the received order
--
-- in case something went wrong:
ROLLBACK;
-- otherwise set a savepoint called "order_added"
SAVEPOINT order_added;
--
-- add a success message somewhere:
-- insert into logging values (123, 'no errors occurred')
--
-- execute other statements...
-- in case of problems, rollback to savepoint:
-- ROLLBACK TO SAVEPOINT order_added;
--
-- the success message is not logged,
-- but the order was saved
--
-- on success:
COMMIT;

Implicit commits / rollbacks

When writing InnoDB transactions, it’s important to keep in mind that there are several statements that cause InnoDB to commit the current transaction; afterwards, a new transaction is started automatically. Here are a few of them (a short demonstration follows the list); for a more complete list, see the “additional information” section below:

  • ALTER TABLE
  • CREATE INDEX
  • DROP …
  • TRUNCATE …
  • LOCK / UNLOCK
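
A minimal sketch of why this matters (the table names are hypothetical):

START TRANSACTION;
INSERT INTO orders (id, amount) VALUES (1, 100);
-- TRUNCATE causes an implicit commit: the INSERT above is now permanent
TRUNCATE TABLE staging_data;
-- this rollback only affects the new, implicitly started transaction
ROLLBACK;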

Additionally important: If you don’t commit a transaction that you have started using START TRANSACTION before you close the connection, InnoDB performs an implicit rollback.

Additional information

Since this example is just an introduction, make sure to take a look at the useful information about InnoDB and transactions in the MySQL documentation.


Testing MySQL repository packages: how we make sure they work for you

Around nine months ago, we launched the MySQL yum repositories for Enterprise Linux and Fedora, followed by apt repos for Debian and Ubuntu back in May. We’re extremely happy that the repos have proved to be a big hit with the community: the monthly number of downloads hit 100K recently and it is still growing […]

Compare and Synchronize with Updated Comparison Tools!

We are happy to announce the update of comparison tools for Oracle and MySQL databases. Each of them has its own features that our team implemented to let you feel like a professional when comparing and synchronizing databases. Choose the product that suits you and enjoy your work.




Beyond the FRM: ideas for a native MySQL Data Dictionary

The frm file has provided long service since the earliest days of MySQL. Now, it is time to replace it with a native InnoDB-based Data Dictionary.

This is a change that has been on our wish list for a long time, as well as on the wish lists of others in the MySQL development community.

For historical context:

  • Every table in MySQL has at least a corresponding .frm file. For example, in the case of MyISAM, these three files constitute a complete backup:

    mytable.MYI
    mytable.MYD
    mytable.frm

  • The .frm file stores information such as column names and data types. It is a binary format which, as Stewart explains, predates MySQL and dates back to 1979 with UNIREG. In addition to the .frm file, there are .trn, .trg and .par files which have been added over time to support triggers and partitioning.

Our motivation to develop a native dictionary stems from a number of issues with the current filesystem-based formats:

  1. The current .frm file predates MySQL's support for transactions, and has a lot of complexity to handle various failure states in replication and crash recovery. For example: Bug#69444. Using a native data dictionary simplifies code and makes handling failure states very simple.

  2. Our information_schema implementation currently suffers, and has been subject to years of complaints. By using a native dictionary, we will be able to implement information_schema as views over real tables, significantly improving the speed of queries.

  3. On a closely related point, we currently build information_schema on top of a series of differing filesystem properties, while attempting to provide the same cross-platform experience. The code to account for filesystem case insensitivity increases code complexity, and ties up developer resources that could be better spent elsewhere.

  4. Aside from the MySQL server's data dictionary, storage engines may also store their own data dictionary. In the case of the InnoDB storage engine, this redundant storage has led to complexity in troubleshooting an out-of-sync data dictionary.

  5. The current non-conformity of the data dictionary (using .frm, .par, .trn and .trg files) stems from a lack of extensibility in the original .frm format. Not having a centralized, extensible repository makes it difficult to incorporate feature requests that require additional metadata to be stored, or to offer new relational objects in the future.

  6. The current format does not support versioning meta-data in such a way that we can use it to assist in the upgrade experience between MySQL versions.

This change is of course in addition to my recent post about storing system tables in InnoDB.

While this change will be transparent for many of our users, we are inviting feedback from the community. Please let us know:

  • If you have a use-case where you interact with the file-based formats directly.
  • What features you want to see in a native data dictionary!

You can either leave a comment, or email me.



Resetting advanced options to defaults in MySQL for Excel

Welcome to another blog post from the MySQL for Excel Team. Today we're going to talk about a new feature included since 1.3.0. Remember, you can always install the latest GA or maintenance version using the MySQL Installer, or download any GA or non-GA version directly from the MySQL Developer Zone.

In this post we will learn how to reset several advanced options to their default values.



Remember that your feedback is very important to us. Please drop us a message with your comments and suggestions for this or other features, and follow us on our social media channels.

Cheers!



Examining the TokuDB MySQL storage engine file structure

As we know, different storage engines in MySQL have different file structures. Every table in MySQL 5.6 must have a .frm file in the database directory matching the table name. But where the rest of the data resides depends on the storage engine.

For MyISAM we have .MYI and .MYD files in the database directory (unless special settings are in place); for InnoDB we might have data stored in the single shared tablespace (typically ibdata1 in the data directory) or as file-per-table (or, better said, file-per-partition), producing a single file with the .ibd extension for each table/partition. TokuDB as of this version (7.1.7) has its own innovative approach to storing the table contents.

I have created a table in the database test with the following structure:

CREATE TABLE `mytable` (
  `id` int(10) unsigned NOT NULL,
  `c` varchar(15) NOT NULL,
  `d` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `c` (`c`),
  KEY `d` (`d`,`c`),
  KEY `d_2` (`d`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1

No files appear in the “test” database directory besides mytable.frm; however, a few files are created in the data directory:

-rwxrwx--x 1 mysql mysql 40960 Jul 29 21:01 _test_mytable_key_c_22f19b0_3_19_B_0.tokudb
-rwxrwx--x 1 mysql mysql 16896 Jul 29 21:02 _test_mytable_key_d_2_22f223b_3_19_B_0.tokudb
-rwxrwx--x 1 mysql mysql 16896 Jul 29 21:01 _test_mytable_key_d_22f1c9a_3_19_B_0.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:01 _test_mytable_main_22f1818_2_19.tokudb
-rwxrwx--x 1 mysql mysql 65536 Jul 29 21:02 _test_mytable_status_22f1818_1_19.tokudb

As you can see, the table is represented by a series of files – the “status” file, the “main” file, which contains the clustered fractal tree index (primary key), plus each index stored in its own file. Note how the files are named – they include the database name, the table name, and the key name (the name you give to the key, not the columns involved). This is followed by something like “22f1818_1_19”, which I assume is a kind of internal TokuDB object identifier.

Note also that (at least on my system) the files are created with the executable bit set. I see no reason for this and it is probably just a minor bug.

Another minor bug (or intended design limitation?) seems to be that TokuDB might lose the actual table name in its file names when you alter the table. For example, as I altered the table to drop one of the keys and add another one named “superkey”, I see the “mytable” name is replaced with “sql_390c_247”, which looks very much like the temporary table which was used to rebuild the table:

-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:15 _test_sql_390c_247_key_c_22f6f7d_3_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:15 _test_sql_390c_247_key_d_22f6f7d_5_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:15 _test_sql_390c_247_key_superkey_22f6f7d_4_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:14 _test_sql_390c_247_main_22f6f7d_2_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:14 _test_sql_390c_247_status_22f6f7d_1_19.tokudb

I like the approach of storing different indexes in different files, as this makes it much easier to drop an index and potentially allows the placement of indexes on different storage if that is desired for some reason. However, putting all tables in the database root is a bad idea – a substantial number of tables, especially with a few indexes each, produces a huge number of files, which makes it inconvenient to work with the database directory (which often contains other files – log files, binary logs etc.) and might push file systems to their limits, or to their performance limits, when dealing with huge numbers of files in a single directory.

Many also like having files in the database directory as it allows, in basic configurations, the use of simple Unix tools such as du to see how much space a given database physically takes.
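
For instance, a rough sketch (assuming a default datadir of /var/lib/mysql and a database named test):

# per-database directories (MyISAM, InnoDB file-per-table) can be measured directly
du -sh /var/lib/mysql/test

# with TokuDB's flat layout you would have to glob on the file name prefix instead
du -shc /var/lib/mysql/_test_*.tokudb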

As with InnoDB and MyISAM, TokuDB creates a separate set of files for each partition, placing them in the same directory. With the same table partitioned by HASH on the primary key with 4 partitions, I observe:

-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p0_key_c_22f9f35_3_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p0_key_d_22f9f35_5_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p0_key_superkey_22f9f35_4_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p0_main_22f9f35_2_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p0_status_22f9f35_1_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p1_key_c_22f9f8e_3_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p1_key_d_22f9f8e_5_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p1_key_superkey_22f9f8e_4_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p1_main_22f9f8e_2_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p1_status_22f9f8e_1_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p2_key_c_22f9fe1_3_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p2_key_d_22f9fe1_5_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p2_key_superkey_22f9fe1_4_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p2_main_22f9fe1_2_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p2_status_22f9fe1_1_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p3_key_c_22f9ffb_3_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p3_key_d_22f9ffb_5_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p3_key_superkey_22f9ffb_4_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p3_main_22f9ffb_2_19.tokudb
-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:22 _test_sql_390c_25b_P_p3_status_22f9ffb_1_19.tokudb

As you can see, “_P_p0” through “_P_p3” suffixes are added to each of the files.

What other files exist beyond those which come from TokuDB tables?

There are a few system files:

-rwxrwx--x 1 mysql mysql 32768 Jul 29 21:16 tokudb.directory
-rwxrwx--x 1 mysql mysql 16384 Jul 17 19:09 tokudb.environment
-rwxrwx--x 1 mysql mysql 1048576 Jul 29 21:22 tokudb.rollback

As their names suggest, tokudb.directory contains metadata about the tables and indexes in the system, tokudb.rollback contains data needed for transaction rollback, and tokudb.environment contains some kind of information about the environment. I did not dig into this – I just see that the environment file is not changed after I have started the instance, unlike the other two, which are changed when the table structure is changed (tokudb.directory) or when the database is modified (tokudb.rollback).

Next you will see

-rw------- 1 mysql mysql 0 Jul 17 19:09 __tokudb_lock_dont_delete_me_data
-rw------- 1 mysql mysql 0 Jul 17 19:09 __tokudb_lock_dont_delete_me_environment
-rw------- 1 mysql mysql 0 Jul 17 19:09 __tokudb_lock_dont_delete_me_logs
-rw------- 1 mysql mysql 0 Jul 17 19:09 __tokudb_lock_dont_delete_me_recovery
-rw------- 1 mysql mysql 0 Jul 17 19:09 __tokudb_lock_dont_delete_me_temp

I see TokuDB really tries to lock the database instance, preventing concurrent access to the directory, with multiple files. I think InnoDB does this in a cleaner way by placing the lock on the system tablespace, which does not require extra files... though the TokuDB team might have a specific reason to do it this way. Maybe Oracle holds a software patent on preventing concurrent database operation by locking a file?

Finally there is the transaction log file:

-rwx------ 1 mysql mysql 37499084 Jul 29 21:34 log000000002593.tokulog25

TokuDB transaction log files are not pre-allocated like InnoDB’s, but they look more similar to MySQL binary logs – they have sequentially increasing file numbers that increment as new files are created, and the file itself grows as new data is written to the log. As I understand it, log rotation happens during a checkpoint and you would typically see only one log file.

There is a lot more for me to learn when it comes to the TokuDB file layout and the purpose of individual files, yet I hope this provides you with a good basic overview of the TokuDB MySQL storage engine.

The post Examining the TokuDB MySQL storage engine file structure appeared first on MySQL Performance Blog.



Chicago MySQL Meetup August 4th

High Availability With MySQL – Jay Janssen of Percona

Monday, August 4, 2014
6:00 PM

GrubHub
111 W. Washington St.
Suite 2100
Chicago, IL

Come join Jay Janssen, Principal Consultant at Percona, as he speaks about High Availability with MySQL. Jay has been with Percona since 2011. Before that, he spent 7 years working for Yahoo in a variety of fields including High Availability architectures, MySQL training, tool building, global server load balancing, multi-datacenter environments, operationalization, and monitoring.

Pizza and beverages will be provided.




Using innodb_large_prefix to avoid ERROR 1071

If you've ever tried to add an index that includes a long varchar column to an InnoDB table in MySQL, you may have seen this error:

ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

The character limit depends on the character set you use. For example if you use latin1 then the largest column you can index is varchar(767), but if you use utf8 then the limit is varchar(255). There is also a separate 3072 byte limit per index. The 767 byte limit is per column, so you can include multiple columns (each 767 bytes or smaller) up to 3072 total bytes per index, but no column longer than 767 bytes. (MyISAM is a little different. It has a 1000 byte index length limit, but no separate column length limit within that).

One workaround for these limits is to only index a prefix of the longer columns.
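For instance, a prefix index keeps you under the limit because only the leading bytes of the column are indexed. A minimal sketch (the table and column names here are hypothetical; with utf8 at up to 3 bytes per character, 255 characters come to at most 765 bytes, which fits under the 767-byte limit):

```
-- index only the first 255 characters of a long utf8 column
ALTER TABLE articles ADD INDEX idx_title_prefix (title(255));
```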

But what if you want to index more than 767 bytes of a column in InnoDB? In that case you should consider using innodb_large_prefix, which was introduced in MySQL 5.5.14 and allows you to include columns up to 3072 bytes long in InnoDB indexes. It does not affect the index limit, which is still 3072 bytes, as quoted in the manual:

The InnoDB internal maximum key length is 3500 bytes, but MySQL itself restricts this to 3072 bytes. This limit applies to the length of the combined index key in a multi-column index.

Read on for details and examples about innodb_large_prefix.

Here are a few prerequisites for using innodb_large_prefix:

  • innodb_file_format must be set to Barracuda; the default file format is still Antelope for backwards compatibility.
  • The table must be created with ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED; the default row format is COMPACT.

You can set both innodb_file_format and innodb_large_prefix dynamically, but you should also set them in my.cnf so they survive a restart.
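For example, a minimal my.cnf sketch of the persistent settings (the option names are as used above; placing them under [mysqld] is assumed):

```
[mysqld]
innodb_file_format = Barracuda
innodb_large_prefix = ON
```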

Here's an example. If I try to create this table with innodb_large_prefix disabled I get an error:

```
mysql> create table if not exists utf8_test (
    ->   day date not null,
    ->   product_id int not null,
    ->   dimension1 varchar(500) character set utf8 collate utf8_bin not null,
    ->   dimension2 varchar(500) character set utf8 collate utf8_bin not null,
    ->   unique index unique_index (day, product_id, dimension1, dimension2)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes
```

If I enable innodb_large_prefix I can create the table successfully:

```
mysql> set global innodb_file_format = BARRACUDA;
Query OK, 0 rows affected (0.00 sec)

mysql> set global innodb_large_prefix = ON;
Query OK, 0 rows affected (0.00 sec)

mysql> create table if not exists utf8_test (
    ->   day date not null,
    ->   product_id int not null,
    ->   dimension1 varchar(500) character set utf8 collate utf8_bin not null,
    ->   dimension2 varchar(500) character set utf8 collate utf8_bin not null,
    ->   unique index unique_index (day, product_id, dimension1, dimension2)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;
Query OK, 0 rows affected (0.02 sec)
```

The examples are similar for latin1, but I can use columns three times as long since it's a single-byte character set.

```
mysql> create table if not exists latin1_test (
    ->   day date not null,
    ->   product_id int not null,
    ->   dimension1 varchar(1500) not null,
    ->   dimension2 varchar(1500) not null,
    ->   unique index unique_index (day, product_id, dimension1, dimension2)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

mysql> set global innodb_file_format = BARRACUDA;
Query OK, 0 rows affected (0.00 sec)

mysql> set global innodb_large_prefix = ON;
Query OK, 0 rows affected (0.00 sec)

mysql> create table if not exists latin1_test (
    ->   day date not null,
    ->   product_id int not null,
    ->   dimension1 varchar(1500) not null,
    ->   dimension2 varchar(1500) not null,
    ->   unique index unique_index (day, product_id, dimension1, dimension2)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC;
Query OK, 0 rows affected (0.02 sec)
```

And here's what happens if I try to create an index longer than 3072 bytes:

```
mysql> create table if not exists long_index_test (
    ->   day date not null,
    ->   product_id int not null,
    ->   dimension1 varchar(1500) not null,
    ->   dimension2 varchar(1500) not null,
    ->   dimension3 varchar(1500) not null,
    ->   unique index unique_index (day, product_id, dimension1, dimension2, dimension3)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC;
ERROR 1071 (42000): Specified key was too long; max key length is 3072 bytes
```



Early Bird Pricing for MySQL Central @ Oracle Open World Extended to August 1st

Register before August 1st for early bird pricing!

Millions of organizations around the world trust MySQL to power their business-critical web, cloud, and embedded applications. Want to learn best practices to develop next-generation applications with MySQL? Join us at MySQL Central @ OpenWorld.

Highlights
Learn new skills
Share and network with the global MySQL community
Hear about new MySQL features directly from Oracle
Get insight on product roadmaps
Have fun




OSCON 2014: Wrap Up

Tue, 2014-07-29 19:36 - Marc Sherwood

Another OSCON has been wrapped up. While this year's event was slightly smaller than last year's, it was still an amazing one. The interesting part was that MySQL and MariaDB seemed to be bigger topics than in years gone by.

Now that I have been back in the office for a few days, I am getting caught up on my overloaded inbox, and have had some time to reflect on the event.

The first thing that comes to mind is that the MariaDB adoption rate is growing rapidly, and we hear about more amazing deployments at each event we attend. This year we met with a number of people who have made the switch from MySQL to MariaDB for their corporate use, as well as a good number of hosting companies who have migrated to MariaDB. We are working on getting some of these stories ready to share, so stay tuned for more on that!

We aim to have fun in the MariaDB booth - how else could we stay semi-sane during those long hours (hint - we are not sane to begin with)? This year we had Nolan from EventToons do caricatures in our booth. Shortly we will have all of these drawings available on our Facebook page. In the meantime, here's one of the world's youngest MariaDB fans (my daughter):

The caricatures kept us quite busy in the MariaDB booth and also gave us a good chance to talk with our guests as well as to have them complete a short survey. Here are some of the results that I found interesting:

I don't think that seeing performance and availability at the top of this list will be a shock to anyone, though I did find it interesting that scale and cost came in tied. This indicates to me that the areas we are focused on with MariaDB are in line with what people are saying they need.

It was also interesting to see that physical servers onsite are still in the majority. I was expecting cloud adoption to be a little higher than what we found here. An interesting follow-up question would have been "do you plan to change where your servers are currently running?". My guess is that most have plans to leverage the cloud to some capacity.

For everyone we met who is using MariaDB today, there were about twice as many using MySQL. The good news is that they now have something worth migrating to.

On the migration topic, one thing we offered people at OSCON was a chance to meet with a MariaDB engineer to talk about how they are using MySQL today, so that the engineer could outline exactly where moving to MariaDB would bring performance gains. We would like to extend this offer to those of you reading this blog. To request a call with an engineer, simply complete this form and one of our engineers will connect with you.

I am very much looking forward to next year's OSCON, but before then we will be at a number of events. To see where we will be next, have a look at our events calendar.

Tags: Business, Community, Open Source Advocacy

About the Author

Marc Sherwood is North American Marketing Manager.


