Planet MySQL

How to restore MySQL data folder using persistent disk snapshots (GCP)

There are already many tools for MySQL to perform backups and consistent restores, but this time I'd like to elaborate on how to take advantage of the kindness of the cloud, which gives us Persistent Disk Snapshots (PSD) to restore our data set in a consistent way.

In this entry, I’ll describe in detail how to restore a backup of our MySQL dataset in a Cloud Compute VM instance making use of persistent disk snapshots and using code examples that can be used as part of an automation strategy. This method will allow us to switch between datasets in less than a minute, as opposed to the several hours that would be required to perform a restore via xtrabackup, or even longer via mysqldump.

Considerations

I’ve been working in Google Cloud Platform (GCP) and chose a simple master/slave replication scenario consisting of two VM instances running Percona Server 5.7 with xtrabackup installed on the slave.

Prior to restoring from a snapshot, we need a backup PSD. I won't explain how to create one here and will assume a backup PSD has already been created.

Regarding permissions, I created a service account for this, included it as part of the Compute Instance Admin (v1) role, and gave it Service Account User permissions. You can find how to create this type of account here.

With the service account created, it's time to add it to our gcloud tool in the VM instance using the gcloud auth activate-service-account command.
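A minimal sketch of that activation step (the service account name and key-file path below are placeholders, not taken from the original setup):

# Activate the service account so that the gcloud commands in the next steps
# run with its permissions; use the JSON key downloaded when the account was created.
gcloud auth activate-service-account restore-sa@my-project.iam.gserviceaccount.com \
  --key-file=/etc/gcloud/restore-sa-key.json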

The idea

We want to restore a backup PSD from our collection and mount it in a folder on our VM. We'll stop MySQL, point it at that new data folder, and start it again. Using this approach, the new data folder will be available in a few seconds (less than a minute) and MySQL will be up and running again with a stable and consistent set of data.

Steps

  • Create folders if they don’t exist

The first thing to do is to ensure we have two data folders with the right naming: one for the current data and another for the new one. For this example, we are going to assume that we already have an existing data directory in /data-a and that we are going to restore our PSD to /data-b. As such, it's important to confirm that /data-a exists and to create /data-b if it doesn't already exist.

mkdir -p /data-a # This will do nothing since /data-a already exists
mkdir -p /data-b
  • Select the target folder to mount the restored snapshot

Using a file count, we can determine which directory contains the current active data set. We can use this to drive the logic that decides where to mount the new data partition coming from the latest PSD.

For this example, I’m taking the most common partition paths:

FILE_COUNT=(`ls /data-a | wc -l`)
if [ $FILE_COUNT -eq 0 ]; then
  NEW_PART_NAME='data-a'
  NEW_PART_PATH='/dev/sdc1'
  OLD_PART_NAME='data-b'
  OLD_PART_PATH='/dev/sdb1'
else
  NEW_PART_NAME='data-b'
  NEW_PART_PATH='/dev/sdb1'
  OLD_PART_NAME='data-a'
  OLD_PART_PATH='/dev/sdc1'
fi
  • Gather info for snapshot restoration to disk

Lastly, we need to decide which snapshot we'll take from our collection, and the disk names. We'll need these variables in the coming steps.

For this example, all the PSDs taken in the past contain the bkp string in the name, so in the grep command I filter based on this string. Based on this criterion, we are going to use the last backup PSD taken.

VM_NAME=`hostname`
SNAP_NAME=(`gcloud compute snapshots list --sort-by ~NAME | grep bkp | head -2 | tail -1 | awk '{print $1}'`)
SNAP_ZONE=(`gcloud compute snapshots list --sort-by ~NAME | grep bkp | head -2 | tail -1 | awk '{print $3}' | cut -d'/' -f 1`)
NEW_DISK_NAME=${VM_NAME}-${NEW_PART_NAME}
OLD_DISK_NAME=${VM_NAME}-${OLD_PART_NAME}
  • Restore snapshot to disk

Having all the information we need to perform a proper PSD restoration, the next step is to create a new disk from the snapshot we selected in the previous step.

gcloud compute disks create ${NEW_DISK_NAME} \
  --source-snapshot ${SNAP_NAME} \
  --zone ${SNAP_ZONE}
  • Attach disk to the current VM

We have a new disk containing our latest backup, but it is not attached to our VM. It's time to change that:

gcloud compute instances attach-disk ${VM_NAME} \
  --disk ${NEW_DISK_NAME} \
  --zone ${SNAP_ZONE}
  • Mount the created disk

Once the disk is attached to our VM instance, we mount it.

mount ${NEW_PART_PATH} /${NEW_PART_NAME}
  • Ensure proper permissions in the new data folder

We have our data backup already on the VM. Ensure the mysql user will be able to access it:

chown -R mysql:mysql /${NEW_PART_NAME}
  • Stop MySQL
systemctl stop mysql
  • Find the datadir variable in my.cnf

Now, with MySQL stopped, it’s time to change the datadir variable and point it to the new data folder.
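The snippet below uses a MY_CNF_FILE variable pointing at the MySQL configuration file. Defining it is not part of the original steps, and the path is an assumption to adjust for your installation:

MY_CNF_FILE=/etc/my.cnf # assumed location; it may also be /etc/mysql/my.cnf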

REPLACEMENT="datadir=/"${NEW_PART_NAME}"/mysql"
sed -i "s|datadir=.*|${REPLACEMENT}|g" ${MY_CNF_FILE}
  • Start MySQL service back again

Time to start MySQL again and ensure it starts properly. If not, a quick look into the error log will point us to the reason:

systemctl start mysql
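If the service does not come up, a quick check could look like the sketch below (the error log path is an assumption; take it from the log_error setting in my.cnf):

systemctl status mysql
tail -n 50 /var/log/mysqld.log # adjust to your log_error path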
  • Edit fstab to make the partition changes permanent

We have our MySQL instance up and running with our new dataset, but we have not made our changes persistent at the OS level. We edit fstab for this:

sed -i "s|${OLD_PART_PATH}|${NEW_PART_PATH}|g" /etc/fstab
sed -i "s|${OLD_PART_NAME}|${NEW_PART_NAME}|g" /etc/fstab
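As an illustration only (the filesystem type and mount options are assumptions, not from the original post), an /etc/fstab entry would change roughly like this after the two sed commands, following the partition mapping from the earlier example:

# before: the old partition mounted on the old data folder
# /dev/sdb1  /data-b  ext4  defaults  0 2
# after: the new partition mounted on the new data folder
# /dev/sdc1  /data-a  ext4  defaults  0 2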
  • Unmount the unused partition

We can now unmount our old data partition without taking any risk. I like to wait 5 seconds when scripting this to ensure any process in the OS has enough time to finish any action on the partition.

sleep 5
umount -l ${OLD_PART_PATH}
  • Detach disk

Considering the old disk is no longer being used by our OS, we can detach it from the VM instance:

gcloud compute instances detach-disk ${VM_NAME} \
  --disk ${OLD_DISK_NAME} \
  --zone ${SNAP_ZONE}
  • Remove disk

And, of course, we don’t need it in our disk collection anymore:

gcloud compute disks delete ${OLD_DISK_NAME} \
  --zone ${SNAP_ZONE} \
  --quiet
  • Remove the data dir

The old data directory can be removed, so we're sure that the next time we run the process it will create the empty folder and use that one:

rm -rf /${OLD_PART_NAME}

Conclusions

Working in the cloud brings many advantages and flexibility. It's a good thing to extend that flexibility to our restoration process, which gives us a huge cut in restoration time when it comes to sizeable data sets.

In addition, creating and keeping a decent number of PSDs in our GCP collection instead of compressed backups seems friendlier for management (GCP has a nice GUI, in my personal opinion) and easier to maintain (you can remove them, sort them by date or size, you name it).
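Since backup snapshots accumulate over time, a small pruning sketch like the one below may help; the name filter and the choice of which snapshot to delete are assumptions, not part of the original procedure:

# List backup snapshots from oldest to newest, then delete a chosen one.
gcloud compute snapshots list --filter="name~bkp" --sort-by=creationTimestamp
gcloud compute snapshots delete OLD_SNAPSHOT_NAME --quiet # placeholder name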

Where you can find MySQL this week

It is our pleasure to remind everyone about three events happening this week that MySQL is part of. Please find them below.

  • PerconaLive, Austin, TX, US, May 28-30, 2019
    • As we previously announced, MySQL is a Silver sponsor with a booth and several MySQL-related talks as follows:
      • Tuesday, May 28, 2019:
        • MySQL Tutorial on: "MySQL InnoDB Cluster in a Nutshell: The Saga Continues with 8.0, the full guide" by Frederic Descamps & Kenny Gryp. Time: 9:30-12:30pm.
      • Wednesday, May 29:
        • 11:00-11:50, Room: Hill Country A: "Complex Stateful Applications Made Easier with Kubernetes" by Patrick Galbraith, the Principal Member of MySQL development
        • 11:00-11:50, Room: Texas 1: "InnoDB Management and Scalability Improvements in MySQL 8.0" by Sunny Bains, the MySQL SW Development Director
        • 11:55-12:45, Room: Texas 1: "New Features in MySQL 8.0 Replication" by Luis Soares, the MySQL SW/Replication Development Director  
        • 14:00-14:50, Room: Texas 1: "MySQL 8.0 Performance: Scalability & Benchmarks" by Dimitri Kravtchuk, the MySQL Performance Architect
        • 17:10-17:35, Room: Texas 7: "LATERAL Derived Tables in MySQL 8.0" by Norvald Ryeng, the Senior SW/Optimizer Development Manager
        • 17:40-18:05, Room: Texas 7: "MySQL Shell: The Best DBA tool? How to Use the MySQL Shell as a Framework for DBAs" by Frederic Descamps, the MySQL Community Manager
      • Thursday, May 30:
        • 14:00-14:50, Room: Texas 1: "Automatic Upgrade and New Error Logging in MySQL 8.0" by Stale Deraas, the MySQL SW Development Director
        • 14:55-15:45, Room: Texas 1: "MySQL Group Replication: The Magic Explained v2" by Frederic Descamps, the MySQL Community Manager.
  • Texas Linux Fest, Austin, TX, US, May 31-Jun 1, 2019
    • MySQL is a Silver sponsor with a booth & MySQL talks (below) given by David Stokes, the MySQL Community Manager
      • "An Intro to MySQL", scheduled for Friday, May 31 @2:00-5:00pm,
      • "New Features in MySQL for Developers", scheduled for Saturday, June 1 @10:00-10:45am
      • "How to give back to Open Source", scheduled for Saturday, June 1 @4:30-5:15
  • OpenSource Conference, Hokkaido, Japan, May 31-Jun 1, 2019
    • As we already announced, MySQL is a Gold sponsor of this open source show. Our local MySQL team is going to staff the MySQL booth in the expo area, as well as giving MySQL talks as follows:
      • "MySQL Update + MySQL InnoDB Cluster" given by Daisuke Inagaki, the MySQL Principal Solutions Engineer. His talk is being held on May 31 @13:55.
      • "MySQL - latest information & peripheral information" given by Kei Sakai & Yoku0825, the Vice Presidents of MyNA (MySQL Nippon Association). The talk is scheduled for June 1 @11:00.

 

MySQL at the DOAG Datenbank Conference!

Meet Carsten and Henry from the MySQL Team Germany at DOAG Datenbank 2019 (#DOAGDB19) on June 3 and 4 in Düsseldorf.

We will be there with a small booth, and on Monday from 14:15 to 15:00 Carsten will give a talk on "MySQL, ein Einstieg in die 'andere' Datenbank von Oracle" (MySQL, an introduction to the 'other' database from Oracle) in room Amsterdam 2. The full program for Monday and Tuesday can be viewed here.

If you have any questions beforehand, feel free to contact Henry by e-mail [henry.kroger(at)oracle(dot)com].

We look forward to seeing you.

    Introducing MyHoard, your single solution to MySQL backups and restoration

    We at Aiven are big believers in Open Source. Not only do we provide Open Source databases as a service, we are contributors to several of them and to many related projects. Additionally, we open source the in-house tools we've developed to provide Aiven services whenever we can.

    MySQL Support Engineer's Chronicles, Issue #10

    As promised, I am trying to write one blog post in this series per week. So, even though writing about InnoDB row formats took a lot of time and effort this weekend, I still plan to summarize my findings, questions, discussions, bugs and links I've collected over this week.

    I've shared two links this week on Facebook that got a lot of comments (unlike links to my typical blog posts). The first one was to Marko Mäkelä's blog post at MariaDB.com, "InnoDB Quality Improvements in MariaDB Server". I do not see any comments (or any obvious way to comment) there, but the comments I got on Facebook were mostly related to the statement that
    "We are no longer merging new MySQL features with MariaDB..." noted in the text by Mark Callaghan, and to the idea that "InnoDB" is a trademark of Oracle, so using it to refer to a fork (one that is incompatible with the "upstream" InnoDB in too many ways since MariaDB 10.1, probably) is wrong, as stated by Matt Lord and Sunny Bains. People in the comments mostly agree that a new name makes sense (there are more reasons to give it one now than there were in the case of XtraDB by Percona), and we had a lot of nice and funny suggestions on Slack internally (FudDB was not among them; that has been a registered trademark of Miguel Angel Nieto for many years already). We shall see how this may end up, but I would not be surprised by a new name being announced soon. I suggest you read the comments in any case if you have a Facebook account; many of them are interesting.

    The second link was to Kaj Arnö's post at mariadb.org, "On Contributions, Pride and Cockiness". It's worth checking just for Monty's photo there. Laurynas Biveinis stated in the comments that any comparison of the number of pull requests (open and processed) is meaningless when the development model used by the other parties is different (closed, with contributions coming mostly via bug reports, in the case of Oracle, or all changes, external and internal, coming via pull requests, in the case of Percona). MariaDB uses a mix of a kind, where some contributions from contractors come via pull requests, while engineers from MariaDB Corporation work on the GitHub sources of MariaDB Server directly. Anyway (meaningless statistics aside), MariaDB seems to be the easiest target for contributions from the Community at the moment, and nobody argued against that. My followers also agreed that the same workflow for internal and external contributions is the preferred development model in an ideal world.

    This kind of public discussion of (serious and funny) MySQL-related matters on Facebook (along with public discussions of MySQL bugs) makes me think the way I use my Facebook page is proper and good for mankind.

    Now back to notes made while working on Support issues. This week I had to explain one case when a MariaDB server was shut down normally (but unexpectedly for the DBA):
    2019-05-22 10:37:55 0 [Note] /usr/libexec/mysqld (initiated by: unknown): Normal shutdown
    This Percona blog post summarizes different ways to find the process that sent a HUP/KILL/TERM or other signal to the mysqld process. I've successfully used a SystemTap-based solution like the one suggested in that blog post in the past. In this context I find this summary of the ways to force MySQL to fail useful for all kinds of testing. The SELinux manual is also useful to re-read at times.

    This week I've spent a lot of time and some effort trying to reproduce the errors (1942 and/or 1940, if anyone cares) on a Galera node acting as an async replication slave. These efforts ended up with a bug report, MDEV-19572. Surely the idea of replicating MyISAM tables outside of the mysql database to a Galera cluster is bad at multiple levels, but why the error after running normally for a long time? In the process of testing I was reading various remotely related posts, so I checked this and that... I also hit other problems in the process. Like this crash that happened, probably while unintentionally sending some signal to the node:
    190523 17:19:46 [ERROR] mysqld got signal 11 ;
    This could be because you hit a bug. It is also possible that this binary
    or one of the libraries it was linked against is corrupt, improperly built,
    or misconfigured. This error can also be caused by malfunctioning hardware.

    To report this bug, see https://mariadb.com/kb/en/reporting-bugs

    We will try our best to scrape up some info that will hopefully help
    diagnose the problem, but since we have already crashed,
    something is definitely wrong and this may fail.

    Server version: 10.2.23-MariaDB-log
    key_buffer_size=134217728
    read_buffer_size=131072
    max_used_connections=3
    max_threads=153
    thread_count=65544
    It is possible that mysqld could use up to
    key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467240 K  bytes of memory
    Hope that's ok; if not, decrease some variables in the equation.

    Thread pointer: 0x0
    Attempting backtrace. You can use the following information to find out
    where mysqld died. If you see no messages after this, something went
    terribly wrong...
    stack_bottom = 0x0 thread_stack 0x49000
    /home/openxs/dbs/maria10.2/bin/mysqld(my_print_stacktrace+0x29)[0x7f6475eb5b49]
    /home/openxs/dbs/maria10.2/bin/mysqld(handle_fatal_signal+0x33d)[0x7f64759d50fd]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7f6473887330]
    /home/openxs/dbs/maria10.2/bin/mysqld(+0xb3b817)[0x7f6475ebc817]
    /home/openxs/dbs/maria10.2/bin/mysqld(+0xb3b9e6)[0x7f6475ebc9e6]
    /home/openxs/dbs/maria10.2/bin/mysqld(+0xb3bb8a)[0x7f6475ebcb8a]
    /home/openxs/dbs/maria10.2/bin/mysqld(lf_hash_delete+0x61)[0x7f6475ebcfa1]
    /home/openxs/dbs/maria10.2/bin/mysqld(+0x601eed)[0x7f6475982eed]
    include/my_atomic.h:298(my_atomic_storeptr)[0x7f6475983464]
    sql/table_cache.cc:534(tdc_delete_share_from_hash)[0x7f6475811f17]
    sql/table_cache.cc:708(tdc_purge(bool))[0x7f64759351ea]
    sql/sql_base.cc:376(close_cached_tables(THD*, TABLE_LIST*, bool, unsigned long))[0x7f64757c9ec7]
    nptl/pthread_create.c:312(start_thread)[0x7f647387f184]
    /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6472d8c03d]
    The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
    information that should help you find out what is causing the crash.
    I have not yet been able to find the exact same backtrace in any known MariaDB bug, so one day I'll have to try to reproduce this crash as well.

    I try to check some MariaDB ColumnStore issues from time to time, for a change, and this week I ended up reading this KB page while trying to understand how much we can control the placement of data there.

    Finally, for the record, this is the way to "fix" InnoDB statistics if needed (and the need is real, as you can find out from Bug #95507 - "innodb_stats_method is not honored when innodb_stats_persistent=ON", reported by my colleague Sergei Petrunia):
    update mysql.innodb_index_stats set last_update=now(), stat_value=445000000 where database_name='test' and table_name='t1' and index_name='i1' and stat_name='n_diff_pfx01';
    I like to return to familiar nice places and topics, like Regent's Canal or MySQL bugs... Last but not least, MySQL bugs. This week I've subscribed to the following (already "Verified") interesting bug reports (besides the one mentioned above):
    • Bug #95484 - "EXCHANGE PARTITION works wrong/weird with different ROW_FORMAT.". Jean-François Gagné found out that there is a way to have partitions with different row_format values in the same InnoDB table, at least in MySQL 5.7. So why is this not supported officially? See also his Bug #95478 - "CREATE TABLE LIKE does not honour ROW_FORMAT.". It's a week of ROW_FORMAT studies for me, for sure!
    • Bug #95462 - "Data comparison broke in MySQL 8.0.16". It's common knowledge how much I like regression bugs. MySQL 8.0.16 introduced a new one, reported by Raman Haran, probably based on some good and valid intentions. But undocumented changes in behavior in GA versions are hardly acceptable, no matter what the intentions are.
    That's all for now. Some more links to MySQL bugs from me are always available on Twitter.

    On Importing InnoDB Tablespaces and Row Formats

    Let me start with a short summary and then proceed with a long story, code snippets, hexdumps, links and awk functions converted from the source code of MariaDB server. This blog post can be summarized as follows:
    • One can find the row_format used to create a table explicitly in the .frm file (or in the outputs of SHOW CREATE TABLE or SHOW TABLE STATUS). The Internals manual may help to find out where it is stored, and source code reading helps to find the way to interpret the values.
    • For InnoDB tables created without specifying the row_format explicitly, neither a logical backup nor the .frm file itself contains the information about the row format used. There are 4 of them (Redundant, Compact, Dynamic and Compressed). The one used implicitly is defined by the current value of innodb_default_row_format, which may change dynamically.
    • At the .ibd file level there is no (easy) way to distinguish Redundant from Compact; this detail has to come from elsewhere. If the source table's row format has NOT changed, you can find it in information_schema.innodb_sys_tables (or innodb_tables in the case of MySQL 8), or in the output of SHOW TABLE STATUS.
    • There is an easy enough way to check tablespace-level flags in the .ibd file (sample awk functions/script are presented below), and this helps to find out whether the row format was Compressed or Dynamic.
    • So far in basic cases (encryption etc aside) individual .ibd files for InnoDB tables from MariaDB (even 10.3.x) and MySQL 8.0.x are compatible enough.
    • You have to take all the above into account while importing individual tables to do partial restore or copy/move tablespaces from one database to the other.
    • Some useful additional reading and links may be found in MariaDB bug reports MDEV-19523 and MDEV-15049. Yes, reading MariaDB MDEVs may help MySQL DBAs to understand some things better!
    Now the real story.
    I miss London, so I am going to be there on June 13 to participate in the Open Databases Meetup. Should I speak about importing InnoDB tablespaces there?
    * * *
    This is a long enough blog post about a "research" I had to make while working in Support recently. It all started with a question like this in a support issue earlier in May:
    "Is it somehow possible to extract the ROW_FORMAT used from a table in a backup in XtraBackup format?" The context was importing a tablespace for an InnoDB table and error 1808, "Schema mismatch", and the customer hoped to find out the proper format without attempts to import, in some way that could be scripted easily. When one tries to import an .ibd file with a format that does not match the .frm file or the data dictionary content, she gets a very clear message in MariaDB (one that presents all the details) due to the fix in MDEV-16851, but the idea was to avoid the trial and error path entirely.

    There were several ideas on how to proceed. Given the .frm, one could use the mysqlfrm utility (you can still find MySQL Utilities, which are only under Sustaining Support by Oracle, here) to get the full CREATE TABLE from the .frm. But I was sure that just checking ROW_FORMAT should be easier than that. (A later test of the latest mysqlfrm I could get running on Fedora 29 proved that it was a good idea to avoid it, due to some problems I may write about one day.) The fine MySQL Internals Manual clearly describes the .frm file format and shows that at offset 0x28 in the header section we have row_type encoded as one byte:
    0028 1 00 create_info->row_type
    A quick search in the source code ended up with the following defined in sql/handler.h (links refer to MariaDB code, but the idea is clear and the same for MySQL as well):
    enum row_type { ROW_TYPE_NOT_USED=-1, ROW_TYPE_DEFAULT, ROW_TYPE_FIXED,
                    ROW_TYPE_DYNAMIC, ROW_TYPE_COMPRESSED,
                    ROW_TYPE_REDUNDANT, ROW_TYPE_COMPACT, ROW_TYPE_PAGE };
    The rest looked clear at the moment. We should see decimal values from 2 to 5 at offset 0x28 (decimal 40) from the beginning of the .frm file, representing the row formats supported by InnoDB. I quickly created a set of tables with different row formats:
    MariaDB [test]> create table ti1(id int primary key, c1 int) engine=InnoDB row_format=redundant;
    Query OK, 0 rows affected (0.147 sec)

    MariaDB [test]> create table ti2(id int primary key, c1 int) engine=InnoDB row_format=compact;
    Query OK, 0 rows affected (0.145 sec)

    MariaDB [test]> create table ti3(id int primary key, c1 int) engine=InnoDB row_format=dynamic;
    Query OK, 0 rows affected (0.149 sec)

    MariaDB [test]> create table ti4(id int primary key, c1 int) engine=InnoDB row_format=compressed;
    Query OK, 0 rows affected (0.130 sec)

    MariaDB [test]> create table ti5(id int primary key, c1 int) engine=InnoDB;    
    Query OK, 0 rows affected (0.144 sec)
    MariaDB [test]> insert into ti5 values(5,5);
    Query OK, 1 row affected (0.027 sec)
    and checked the content of the .frm files with hexdump:
    [openxs@fc29 maria10.3]$ hexdump -C data/test/ti1.frm | more
    00000000  fe 01 0a 0c 12 00 56 00  01 00 b2 03 00 00 f9 01  |......V.........|
    00000010  09 00 00 00 00 00 00 00  00 00 00 02 21 00 08 00  |............!...|
    00000020  00 05 00 00 00 00 08 00  04 00 00 00 00 00 00 f9  |................|
    ...
    As you can see, we see the expected value 04 for ROW_TYPE_REDUNDANT of table ti1. After that it's easy to come up with some command line to just show the numeric row format, like this:
    [openxs@fc29 server]$ hexdump --skip 40 --length=1 ~/dbs/maria10.3/data/test/ti1.frm | awk '{print $2}'
    0004
    or even better:
    [openxs@fc29 maria10.3]$ hexdump -C data/test/ti1.frm | awk '/00000020/ {print $10}'
    04
    [openxs@fc29 maria10.3]$ hexdump -C data/test/ti2.frm | awk '/00000020/ {print $10}'
    05
    [openxs@fc29 maria10.3]$ hexdump -C data/test/ti3.frm | awk '/00000020/ {print $10}'
    02
    [openxs@fc29 maria10.3]$ hexdump -C data/test/ti4.frm | awk '/00000020/ {print $10}'
    03
    [openxs@fc29 maria10.3]$ hexdump -C data/test/ti5.frm | awk '/00000020/ {print $10}'
    00
    But in the real customer case there was no problem with tables created with an explicit row_format set (assuming the correct .frm was in place). The problem was with tables like ti5 above, those created with the default row format:
    MariaDB [test]> show variables like 'innodb%format';
    +---------------------------+---------+
    | Variable_name             | Value   |
    +---------------------------+---------+
    | innodb_default_row_format | dynamic |
    | innodb_file_format        |         |
    +---------------------------+---------+
    2 rows in set (0.001 sec)
    In the .frm file (and in the SHOW CREATE TABLE output) the format is NOT set; it's the default, 0 (or 0x00 in hex). The problem happens when we try to import such a table into an instance with a different innodb_default_row_format. Consider the following test case:
    [openxs@fc29 maria10.3]$ bin/mysql --socket=/tmp/mariadb.sock -uroot test
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 9
    Server version: 10.3.15-MariaDB Source distribution

    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [test]> create database test2;
    Query OK, 1 row affected (0.000 sec)

    MariaDB [test]> use test2
    Database changed
    MariaDB [test2]> set global innodb_default_row_format=compact;
    Query OK, 0 rows affected (0.000 sec)

    MariaDB [test2]> create table ti0(id int primary key, c1 int) engine=InnoDB;
    Query OK, 0 rows affected (0.165 sec)

    MariaDB [test2]> show create table ti0\G
    *************************** 1. row ***************************
           Table: ti0
    Create Table: CREATE TABLE `ti0` (
      `id` int(11) NOT NULL,
      `c1` int(11) DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    1 row in set (0.000 sec)

    MariaDB [test2]> \! hexdump -C data/test2/ti0.frm | awk '/00000020/ {print $10}'
    00
    In this test we create a new table, ti0, in another database, test2, that has ROW_TYPE_DEFAULT (0) in the .frm file, the same as the test.ti5 table created above. But if we try to import the ti5 tablespace by first exporting it properly in another session:
    MariaDB [test]> flush tables ti5 for export;
    Query OK, 0 rows affected (0.001 sec)
    and then discarding the original test2.ti0 tablespace, copying the .ibd and .cfg files (with proper renaming) and running ALTER TABLE ... IMPORT TABLESPACE:
    MariaDB [test2]> alter table ti0 discard tablespace;
    Query OK, 0 rows affected (0.058 sec)

    MariaDB [test2]> \! cp data/test/ti5.cfg data/test2/ti0.cfg
    MariaDB [test2]> \! cp data/test/ti5.ibd data/test2/ti0.ibd
    MariaDB [test2]> alter table ti0 import tablespace;
    ERROR 1808 (HY000): Schema mismatch (Table flags don't match, server table has 0x1 and the meta-data file has 0x21; .cfg file uses ROW_FORMAT=DYNAMIC)
    we fail with error 1808 (which has all the details about the original table's row format, DYNAMIC, and hex information about the flags that are different). We failed because innodb_default_row_format is now different: it's COMPACT!


    We cannot fool the target server by removing (or not copying) the non-mandatory .cfg file:
    MariaDB [test2]> \! rm data/test2/ti0.cfg
    MariaDB [test2]> alter table ti0 import tablespace;
    ERROR 1808 (HY000): Schema mismatch (Expected FSP_SPACE_FLAGS=0x0, .ibd file contains 0x21.)
    Now we see slightly different text, but the same error 1808. The real row format of an InnoDB table is stored somewhere in the .ibd file. As you can guess, copying the .frm file (as may happen when we copy back files from an Xtrabackup- or mariabackup-based backup to do a partial restore) also does not help - the files had the same row_format anyway, and we verified that. So, the real row format of an InnoDB table is stored somewhere in InnoDB (the data dictionary). When it does not match the one we see in the .ibd file, we get error 1808.

    How do we resolve this error? There are two ideas to explore (assuming we found the real format in the .ibd file somehow):
    1. Try to create target table with proper row_format and then import.
    2. Set innodb_default_row_format properly and create target table without explicit row format set, and then import.
    The first one works, as one can find out (but it will surely end up with a different .frm file than the original table had). Check these:
    MariaDB [test2]> select * from test.ti5;
    +----+------+
    | id | c1   |
    +----+------+
    |  5 |    5 |
    +----+------+
    1 row in set (0,001 sec)

    MariaDB [test2]> alter table ti0 discard tablespace;
    Query OK, 0 rows affected (0,066 sec)

    MariaDB [test2]> \! cp data/test/ti5.ibd data/test2/ti0.ibd
    MariaDB [test2]> alter table ti0 import tablespace;
    ERROR 1808 (HY000): Schema mismatch (Expected FSP_SPACE_FLAGS=0x0, .ibd file contains 0x21.)
    MariaDB [test2]> \! cp data/test/ti5.cfg data/test2/ti0.cfg
    MariaDB [test2]> alter table ti0 import tablespace;
    ERROR 1808 (HY000): Schema mismatch (Table flags don't match, server table has 0x1 and the meta-data file has 0x21; .cfg file uses ROW_FORMAT=DYNAMIC)
    MariaDB [test2]> drop table ti0;
    Query OK, 0 rows affected (0,168 sec)
    So, if you care to understand the flags (we'll work on that below) or care to copy the .cfg file as well, you surely can get the row format of the table. Now let's re-create ti0 with an explicitly defined Dynamic row format and try to import again:
    MariaDB [test2]> create table ti0(id int primary key, c1 int) engine=InnoDB row_format=Dynamic;
    Query OK, 0 rows affected (0,241 sec)

    MariaDB [test2]> alter table ti0 discard tablespace;
    Query OK, 0 rows affected (0,071 sec)

    MariaDB [test2]> \! cp data/test/ti5.ibd data/test2/ti0.ibd
    MariaDB [test2]> alter table ti0 import tablespace;
    Query OK, 0 rows affected, 1 warning (0,407 sec)

    MariaDB [test2]> show warnings\G
    *************************** 1. row ***************************
      Level: Warning
       Code: 1810
    Message: IO Read error: (2, No such file or directory) Error opening './test2/ti0.cfg', will attempt to import without schema verification
    1 row in set (0,000 sec)

    MariaDB [test2]> select * from ti0;
    +----+------+
    | id | c1   |
    +----+------+
    |  5 |    5 |
    +----+------+
    1 row in set (0,001 sec)
    We see that copying the .cfg file is not really mandatory and that an explicit setting of ROW_FORMAT (assuming that the .frm file is NOT copied) works.

    The second idea also surely works (and the customer, in his trial and error attempts, just tried all possible formats until the import was successful). Luckily, from the first error we'll know the original format used for sure:
    MariaDB [test2]> drop table ti0;
    Query OK, 0 rows affected (0.084 sec)

    MariaDB [test2]> set global innodb_default_row_format=dynamic;
    Query OK, 0 rows affected (0.000 sec)

    MariaDB [test2]> create table ti0(id int primary key, c1 int) engine=InnoDB;
    Query OK, 0 rows affected (0.171 sec)

    MariaDB [test2]> alter table ti0 discard tablespace;
    Query OK, 0 rows affected (0.049 sec)

    MariaDB [test2]> \! cp data/test/ti5.cfg data/test2/ti0.cfg
    MariaDB [test2]> \! cp data/test/ti5.ibd data/test2/ti0.ibd
    MariaDB [test2]> alter table ti0 import tablespace;
    Query OK, 0 rows affected (0.307 sec)

    MariaDB [test2]> select * from ti0;
    +----+------+
    | id | c1   |
    +----+------+
    |  5 |    5 |
    +----+------+
    1 row in set (0.000 sec)

    MariaDB [test2]> show create table ti0\G
    *************************** 1. row ***************************
           Table: ti0
    Create Table: CREATE TABLE `ti0` (
      `id` int(11) NOT NULL,
      `c1` int(11) DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    1 row in set (0.000 sec)
    Now we can proceed with UNLOCK TABLES in that other session where we flushed test.ti5 for export.

    How could we find out the row format to use without trial and error, now that we know that in one specific case the .frm file (or even the CREATE TABLE statement shown by the server or mysqldump) misses it?

    First of all, we could try to save this information (select @@innodb_default_row_format) along with the backup. But that would show the value of this variable at the moment of asking, and it could have been different when the specific table was created. This does not work in the general case.

    We could use SHOW TABLE STATUS also, as follows:
    MariaDB [test]> show create table ti5\G
    *************************** 1. row ***************************
           Table: ti5
    Create Table: CREATE TABLE `ti5` (
      `id` int(11) NOT NULL,
      `c1` int(11) DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    1 row in set (0.001 sec)

    MariaDB [test]> show table status like 'ti5'\G
    *************************** 1. row ***************************
                Name: ti5
              Engine: InnoDB
             Version: 10
          Row_format: Dynamic
    ...
    In the example above the table was created without setting row_format explicitly, but we see the real one used in the output of SHOW TABLE STATUS. So, if we cared enough, this kind of output could be saved when the data were backed up or exported.
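    For example, a minimal sketch (my own addition, not from the original post) that records the effective row format of every InnoDB table alongside a backup could query INFORMATION_SCHEMA.TABLES:
    # Save schema, table name and effective row format for all InnoDB tables.
    mysql -N -e "SELECT table_schema, table_name, row_format
                 FROM information_schema.tables
                 WHERE engine='InnoDB';" > row_formats_at_backup_time.txt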

    Then we could try to get it for each table from the InnoDB data dictionary of the system we get the .ibd files from. In older MySQL versions we'd probably have to dig into the real data dictionary tables on disk, but in any recent MySQL (up to 5.7; 8.0 may be somewhat different due to the new data dictionary) or MariaDB we have a convenient, SQL-based way to get this information. There are two INFORMATION_SCHEMA tables to consider: INNODB_SYS_TABLESPACES and INNODB_SYS_TABLES. The first one is not good enough, as it considers the Compact and Redundant row formats the same (even though the fine MySQL Manual does not say this):
    MariaDB [test]> select * from information_schema.innodb_sys_tablespaces where name like '%ti%';
    +-------+----------------------------+------+----------------------+-----------+---------------+------------+---------------+-----------+----------------+
    | SPACE | NAME                       | FLAG | ROW_FORMAT           | PAGE_SIZE | ZIP_PAGE_SIZE | SPACE_TYPE | FS_BLOCK_SIZE | FILE_SIZE | ALLOCATED_SIZE |
    +-------+----------------------------+------+----------------------+-----------+---------------+------------+---------------+-----------+----------------+
    |     3 | mysql/transaction_registry |   33 | Dynamic              |     16384 |         16384 | Single     |          4096 |    147456 |         147456 |
    |     4 | mysql/gtid_slave_pos       |   33 | Dynamic              |     16384 |         16384 | Single     |          4096 |     98304 |          98304 |
    |     6 | test/ti1                   |    0 | Compact or Redundant |     16384 |         16384 | Single     |          4096 |     98304 |          98304 |
    |     7 | test/ti2                   |    0 | Compact or Redundant |     16384 |         16384 | Single     |          4096 |     98304 |          98304 |
    |     8 | test/ti3                   |   33 | Dynamic              |     16384 |         16384 | Single     |          4096 |     98304 |          98304 |
    |     9 | test/ti4                   |   41 | Compressed           |     16384 |          8192 | Single     |          4096 |     65536 |          65536 |
    |    10 | test/ti5                   |   33 | Dynamic              |     16384 |         16384 | Single     |          4096 |     98304 |          98304 |
    +-------+----------------------------+------+----------------------+-----------+---------------+------------+---------------+-----------+----------------+
    7 rows in set (0.000 sec)
    The second one works perfectly:
    MariaDB [test2]> select * from information_schema.innodb_sys_tables where name like '%ti%';
    +----------+----------------------------+------+--------+-------+------------+---------------+------------+
    | TABLE_ID | NAME                       | FLAG | N_COLS | SPACE | ROW_FORMAT | ZIP_PAGE_SIZE | SPACE_TYPE |
    +----------+----------------------------+------+--------+-------+------------+---------------+------------+
    |       19 | mysql/gtid_slave_pos       |   33 |      7 |     4 | Dynamic    |             0 | Single     |
    |       18 | mysql/transaction_registry |   33 |      8 |     3 | Dynamic    |             0 | Single     |
    |       21 | test/ti1                   |    0 |      5 |     6 | Redundant  |             0 | Single     |
    |       22 | test/ti2                   |    1 |      5 |     7 | Compact    |             0 | Single     |
    |       23 | test/ti3                   |   33 |      5 |     8 | Dynamic    |             0 | Single     |
    |       24 | test/ti4                   |   41 |      5 |     9 | Compressed |          8192 | Single     |
    |       25 | test/ti5                   |   33 |      5 |    10 | Dynamic    |             0 | Single     |
    |       26 | test2/ti0                  |    1 |      5 |    11 | Compact    |             0 | Single     |
    +----------+----------------------------+------+--------+-------+------------+---------------+------------+
    8 rows in set (0.000 sec)
    In the table above I was wondering about the exact values in the FLAG column (note that 33, 0x21 in hex, looks familiar from the error messages in the previous examples). The MySQL Manual says just this:
    "A numeric value that represents bit-level information about tablespace format and storage characteristics."
    MariaDB's KB page is now way more detailed after my bug report, MDEV-19523, was closed. See the link for the details, or check the code of the i_s_dict_fill_sys_tables() function if you want to interpret the data properly:
    /**********************************************************************//**
    Populate information_schema.innodb_sys_tables table with information
    from SYS_TABLES.
    @return 0 on success */
    static
    int
    i_s_dict_fill_sys_tables(
    /*=====================*/
        THD*        thd,        /*!< in: thread */
        dict_table_t*    table,        /*!< in: table */
        TABLE*        table_to_fill)    /*!< in/out: fill this table */
    {
        Field**          fields;
        ulint            compact = DICT_TF_GET_COMPACT(table->flags);
        ulint            atomic_blobs = DICT_TF_HAS_ATOMIC_BLOBS(
                                    table->flags);
        const ulint zip_size = dict_tf_get_zip_size(table->flags);
        const char*        row_format;

        if (!compact) {
            row_format = "Redundant";
        } else if (!atomic_blobs) {
            row_format = "Compact";
        } else if (DICT_TF_GET_ZIP_SSIZE(table->flags)) {
            row_format = "Compressed";
        } else {
            row_format = "Dynamic";
        }
    ... Another part of the code shows how the checks above are performed:
    #define DICT_TF_GET_COMPACT(flags) \
            ((flags & DICT_TF_MASK_COMPACT) \
            >> DICT_TF_POS_COMPACT)
    /** Return the value of the ZIP_SSIZE field */
    #define DICT_TF_GET_ZIP_SSIZE(flags) \
            ((flags & DICT_TF_MASK_ZIP_SSIZE) \
            >> DICT_TF_POS_ZIP_SSIZE)
    /** Return the value of the ATOMIC_BLOBS field */
    #define DICT_TF_HAS_ATOMIC_BLOBS(flags) \
            ((flags & DICT_TF_MASK_ATOMIC_BLOBS) \
            >> DICT_TF_POS_ATOMIC_BLOBS)
    ...
    We still miss the masks and flag widths to double-check (in the same storage/innobase/include/dict0mem.h file):
    /** Width of the COMPACT flag */
    #define DICT_TF_WIDTH_COMPACT        1

    /** Width of the ZIP_SSIZE flag */
    #define DICT_TF_WIDTH_ZIP_SSIZE        4

    /** Width of the ATOMIC_BLOBS flag.  The ROW_FORMAT=REDUNDANT and
    ROW_FORMAT=COMPACT broke up BLOB and TEXT fields, storing the first 768 bytes
    in the clustered index. ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPRESSED
    store the whole blob or text field off-page atomically.
    Secondary indexes are created from this external data using row_ext_t
    to cache the BLOB prefixes. */
    #define DICT_TF_WIDTH_ATOMIC_BLOBS    1

    ...

    /** Zero relative shift position of the COMPACT field */
    #define DICT_TF_POS_COMPACT        0
    /** Zero relative shift position of the ZIP_SSIZE field */
    #define DICT_TF_POS_ZIP_SSIZE        (DICT_TF_POS_COMPACT        \
                        + DICT_TF_WIDTH_COMPACT)
    /** Zero relative shift position of the ATOMIC_BLOBS field */
    #define DICT_TF_POS_ATOMIC_BLOBS    (DICT_TF_POS_ZIP_SSIZE        \
    + DICT_TF_WIDTH_ZIP_SSIZE)
    If we do some basic math we can find out that DICT_TF_POS_ZIP_SSIZE is 1 and DICT_TF_POS_ATOMIC_BLOBS is 5, etc. The masks are defined as:
    /** Bit mask of the COMPACT field */
    #define DICT_TF_MASK_COMPACT                \
            ((~(~0U << DICT_TF_WIDTH_COMPACT))    \
            << DICT_TF_POS_COMPACT)
    /** Bit mask of the ZIP_SSIZE field */
    #define DICT_TF_MASK_ZIP_SSIZE                \
            ((~(~0U << DICT_TF_WIDTH_ZIP_SSIZE))    \
            << DICT_TF_POS_ZIP_SSIZE)
    /** Bit mask of the ATOMIC_BLOBS field */
    #define DICT_TF_MASK_ATOMIC_BLOBS            \
            ((~(~0U << DICT_TF_WIDTH_ATOMIC_BLOBS))    \
            << DICT_TF_POS_ATOMIC_BLOBS)
    Basically we have what we need now: bit positions and masks. We can create a function that returns the row format based on the decimal value of the flags. Consider this primitive awk example:
    openxs@ao756:~/dbs/maria10.3$ awk '
    > function DICT_TF_GET_COMPACT(flags) {
    >   return rshift(and(flags, DICT_TF_MASK_COMPACT), DICT_TF_POS_COMPACT);
    > }
    >
    > function DICT_TF_GET_ZIP_SSIZE(flags)
    > {
    >   return rshift(and(flags, DICT_TF_MASK_ZIP_SSIZE), DICT_TF_POS_ZIP_SSIZE);
    > }
    >
    > function DICT_TF_HAS_ATOMIC_BLOBS(flags)
    > {
    >   return rshift(and(flags, DICT_TF_MASK_ATOMIC_BLOBS), DICT_TF_POS_ATOMIC_BLOBS);
    > }
    >
    > function innodb_row_format(flags)
    > {
    >     compact = DICT_TF_GET_COMPACT(flags);
    >     atomic_blobs = DICT_TF_HAS_ATOMIC_BLOBS(flags);
    >
    >     if (!compact) {
    >         row_format = "Redundant";
    >     } else if (!atomic_blobs) {
    >         row_format = "Compact";
    >     } else if (DICT_TF_GET_ZIP_SSIZE(flags)) {
    >         row_format = "Compressed";
    >     } else {
    >         row_format = "Dynamic";
    >     }
    >     return row_format;
    > }
    >
    > BEGIN {
    > DICT_TF_WIDTH_COMPACT=1;
    > DICT_TF_WIDTH_ZIP_SSIZE=4;
    > DICT_TF_WIDTH_ATOMIC_BLOBS=1;
    >
    > DICT_TF_POS_COMPACT=0;
    > DICT_TF_POS_ZIP_SSIZE=DICT_TF_POS_COMPACT + DICT_TF_WIDTH_COMPACT;
    > DICT_TF_POS_ATOMIC_BLOBS=DICT_TF_POS_ZIP_SSIZE + DICT_TF_WIDTH_ZIP_SSIZE;
    >
    > DICT_TF_MASK_COMPACT=lshift(compl(lshift(compl(0), DICT_TF_WIDTH_COMPACT)),DICT_TF_POS_COMPACT);
    > DICT_TF_MASK_ZIP_SSIZE=lshift(compl(lshift(compl(0), DICT_TF_WIDTH_ZIP_SSIZE)),DICT_TF_POS_ZIP_SSIZE);
    > DICT_TF_MASK_ATOMIC_BLOBS=lshift(compl(lshift(compl(0), DICT_TF_WIDTH_ATOMIC_BLOBS)),DICT_TF_POS_ATOMIC_BLOBS);
    >
    > print innodb_row_format(0), innodb_row_format(1), innodb_row_format(33), innodb_row_format(41);
    > }'
    Redundant Compact Dynamic Compressed
    openxs@ao756:~/dbs/maria10.3$
    So, we know how to get the format based on decimal values of the flags. The remaining subtask is to find out where the flags are in the .ibd file. Instead of digging into the code (server/storage/innobase/include/fsp0fsp.h etc.) one can just check this great blog post by Jeremy Cole to find out that the flags are at bytes 54-57, at a 16-byte offset after the FIL header, which is 38 bytes long (4 bytes starting from hex offset 0x36) in the .ibd file. These bytes are highlighted in bold below:

    [openxs@fc29 maria10.3]$ hexdump -C data/test/ti2.ibd | more
    00000000  5d 4f 09 aa 00 00 00 00  00 00 00 00 00 00 00 00  |]O..............|
    00000010  00 00 00 00 00 19 11 ee  00 08 00 00 00 00 00 00  |................|
    00000020  00 00 00 00 00 07 00 00  00 07 00 00 00 00 00 00  |................|
    00000030  00 06 00 00 00 40 00 00  00 00 00 00 00 04 00 00  |.....@..........|
    ...

    [openxs@fc29 maria10.3]$ hexdump -C data/test/ti4.ibd | more
    00000000  6c cd 19 15 00 00 00 00  00 00 00 00 00 00 00 00  |l...............|
    00000010  00 00 00 00 00 19 44 9f  00 08 00 00 00 00 00 00  |......D.........|
    00000020  00 00 00 00 00 09 00 00  00 09 00 00 00 00 00 00  |................|
    00000030  00 06 00 00 00 40 00 00  00 29 00 00 00 04 00 00  |.....@...)......|
    ...

    [openxs@fc29 maria10.3]$ hexdump -C data/test/ti5.ibd | more
    00000000  d8 21 6d 2e 00 00 00 00  00 00 00 00 00 00 00 00  |.!m.............|
    00000010  00 00 00 00 00 19 62 9d  00 08 00 00 00 00 00 00  |......b.........|
    00000020  00 00 00 00 00 0a 00 00  00 0a 00 00 00 00 00 00  |................|
    00000030  00 06 00 00 00 40 00 00  00 21 00 00 00 04 00 00  |.....@...!......|
    ...
    As you can see we have hex values 0x00, 0x29 (41 decimal), 0x21 (33 decimal) etc., and, theoretically, we can find out the exact row_format used (and other details) from that, based on the information presented above. For the row format we need just one byte, and we can get it as follows in hex:
    openxs@ao756:~/dbs/maria10.3$ hexdump -C data/test/t*.ibd | awk '/00000030/ {print $11}'
    21
    openxs@ao756:~/dbs/maria10.3$ hexdump -C data/test/t*.ibd | awk '/00000030/ {flags=strtonum("0x"$11); print flags;}'
    33
    To use the awk function defined above we need to convert hex to decimal, hence a small trick with the strtonum() function. Now, let me put it all together and show that we can apply this to MySQL as well (I checked MariaDB code mostly in the process). Let me create the same tables ti1 ... ti5 in MySQL 8.0.x:
    openxs@ao756:~/dbs/8.0$ bin/mysqld_safe --no-defaults --basedir=/home/openxs/dbs/8.0 --datadir=/home/openxs/dbs/8.0/data --port=3308 --socket=/tmp/mysql8.sock &
    [1] 31790
    openxs@ao756:~/dbs/8.0$ 2019-05-26T10:55:18.274601Z mysqld_safe Logging to '/home/openxs/dbs/8.0/data/ao756.err'.
    2019-05-26T10:55:18.353458Z mysqld_safe Starting mysqld daemon with databases from /home/openxs/dbs/8.0/data

    openxs@ao756:~/dbs/8.0$ bin/mysql --socket=/tmp/mysql8.sock -uroot test
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 8
    Server version: 8.0.13 Source distribution

    Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> select @@innodb_default_row_format;
    +-----------------------------+
    | @@innodb_default_row_format |
    +-----------------------------+
    | dynamic                     |
    +-----------------------------+
    1 row in set (0,00 sec)

    mysql> create table ti1(id int primary key, c1 int) engine=InnoDB row_format=redundant;
    Query OK, 0 rows affected (0,65 sec)

    mysql> create table ti2(id int primary key, c1 int) engine=InnoDB row_format=compact;
    Query OK, 0 rows affected (0,44 sec)

    mysql> create table ti3(id int primary key, c1 int) engine=InnoDB row_format=dynamic;
    Query OK, 0 rows affected (0,51 sec)

    mysql> create table ti4(id int primary key, c1 int) engine=InnoDB row_format=compressed;
    Query OK, 0 rows affected (0,68 sec)

    mysql> create table ti5(id int primary key, c1 int) engine=InnoDB;
    Query OK, 0 rows affected (0,59 sec)

    mysql> select * from information_schema.innodb_sys_tables where name like 'test/ti%';
    ERROR 1109 (42S02): Unknown table 'INNODB_SYS_TABLES' in information_schema
    mysql> select * from information_schema.innodb_tables where name like 'test/ti%';
    +----------+----------+------+--------+-------+------------+---------------+------------+--------------+
    | TABLE_ID | NAME     | FLAG | N_COLS | SPACE | ROW_FORMAT | ZIP_PAGE_SIZE | SPACE_TYPE | INSTANT_COLS |
    +----------+----------+------+--------+-------+------------+---------------+------------+--------------+
    |     1158 | test/ti1 |    0 |      5 |     6 | Redundant  |             0 | Single     |            0 |
    |     1159 | test/ti2 |    1 |      5 |     7 | Compact    |             0 | Single     |            0 |
    |     1160 | test/ti3 |   33 |      5 |     8 | Dynamic    |             0 | Single     |            0 |
    |     1161 | test/ti4 |   41 |      5 |     9 | Compressed |          8192 | Single     |            0 |
    |     1162 | test/ti5 |   33 |      5 |    10 | Dynamic    |             0 | Single     |            0 |
    +----------+----------+------+--------+-------+------------+---------------+------------+--------------+
    5 rows in set (0,03 sec)
    Now let's combine some shell and awk together:
    openxs@ao756:~/dbs/8.0$ for file in `ls data/test/ti*.ibd`
    > do
    > echo $file
    > hexdump -C $file | awk '
    > function DICT_TF_GET_COMPACT(flags) {
    >   return rshift(and(flags, DICT_TF_MASK_COMPACT), DICT_TF_POS_COMPACT);
    > }
    >
    > function DICT_TF_GET_ZIP_SSIZE(flags)
    > {
    >   return rshift(and(flags, DICT_TF_MASK_ZIP_SSIZE), DICT_TF_POS_ZIP_SSIZE);
    > }
    >
    > function DICT_TF_HAS_ATOMIC_BLOBS(flags)
    > {
    >   return rshift(and(flags, DICT_TF_MASK_ATOMIC_BLOBS), DICT_TF_POS_ATOMIC_BLOBS);
    > }
    >
    > function innodb_row_format(flags)
    > {
    >     compact = DICT_TF_GET_COMPACT(flags);
    >     atomic_blobs = DICT_TF_HAS_ATOMIC_BLOBS(flags);
    >
    >     if (!compact) {
    >         row_format = "Redundant";
    >     } else if (!atomic_blobs) {
    >         row_format = "Compact";
    >     } else if (DICT_TF_GET_ZIP_SSIZE(flags)) {
    >         row_format = "Compressed";
    >     } else {
    >         row_format = "Dynamic";
    >     }
    >     return row_format;
    > }
    >
    > BEGIN {
    > DICT_TF_WIDTH_COMPACT=1;
    > DICT_TF_WIDTH_ZIP_SSIZE=4;
    > DICT_TF_WIDTH_ATOMIC_BLOBS=1;
    >
    > DICT_TF_POS_COMPACT=0;
    > DICT_TF_POS_ZIP_SSIZE=DICT_TF_POS_COMPACT + DICT_TF_WIDTH_COMPACT;
    > DICT_TF_POS_ATOMIC_BLOBS=DICT_TF_POS_ZIP_SSIZE + DICT_TF_WIDTH_ZIP_SSIZE;
    >
    > DICT_TF_MASK_COMPACT=lshift(compl(lshift(compl(0), DICT_TF_WIDTH_COMPACT)),DICT_TF_POS_COMPACT);
    > DICT_TF_MASK_ZIP_SSIZE=lshift(compl(lshift(compl(0), DICT_TF_WIDTH_ZIP_SSIZE)),DICT_TF_POS_ZIP_SSIZE);
    > DICT_TF_MASK_ATOMIC_BLOBS=lshift(compl(lshift(compl(0), DICT_TF_WIDTH_ATOMIC_BLOBS)),DICT_TF_POS_ATOMIC_BLOBS);
    > }
    > /00000030/ {flags=strtonum("0x"$11); print innodb_row_format(flags);}'
    > done
    data/test/ti1.ibd
    Redundant
    data/test/ti2.ibd
    Redundant
    data/test/ti3.ibd
    Dynamic
    data/test/ti4.ibd
    Compressed
    data/test/ti5.ibd
    Dynamic
    openxs@ao756:~/dbs/8.0$
    So, we proved that there is a way (based on some code analysis and scripting) to find out the exact row format that was used to create an InnoDB table based solely on the .ibd file and nothing else, in all cases but one! If you are reading carefully you noted Redundant printed for ti2.ibd as well; we've seen the same in the INNODB_SYS_TABLESPACES table. The flags in the tablespace are the same for both the Redundant and Compact row formats, see this part of the code as well. This seems to be exactly one of the reasons why the .cfg file may be needed when we export a tablespace.

    One day I'll find out and create a followup post. Too much code reading for my limited abilities today...

    How to Prepare for MySQL Certification Exam


    Here I am going to suggest to MySQL DBAs, students, or any newbie how they can prepare for the "Oracle's MySQL 5.x Database Administrator" certification exam.
    Step 1: Lab
    -- Prepare your local MySQL lab environment and make sure to install the respective MySQL version in your lab. (i.e., if you are going to appear for the MySQL 5.6 DBA exam 1Z0-883, then install MySQL 5.6 in your lab)
    -- You can follow my blog posts below to create your local MySQL lab environment.
    How to create CentOS VM using Vagrant
    Install MySQL on CentOS using DBdeployer
    Step 2: Book + HandsOn Experience
    -- If you don't have the book below, then go right now and purchase it. (Please don't judge the suggested book because it is for the 5.0 version whereas you are going to prepare for the MySQL 5.6/5.7 DBA certification exam. Just go ahead and purchase the book. In the end, you will definitely say it was an investment in yourself.)
    MySQL 5.0 Certification Study Guide (Mysql Press) Paperback – 24 Aug 2005 by Paul DuBois (Author), Stefan Hinz (Author), Carsten Pedersen (Author)

    -- Read the above book from the first page to the last page and make sure to test the covered topics on your local lab setup to get hands-on experience.
    Step 3: What's New + HandsOn Experience
    -- Now go to the MySQL reference manual for your exam's MySQL version and find the "What is new in MySQL {version}" reference manual page.
    For example, if I need to appear for the MySQL 5.7 Database Administrator Exam (1Z0-888), then I will find the page below.
    1.4 What Is New in MySQL 5.7
    -- Read the "Features Added in MySQL 5.x" section and its related links in depth, and get hands-on experience with the new features introduced in the new MySQL version.
    *****************************
    I am sure that once you do all the above steps, you will definitely be able to clear the MySQL DBA certification exam.
    Don't go with any shortcut path like exam dumps; by doing this you might be able to clear the exam, but you won't be able to deliver your best in practice in a client's environment, and you won't be able to succeed in your longer career path.

    *** All the best for your certification exam ***


    Photo by Nick Morrison on Unsplash

    Install MySQL on CentOS using DBdeployer

    Install MySQL on CentOS using DBdeployer
    1. Here it is assumed you have a CentOS system/VM. If not, then please find my blog post at this link, where you will get instructions on how you can create a CentOS virtual machine using Vagrant.
    2. Connect to the CentOS VM
    abhinavs-MacBook-Air:centos7-test-vm agupta$ pwd
    /Users/agupta/vagrant_box/centos7-test-vm
    abhinavs-MacBook-Air:centos7-test-vm agupta$ vagrant ssh 
    [vagrant@centos7-test-vm ~]$ sudo su -
    [root@centos7-test-vm ~]#
    3. Run the commands below to install the latest DBdeployer package
    [root@centos7-test-vm ~]# yum -y install wget
    [root@centos7-test-vm ~]# VERSION=1.30.0
    [root@centos7-test-vm ~]# OS=linux
    [root@centos7-test-vm ~]# origin=https://github.com/datacharmer/dbdeployer/releases/download/v$VERSION
    [root@centos7-test-vm ~]# wget $origin/dbdeployer-$VERSION.$OS.tar.gz
    [root@centos7-test-vm ~]# tar -xzf dbdeployer-$VERSION.$OS.tar.gz
    [root@centos7-test-vm ~]# chmod +x dbdeployer-$VERSION.$OS
    [root@centos7-test-vm ~]# sudo mv dbdeployer-$VERSION.$OS /usr/local/bin/dbdeployer
    Note: At the time of writing these steps, the latest DBdeployer version released was 1.30.0. Make sure to check the latest version released on the page below and set the value of "VERSION=XX.XX.XX" in the 2nd command above accordingly.
    https://github.com/datacharmer/dbdeployer/releases
    - Verify the installed dbdeployer version.
    [root@centos7-test-vm ~]# dbdeployer --version
    dbdeployer version 1.30.0
    [root@centos7-test-vm ~]#
    4. Download the latest Percona Server 5.7 binary tarball package from the link below, under the section "Linux - Generic."
    https://www.percona.com/downloads/Percona-Server-5.7/LATEST/binary/tarball/
    Note: If you are confused about which SSL package tarball you should download, please check my blog post on that topic.
    [root@centos7-test-vm ~]# wget https://www.percona.com/downloads/Percona-Server-5.7/Percona-Server-5.7.25-28/binary/tarball/Percona-Server-5.7.25-28-Linux.x86_64.ssl101.tar.gz
    5. Unpack the tarball package using DBdeployer.
    [root@centos7-test-vm ~]# pwd
    /root
    [root@centos7-test-vm ~]# ls -lth Percona*tar.gz -rw-r--r--. 1 root root 408M May 22 00:14 Percona-Server-5.7.25-28-Linux.x86_64.ssl101.tar.gz
    [root@centos7-test-vm ~]# mkdir -p /root/opt/mysql
    [root@centos7-test-vm ~]# dbdeployer unpack --prefix=ps Percona-Server-5.7.25-28-Linux.x86_64.ssl101.tar.gz
    Note: Above I used "--prefix=ps" to mark the binary directory explicitly as a Percona Server (PS) tarball. Using a prefix such as maria/oracle/ps is helpful to distinguish between tarball packages from different providers installed on the same system.
    Using the command below, you can list the tarball binary packages installed/available for dbdeployer.
    [root@centos7-test-vm ~]# dbdeployer versions
    Basedir: /root/opt/mysql
    ps5.7.25
    [root@centos7-test-vm ~]#
    Here you can see we have Percona Server 5.7.25 binary packages available for DBdeployer. 
    6. Create a single MySQL instance using the Percona Server 5.7.25 binary package. 
    [root@centos7-test-vm ~]# dbdeployer deploy single ps5.7.25
    Creating directory /root/sandboxes
    Database installed in $HOME/sandboxes/msb_ps5_7_25
    run 'dbdeployer usage single' for basic instructions
    .. sandbox server started
    [root@centos7-test-vm ~]#
    7. To access MySQL, go to the sandbox directory, which is "/root/sandboxes/msb_ps5_7_25", and then run the command below.
    [root@centos7-test-vm msb_ps5_7_25]# pwd
    /root/sandboxes/msb_ps5_7_25
    [root@centos7-test-vm msb_ps5_7_25]# sh use Welcome to the MySQL monitor.  Commands end with ; or \g. Your MySQL connection id is 3 Server version: 5.7.25-28 Percona Server (GPL), Release 28, Revision c335905
    Copyright (c) 2009-2019 Percona LLC and/or its affiliates Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    mysql [localhost:5725] {msandbox} ((none)) > \s
    --------------
    /root/opt/mysql/ps5.7.25/bin/mysql  Ver 14.14 Distrib 5.7.25-28, for Linux (x86_64) using  6.2

    Connection id:          3
    Current database:
    Current user:           msandbox@localhost
    SSL:                    Not in use
    Current pager:          stdout
    Using outfile:          ''
    Using delimiter:        ;
    Server version:         5.7.25-28 Percona Server (GPL), Release 28, Revision c335905
    Protocol version:       10
    Connection:             Localhost via UNIX socket
    Server characterset:    latin1
    Db     characterset:    latin1
    Client characterset:    utf8
    Conn.  characterset:    utf8
    UNIX socket:            /tmp/mysql_sandbox5725.sock
    Uptime:                 3 min 42 sec

    Threads: 1  Questions: 26  Slow queries: 0  Opens: 111  Flush tables: 1  Open tables: 104  Queries per second avg: 0.117
    --------------
    mysql [localhost:5725] {msandbox} ((none)) > 
    8. Here below are a couple of useful commands to play with your MySQL instance.  [root@centos7-test-vm msb_ps5_7_25]# sh status msb_ps5_7_25 on
    [root@centos7-test-vm msb_ps5_7_25]# sh stop stop /root/sandboxes/msb_ps5_7_25
    [root@centos7-test-vm msb_ps5_7_25]# sh start . sandbox server started
    [root@centos7-test-vm msb_ps5_7_25]# sh restart stop /root/sandboxes/msb_ps5_7_25 . sandbox server started 
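    If you want to keep track of what you have deployed, or clean up when you are finished, dbdeployer also ships a couple of administrative commands. This is only a minimal sketch; double-check the exact syntax against "dbdeployer --help" for your installed version.
    [root@centos7-test-vm msb_ps5_7_25]# dbdeployer sandboxes            # list all deployed sandboxes
    [root@centos7-test-vm msb_ps5_7_25]# dbdeployer delete msb_ps5_7_25  # remove this sandbox when you are done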





    The Important Role of a Tungsten Rollback Error

    The Question Recently, a customer asked us:

    What is the meaning of this error message found in trepsvc.log?

    2019/05/14 01:48:04.973 | mysql02.prod.example.com | [east - binlog-to-q-0] INFO pipeline.SingleThreadStageTask Performing rollback of possible partial transaction: seqno=(unavailable)

    Simple Overview The Skinny

    This message is an indication that we are dropping any uncommitted or incomplete data read from the MySQL binary logs due to a pending error.

    The Answer Safety First

    This error is often seen before another error and is an indication that we are rolling back anything uncommitted, for safety. On a master this is normally very little and would likely be internal transactions in the trep_commit_seqno table, for example.

    As you may know with the replicator we always extract complete transactions, and so this particular message is specific to the reading of the MySQL binlog into the internal memory queue (binlog-to-q).

    This queue then feeds into the q-to-thl pipeline. We only want to write a complete transaction into the THL, so anything not completed when a failure like this happens gets rolled back.
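    If you want to see which stage reported the rollback and what error followed it, you can search the replicator log and inspect the per-stage tasks. This is only a sketch: the log path below is an assumption that depends on your installation layout.

    # log location varies by install; /opt/continuent/service_logs/trepsvc.log is an assumption
    grep -A5 "Performing rollback of possible partial transaction" /opt/continuent/service_logs/trepsvc.log | tail -20
    # show per-stage task state (binlog-to-q, q-to-thl, etc.) for the local replication service
    trepctl status -name tasks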

    Summary The Wrap-Up

    In this blog post we discussed the Tungsten Replicator rollback error.

    To learn about Continuent solutions in general, check out https://www.continuent.com/solutions

    The Library Please read the docs!

    For more information about monitoring Tungsten clusters, please visit https://docs.continuent.com.

    Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

    For more information, please visit https://www.continuent.com/solutions

    Want to learn more or run a POC? Contact us.

    Critical Update for Percona Server for MySQL 5.6.44-85.0

    This is a CRITICAL update and the fix mitigates the issues described in CVE-2019-12301. If you upgraded packages on Debian/Ubuntu to 5.6.44-85.0-1, please upgrade to 5.6.44-85.0-2 or later and reset all MySQL root passwords.

     

    Issue

    On 2019-05-18 Percona discovered an issue with the Debian/Ubuntu 5.6.44-85.0-1 packages for Percona Server for MySQL. When a previous version was upgraded to PS 5.6.44-85.0-1 on deb-based systems, the MySQL root password was removed, allowing users to log in to the upgraded server as MySQL root without specifying a password.

     

    Scope

    This issue is limited to users who upgraded with the Debian/Ubuntu package 5.6.44-85.0-1 for Percona Server for MySQL v. 5.6. Newer versions (v. 5.7 and above) and new installations of v. 5.6 (>= 5.6.44-85.0-2) are not affected by this issue.

     

    The 5.6.44-85.0-1 packages were available for 19 hours, starting at 2019-05-17 and removed from the repository upon discovery of the issue. The 5.6.44-85.0-1 packages were replaced in the repository with the 5.6.44-85.0-2 packages on 2019-05-18 at 12:50 pm UTC (see bug #5640).

     

    Although the fixed package no longer removes the MySQL root password,  it cannot restore the previously removed password.

     

    If you downloaded the packages prior to 12:50 pm UTC on 2019-05-18, please update with a newer version and reset all MySQL root passwords on those servers. If you are not sure whether your version is affected, please verify with the script below.

     

    Global Impact

    This impacted a small subset of users who are running a very specific version of Linux + specific version of MySQL. For those in this narrow footprint, please take the steps noted in the Remediation section.

     

    Technical Impact

    All MySQL root user entries had their passwords removed and need to have them set again to cure this issue, after applying the fixed version, 5.6.44-85.0-2 or higher.

     

    Remediation

    To cure the vulnerability reported in CVE-2019-12301, upgrade using the Debian/Ubuntu 5.6.44-85.0-2 package (or newer) and reset all MySQL root passwords. To determine if the residual MySQL root password reset issue was resolved, or to verify whether the root password is empty (which may be common), please deploy this script:

     

    https://gist.github.com/Oneiroi/30fd32261ba8ceb699893c9b20f081c2
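    The gist above is the canonical check, but as a quick manual sanity test you can try an empty-password root login and then reset the password once you are on 5.6.44-85.0-2 or later. This is a sketch, assuming a default socket connection and the MySQL 5.6 SET PASSWORD syntax.

    # if this succeeds without a password, the root password was removed
    mysql -u root -e "SELECT CURRENT_USER();" && echo "WARNING: root login with an empty password succeeded"
    # after upgrading to 5.6.44-85.0-2 or later, set a new root password (5.6 syntax)
    mysql -u root -e "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('ChooseAStrongPassword');"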

     

    Contact

    If you have any questions about this issue or need to report a security concern, please contact Percona’s Security Team via security@percona.com or www.percona.com/security.

    Create CRUD Rest API using Spring Boot and JPA

    This Java REST API tutorial helps you create a CRUD REST API using a Spring Boot application. Spring Boot is the next level of the Spring framework and is based on Java Spring. We will create a Maven-type project and generate a JAR build.

    Spring Boot provides a CLI to create and manage applications. You can set up a Spring Boot project with almost zero configuration.

    In this tutorial, we'll build a RESTful CRUD API for an employee module. This module has endpoints to get all employees, get a single employee record by id, create a record, and delete an employee record by id.

    How To Create a New Spring Boot Project

    Spring Boot provides a web tool to bootstrap an initial Spring Boot application. Go to https://start.spring.io/ and enter the required details.

    Once all the details are entered, click the Generate Project button to generate and download your project. The project is downloaded as a zip file; unzip it and place it in your Java workspace, or import it into
    your favorite IDE.

    Following is the directory structure of our Employee Spring Boot application –

    Spring Boot Main Entry file

    Every Java application has a main entry point. Here, the main entry point is the com/example/restapiexample/SpringBootCrudExampleApplication.java file.

    Create MySQL Table

    We are using MySQL as the database. Let's create a database named 'test' in MySQL and create an employee table in it. We will run the SQL below to create the employee table –

    CREATE TABLE IF NOT EXISTS `employee` (
      `id` int(11) NOT NULL COMMENT 'primary key',
      `employee_name` varchar(255) NOT NULL COMMENT 'employee name',
      `employee_salary` double NOT NULL COMMENT 'employee salary',
      `employee_age` int(11) NOT NULL COMMENT 'employee age'
    ) ENGINE=InnoDB AUTO_INCREMENT=59 DEFAULT CHARSET=latin1 COMMENT='datatable demo table';

    --
    -- Dumping data for table `employee`
    --

    INSERT INTO `employee` (`id`, `employee_name`, `employee_salary`, `employee_age`) VALUES
    (2, 'Garrett Winters', 170750, 63),
    (3, 'Ashton Cox', 86000, 66),
    (4, 'Cedric Kelly', 433060, 22);

    We also inserted some sample data so that we can list employee records.
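    As a small helper, the schema and sample rows can be loaded from the shell. This is a sketch: employee.sql is a hypothetical file holding the statements above.

    # create the 'test' schema and load the employee table plus sample data
    mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS test;"
    mysql -u root -p test < employee.sql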

    Create MySQL Configuration into properties file

    We will store the MySQL configuration parameters in a properties file. Let's create an application.properties file in the resources/ folder. spring-data-jpa is on the classpath and reads the database configuration from the application.properties file.

    server.port=8080

    db.driver: com.mysql.jdbc.Driver
    db.url: jdbc:mysql://127.0.0.1:3306/test
    db.username: root
    db.password:

    # Hibernate
    hibernate.dialect: org.hibernate.dialect.MySQL5Dialect
    hibernate.show_sql: true
    hibernate.hbm2ddl.auto: update
    entitymanager.packagesToScan: com.example.restapiexample

    Create Config File

    We will create a DbConfig.java file in the /restapiexample/configs folder. This file contains the data source configuration. We will add the code below to this file –

    package com.example.restapiexample.configs; import java.util.Properties; import javax.sql.DataSource; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.core.env.Environment; import org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor; import org.springframework.jdbc.datasource.DriverManagerDataSource; import org.springframework.orm.jpa.JpaTransactionManager; import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean; import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter; import org.springframework.transaction.annotation.EnableTransactionManagement; /** * Contains database configurations. */ @Configuration @EnableTransactionManagement @ConfigurationProperties public class DbConfig { // ------------------------ // PUBLIC METHODS // ------------------------ /** * DataSource definition for database connection. Settings are read from * the application.properties file (using the env object). */ @Bean public DataSource dataSource() { DriverManagerDataSource dataSource = new DriverManagerDataSource(); dataSource.setDriverClassName(env.getProperty("db.driver")); dataSource.setUrl(env.getProperty("db.url")); dataSource.setUsername(env.getProperty("db.username")); dataSource.setPassword(env.getProperty("db.password")); return dataSource; } /** * Declare the JPA entity manager factory. */ @Bean public LocalContainerEntityManagerFactoryBean entityManagerFactory() { LocalContainerEntityManagerFactoryBean entityManagerFactory = new LocalContainerEntityManagerFactoryBean(); entityManagerFactory.setDataSource(dataSource); // Classpath scanning of @Component, @Service, etc annotated class entityManagerFactory.setPackagesToScan( env.getProperty("entitymanager.packagesToScan")); // Vendor adapter HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter(); entityManagerFactory.setJpaVendorAdapter(vendorAdapter); // Hibernate properties Properties additionalProperties = new Properties(); additionalProperties.put( "hibernate.dialect", env.getProperty("hibernate.dialect")); additionalProperties.put( "hibernate.show_sql", env.getProperty("hibernate.show_sql")); additionalProperties.put( "hibernate.hbm2ddl.auto", env.getProperty("hibernate.hbm2ddl.auto")); entityManagerFactory.setJpaProperties(additionalProperties); return entityManagerFactory; } /** * Declare the transaction manager. */ @Bean public JpaTransactionManager transactionManager() { JpaTransactionManager transactionManager = new JpaTransactionManager(); transactionManager.setEntityManagerFactory( entityManagerFactory.getObject()); return transactionManager; } /** * PersistenceExceptionTranslationPostProcessor is a bean post processor * which adds an advisor to any bean annotated with Repository so that any * platform-specific exceptions are caught and then rethrown as one * Spring's unchecked data access exceptions (i.e. a subclass of * DataAccessException). */ @Bean public PersistenceExceptionTranslationPostProcessor exceptionTranslation() { return new PersistenceExceptionTranslationPostProcessor(); } // ------------------------ // PRIVATE FIELDS // ------------------------ @Autowired private Environment env; @Autowired private DataSource dataSource; @Autowired private LocalContainerEntityManagerFactoryBean entityManagerFactory; } // class DatabaseConfig

    We have imported the MySQL database settings from the properties file and used them here.

    Create Model File in Spring Boot Application

    We will create the employee model class in the /models folder. Let's create an Employee.java file there.

    package com.example.restapiexample.models; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table; import javax.validation.constraints.NotNull; @Entity @Table(name = "employee") public class Employee { // ------------------------ // PRIVATE FIELDS // ------------------------ @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private int id; @NotNull private String employee_name; @NotNull private int employee_salary; @NotNull private int employee_age; public Employee() { } public Employee(int id) { this.id = id; } public Employee(String employee_name, int employee_salary, int employee_age) { this.employee_name = employee_name; this.employee_salary = employee_salary; this.employee_age = employee_age; } public String getName() { return employee_name; } public int getSalary() { return employee_salary; } public int getAge() { return employee_age; } public void setAge(int age) { this.employee_age = age; } public void setName(String name) { this.employee_name = name; } public void setSalary(int salary) { this.employee_salary = salary; } public void setId(int value) { this.id = value; } public int getId() { return id; } }

    Create DAO Layer Into Spring Boot Appplication

    Let’s create dao layer for spring employee model. We will create EmployeeDao.java file into models/ folder and added below code into this file –

    package com.example.restapiexample.models; import java.util.List; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import javax.transaction.Transactional; import org.springframework.stereotype.Repository; import com.example.restapiexample.models.Employee; @Repository @Transactional public class EmployeeDao { @PersistenceContext private EntityManager entityManager; /** * Return all the employee stored in the database. */ @SuppressWarnings("unchecked") public List<Employee> getAll() { return entityManager.createQuery("from Employee").getResultList(); } public Employee getById(int id) { return entityManager.find(Employee.class, id); } /** * Save the employee in the database. */ public Employee create(Employee emp) { entityManager.persist(emp); int id = emp.getId(); return entityManager.find(Employee.class, id); } /** * delete the employee in the database. */ public Boolean delete(Employee emp) { if (entityManager.contains(emp)) entityManager.remove(emp); else entityManager.remove(entityManager.merge(emp)); return true; } }

    Create Controller Class into Spring Boot

    We have created the model class and the DAO layer and configured JPA using the DbConfig file, so now let's create the controller that will handle all REST endpoints and return responses.

    package com.example.restapiexample.controllers; import com.example.restapiexample.models.Employee; import com.example.restapiexample.models.EmployeeDao; import java.util.List; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.http.HttpHeaders; import org.springframework.http.ResponseEntity; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.PathVariable; import org.springframework.web.bind.annotation.RequestBody; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestMethod; import org.springframework.web.bind.annotation.ResponseBody; @Controller public class EmployeeController { @Autowired EmployeeDao employeeDao; @RequestMapping("/getall") @ResponseBody public List<Employee> getAll() { return employeeDao.getAll(); } @RequestMapping(value = "/get/{id}", method = RequestMethod.GET, consumes="application/json") @ResponseBody public ResponseEntity<Employee> getById(@PathVariable("id") int id) { Employee emp = null; HttpHeaders headers = new HttpHeaders(); headers.add("Content-Type", "application/json"); headers.add("Responded", "EmployeeController"); try { emp = employeeDao.getById(id); } catch(Exception ex) { System.out.println("Employee not found" + ex.getMessage()); return ResponseEntity.accepted().headers(headers).body(emp); } return ResponseEntity.ok().headers(headers).body(emp); } @RequestMapping(value = "/create", method = RequestMethod.POST, consumes="application/json") @ResponseBody public ResponseEntity<Employee> create(@RequestBody Employee postdata) { Employee emp = null; HttpHeaders headers = new HttpHeaders(); headers.add("Content-Type", "application/json"); headers.add("Responded", "EmployeeController"); try { emp = employeeDao.create(postdata); } catch(Exception ex) { System.out.println("Employee not found" + ex.getMessage()); return ResponseEntity.ok().headers(headers).body(emp); } return ResponseEntity.ok().headers(headers).body(emp); } @RequestMapping(value = "/delete/{id}", method = RequestMethod.DELETE, consumes="application/json") @ResponseBody public String delete(@PathVariable("id") int id) { boolean isDeleted = false; HttpHeaders headers = new HttpHeaders(); headers.add("Content-Type", "application/json"); headers.add("Responded", "EmployeeController"); try { Employee emp = new Employee(id); isDeleted = employeeDao.delete(emp); } catch(Exception ex) { System.out.println("Employee not found to delete" + ex.getMessage()); return "Error deleting the Employee: " + ex.toString(); } if(isDeleted) { return "Employee succesfully deleted!"; } return "Error! Employee deleted!"; } }

    We created four endpoints: /getall returns all employee records, /create is used to post data and create a new record, /get/{id} returns a single employee record, and /delete/{id} deletes a record.
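    If you prefer the command line over Postman, the endpoints can be exercised with curl. This is a sketch assuming the application runs on localhost:8080 and that Jackson maps the JSON fields to the bean getters (name, salary, age); adjust the ids and values to your data.

    # list all employees
    curl -H "Content-Type: application/json" http://localhost:8080/getall
    # fetch a single employee by id
    curl -H "Content-Type: application/json" http://localhost:8080/get/2
    # create a new employee
    curl -X POST -H "Content-Type: application/json" -d '{"name":"Jane Doe","salary":120000,"age":35}' http://localhost:8080/create
    # delete an employee by id
    curl -X DELETE -H "Content-Type: application/json" http://localhost:8080/delete/2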

    Get All Records Using Postman

    Get A Single Record Using Postman

    Create A record Using Postman

    Delete A record Using Postman

    The post Create CRUD Rest API using Spring Boot and JPA appeared first on Rest Api Example.

    Shinguz: Dropped Tables with FromDual Backup Manager

    Some applications have the bad habit of creating or dropping tables while a FromDual Backup Manager (bman) backup is running.

    This leads to the following bman error message:

    /opt/mysql/product/5.7.26/bin/mysqldump --user=dba --host=migzm96i --port=3306 --all-databases --quick --single-transaction --flush-logs --triggers --routines --hex-blob --events | tee >(md5sum --binary >/tmp/checksum.23357.md5) | gzip -1
      to Destination: /var/mysql/dumps/mysql96i/daily/bck_mysql96i_full_2019-05-22_06-50-01.sql.gz
    ERROR: /opt/mysql/product/5.7.26/bin/mysqldump command failed (rc=253).
    mysqldump: [Warning] Using a password on the command line interface can be insecure.
    Error: Couldn't read status information for table m_report_builder_cache_157_20190521035354 ()
    mysqldump: Couldn't execute 'show create table `m_report_builder_cache_157_20190521035354`': Table 'totara.m_report_builder_cache_157_20190521035354' doesn't exist (1146)

    There are various strategies to work around this problem:

    • If the table is only temporary create it with the CREATE command as a TEMPORARY TABLE instead of a normal table. This workaround would not work in this case because the table is a caching table which must be available for other connections as well.
    • Try to schedule your application job or your bman job in the way they do not collide. With bman that is quite easy but sometimes not with the application.
    • Try to create the table in its own schema (e.g. cache) which is excluded from bman backup. So you can easily do a bman backup without the cache schema. For example like this: $ bman --target=brman@127.0.0.1:3306 --type=schema --schema=-cache --policy=daily
    • If this strategy also does not work (because you cannot change the application behaviour) try to ignore the table. The underlying command mysqldump knows the option --ignore-table: mysqldump --help ... --ignore-table=name Do not dump the specified table. To specify more than one table to ignore, use the directive multiple times, once for each table. Each table must be specified with both database and table names, e.g., --ignore-table=database.table.
      This option can be used in bman as well. Options to the underlying application are passed through FromDual Backup Manager as follows: $ bman --target=brman@127.0.0.1:3306 --type=full --policy=daily --pass-through='--ignore-table=totara.m_report_builder_cache_157_20190521035354'
    • The problem here is that this table has a timestamp in its name (20190521035354), so the table name changes all the time. Passing wildcards through --ignore-table is not possible; mysqldump does not (yet) support this feature. The only solution we have in this case is to ignore the error message, with the risk that other error messages are ignored as well (but see the sketch after this list for a way to build the ignore list dynamically). This is achieved again with the --pass-through option: $ bman --target=brman@127.0.0.1:3306 --type=full --policy=daily --pass-through='--force'
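    If you are willing to accept a small race window (tables created after the list is built will still break the dump), one more option is to build the --ignore-table list dynamically just before the backup. This is only a sketch and assumes bman passes the whole string through to mysqldump unchanged; check the bman documentation before relying on it.

    # build an --ignore-table option for every timestamped cache table currently present
    IGNORE_OPTS=$(mysql -N -B -e "SELECT CONCAT('--ignore-table=', table_schema, '.', table_name)
      FROM information_schema.tables
      WHERE table_schema = 'totara' AND table_name LIKE 'm\_report\_builder\_cache\_%';" | tr '\n' ' ')
    bman --target=brman@127.0.0.1:3306 --type=full --policy=daily --pass-through="$IGNORE_OPTS"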

    I hope these few tricks help you make your FromDual Backup Manager (bman) backups hassle-free.


    How to Automate Migration from Standalone MySQL to Galera Cluster using Ansible

    Database migrations don’t scale well. Typically you need to perform a great deal of tests before you can pull the trigger and switch from old to new. Migrations are usually done manually, as most of the process does not lend itself to automation. But that doesn’t mean there is no room for automation in the migration process. Imagine setting up a number of nodes with new software, provisioning them with data and configuring replication between old and new environments by hand. This takes days. Automation can be very useful when setting up a new environment and provisioning it with data. In this blog post, we will take a look at a very simple migration - from standalone Percona Server 5.7 to a 3-node Percona XtraDB Cluster 5.7. We will use Ansible to accomplish that.

    Environment Description

    First of all, one important disclaimer - what we are going to show here is only a draft of what you might like to run in production. It does work on our test environment but it may require modifications to make it suitable for your environment. In our tests we used four Ubuntu 16.04 VM’s deployed using Vagrant. One contains standalone Percona Server 5.7, remaining three will be used for Percona XtraDB Cluster nodes. We also use a separate node for running ansible playbooks, although this is not a requirement and the playbook can also be executed from one of the nodes. In addition, SSH connectivity is available between all of the nodes. You have to have connectivity from the host where you run ansible, but having the ability to ssh between nodes is useful (especially between master and new slave - we rely on this in the playbook).

    Playbook Structure

    Ansible playbooks typically share common structure - you create roles, which can be assigned to different hosts. Each role will contain tasks to be executed on it, templates that will be used, files that will be uploaded, variables which are defined for this particular playbook. In our case, the playbook is very simple.

    .
    ├── inventory
    ├── playbook.yml
    ├── roles
    │   ├── first_node
    │   │   ├── my.cnf.j2
    │   │   ├── tasks
    │   │   │   └── main.yml
    │   │   └── templates
    │   │       └── my.cnf.j2
    │   ├── galera
    │   │   ├── tasks
    │   │   │   └── main.yml
    │   │   └── templates
    │   │       └── my.cnf.j2
    │   ├── master
    │   │   └── tasks
    │   │       └── main.yml
    │   └── slave
    │       └── tasks
    │           └── main.yml
    └── vars
        └── default.yml

    We defined a couple of roles - we have a master role, which is intended to do some sanity checks on the standalone node. There is a slave role, which will be executed on one of the Galera nodes to configure it for replication and set up the asynchronous replication. Then we have a role for all Galera nodes and a role for the first Galera node, used to bootstrap the cluster from it. For the Galera roles, we have a couple of templates that we will use to create my.cnf files. We will also use a local .my.cnf to define a username and password. We have a file containing a couple of variables which we may want to customize, such as passwords. Finally, we have an inventory file, which defines the hosts on which we will run the playbook, and the playbook file itself with information on how exactly things should be executed. Let's take a look at the individual bits.

    Inventory File

    This is a very simple file.

    [galera]
    10.0.0.142
    10.0.0.143
    10.0.0.144

    [first_node]
    10.0.0.142

    [master]
    10.0.0.141

    We have three groups, ‘galera’, which contains all Galera nodes, ‘first_node’, which we will use for the bootstrap and finally ‘master’, which contains our standalone Percona Server node.

    Playbook.yml

    The file playbook.yml contains the general guidelines on how the playbook should be executed.

    - hosts: master
      gather_facts: yes
      become: true
      pre_tasks:
        - name: Install Python2
          raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal)
      vars_files:
        - vars/default.yml
      roles:
        - { role: master }

    As you can see, we start with the standalone node and we apply tasks related to the role ‘master’ (we will discuss this in details further down in this post).

    - hosts: first_node
      gather_facts: yes
      become: true
      pre_tasks:
        - name: Install Python2
          raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal)
      vars_files:
        - vars/default.yml
      roles:
        - { role: first_node }
        - { role: slave }

    Second, we go to the node defined in the 'first_node' group and apply two roles: 'first_node' and 'slave'. The former is intended to deploy a single-node PXC cluster, the latter will configure it to work as a slave and set up the replication.

    - hosts: galera
      gather_facts: yes
      become: true
      pre_tasks:
        - name: Install Python2
          raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal)
      vars_files:
        - vars/default.yml
      roles:
        - { role: galera }

    Finally, we go through all Galera nodes and apply ‘galera’ role on all of them.
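    With the inventory and roles in place, the whole migration is driven by a single ansible-playbook run. A minimal sketch (add --ask-vault-pass if you encrypt the variable file as described below):

    ansible-playbook -i inventory playbook.yml --syntax-check   # catch YAML/role mistakes first
    ansible-playbook -i inventory playbook.yml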

    Variables

    Before we begin to look into roles, we want to mention default variables that we defined for this playbook.

    sst_user: "sstuser"
    sst_password: "pa55w0rd"
    root_password: "pass"
    repl_user: "repl_user"
    repl_password: "repl1cati0n"

    As we stated, this is a very simple playbook without many options for customization. You can configure users and passwords, and that is basically it. One gotcha - please make sure that the standalone node's root password matches 'root_password' here, as otherwise the playbook would not be able to connect to it (the playbook could be extended to handle this, but we did not cover that).

    This particular file is not of much value but, as a rule of thumb, you want to encrypt any file which contains credentials. Obviously, this is for security reasons. Ansible comes with ansible-vault, which can be used to encrypt and decrypt files. We will not cover the details here; all you need to know is available in the documentation. In short, you can easily encrypt files using passwords and configure your environment so that the playbooks can be decrypted automatically, using a password from a file or passed by hand.
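    A minimal sketch of that workflow, using the playbook layout above:

    # encrypt the credentials file in place
    ansible-vault encrypt vars/default.yml
    # run the playbook, supplying the vault password interactively (or use --vault-password-file)
    ansible-playbook -i inventory playbook.yml --ask-vault-pass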

    Roles

    In this section we will go over roles that are defined in the playbook, summarizing what they are intended to perform.

    Master role

    As we stated, this role is intended to run a sanity check on the configuration of the standalone MySQL. It will install required packages like percona-xtrabackup-24. It also creates replication user on the master node. A configuration is reviewed to ensure that the server_id and other replication and binary log-related settings are set. GTID is also enabled as we will rely on it for replication.

    First_node role

    Here, the first Galera node is installed. The Percona repository will be configured, my.cnf will be created from the template, and PXC will be installed. We also run some cleanup to remove unneeded users and to create those which will be required (a root user with the password of our choosing, and the user required for SST). Finally, the cluster is bootstrapped using this node. We rely on the empty 'wsrep_cluster_address' as a way to initialize the cluster. This is why later we still execute the 'galera' role on the first node - to swap the initial my.cnf with the final one, containing 'wsrep_cluster_address' with all the members of the cluster. One thing worth remembering - when you create a root user with a password, you have to be careful not to get locked out of MySQL, so that Ansible can execute the remaining steps of the playbook. One way to do that is to provide a .my.cnf with the correct user and password. Another would be to remember to always set the correct login_user and login_password in the 'mysql_user' module.

    Slave role

    This role is all about configuring replication between the standalone node and the single-node PXC cluster. We use xtrabackup to get the data, and we also check the executed GTID in xtrabackup_binlog_info to ensure the backup will be restored properly and that replication can be configured. We also perform a bit of configuration, making sure that the slave node can use GTID replication. There are a couple of gotchas here - it is not possible to run 'RESET MASTER' using the 'mysql_replication' module as of Ansible 2.7.10; it should be possible in 2.8, whenever it comes out. We had to use the 'shell' module to run MySQL CLI commands. When rebuilding a Galera node from an external source, you have to remember to re-create any required users (at least the user used for SST). Otherwise the remaining nodes will not be able to join the cluster.

    Galera role

    Finally, this is the role in which we install PXC on remaining two nodes. We run it on all nodes, the initial one will get “production” my.cnf instead of its “bootstrap” version. Remaining two nodes will have PXC installed and they will get SST from the first node in the cluster.

    Summary

    As you can see, you can easily create a simple, reusable Ansible playbook which can be used for deploying Percona XtraDB Cluster and configuring it to be a slave of a standalone MySQL node. To be honest, for migrating a single server there is probably little point, as doing the same manually will be faster. Still, if you expect to re-execute this process a couple of times, it will definitely make sense to automate it and make it more time-efficient. As we stated at the beginning, this is by no means a production-ready playbook. It is more of a proof of concept, something you may extend to make it suitable for your environment. You can find an archive with the playbook here: http://severalnines.com/sites/default/files/ansible.tar.gz

    We hope you found this blog post interesting and valuable, do not hesitate to share your thoughts.

    Tags:  MySQL galera cluster automation migration galera ansible

    No Ping Will Tear Us Apart – Enabling member auto-rejoin in Group Replication

    With the release of version 8.0.16 of MySQL, we’ve added some features to the Group Replication (GR) plugin, in order to enhance its high-availability capacities. One of these features was the ability to enable a member that has left the group, under certain circumstances, to automatically rejoin it, without need of intervention by the user.…

    Re-evaluating MySQL 8 Query Transformations Capabilities

    I recently stumbled upon a very interesting post by Lukas Eder, where he describes 10 query transformations which do not depend on the database’s cost model. He posted it a couple of years ago, though when I read it, I assumed some portions of it may still be relevant today.

    In the original post, several databases were tested to see if their internal optimizer will be able to automatically re-write the SQL queries and optimize them. In those tests, MySQL under-performed in several of the use cases (the tested version was MySQL 8.0.2, which was released on 2017-07-17).

    Seeing those results, and given the previous evaluation was done almost two years ago, I thought that now can be a good chance to re-evaluate a few of those tests with the latest MySQL 8.0.16 (released on 2019-04-25), and demonstrate EverSQL Query Optimizer‘s capabilities while we are at it.

    We’ll use the Sakila sample database for the demonstrations, and some of the original queries from Lukas’s post.

    The following two tables will be used for these demonstrations:

    CREATE TABLE address (
      address_id INT NOT NULL,
      address VARCHAR(50) NOT NULL,
      CONSTRAINT pk_address PRIMARY KEY (address_id)
    );

    CREATE TABLE customer (
      customer_id INT NOT NULL,
      first_name VARCHAR(45) NOT NULL,
      last_name VARCHAR(45) NOT NULL,
      address_id INT NOT NULL,
      CONSTRAINT pk_customer PRIMARY KEY (customer_id),
      CONSTRAINT fk_customer_address FOREIGN KEY (address_id) REFERENCES address(address_id)
    );

    CREATE TABLE actor (
      actor_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
      first_name VARCHAR(45) NOT NULL,
      last_name VARCHAR(45) NOT NULL,
      last_update TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      PRIMARY KEY (actor_id),
      KEY idx_actor_last_name (last_name)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    JOIN Elimination

    Looking at the query below, you’ll probably notice that the table address isn’t really used anywhere in the query (other than in the join’s ON clause), and has no actual contribution to the query. The columns are selected from the customer table, and there are no predicates for the address table (or any predicates for that matter). The existence of a PRIMARY KEY for address_id in the table address and the FOREIGN KEY for the same column in the table customer should provide the database’s optimizer the confidence to conclude that the join to the table address can be spared.

    SELECT c.first_name, c.last_name FROM customer c JOIN address a ON c.address_id = a.address_id

    Yet, as you can see from the EXPLAIN, MySQL executes the join, without applying the expected query transformation.

    When submitting the same query and schema structure to EverSQL, it will output the following query and recommendation:
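    Since MySQL does not remove the join for us, the manual rewrite (essentially what the recommendation amounts to; this is my paraphrase, not EverSQL's literal output) is simply to drop the address table. A sketch, assuming a client already pointed at the sakila schema:

    mysql sakila -e "EXPLAIN SELECT c.first_name, c.last_name FROM customer c;"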

    In a similar manner, MySQL 8.0.16 will execute the join for all other examples in the Join Elimination section of the original post, so I saw no reason to repeat them here.

    Unneeded Self JOIN

    In the following query, the table actor is joined to itself. It can be proven that a1 = a2, because the tables are joined using the primary key actor_id. Therefore, anything we can do with a2 can actually be done with a1 as well. We can therefore change the references to a2 in the SELECT clause to the same columns in a1, and remove the redundant join to a2.

    SELECT a1.first_name, a2.last_name FROM actor a1 JOIN actor a2 ON a1.actor_id = a2.actor_id;

    MySQL 8.0.16 will execute the join in this case as well, without applying the expected query transformation.

    When submitting the same query and schema structure to EverSQL, it will output the following query and recommendation:
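    Again, the manual rewrite (my paraphrase of the recommendation, not EverSQL's literal output) keeps only a1:

    mysql sakila -e "EXPLAIN SELECT a1.first_name, a1.last_name FROM actor a1;"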

    Original queries and information for this section can be found here.

    Predicate Pushdown

    We should always strive to have our SQL queries process as little data as possible, especially if the filtering is done using indexes. In the following query, we expect MySQL to push the condition from the outer query into both parts of the UNION ALL, to make sure we filter out as much data as we can, as early as possible.

    SELECT *
    FROM (SELECT first_name, last_name, 'actor' type FROM actor
          UNION ALL
          SELECT first_name, last_name, 'customer' type FROM customer) people
    WHERE people.last_name = 'DAVIS';

    As you can see below, the transformation isn’t applied by MySQL, and both tables, actor and customer are scanned in full.

    Submitting the query and schema structure to EverSQL will result in the following query and recommendation:

    When looking at the execution plan of the optimized query, you can see that the indexes are used, and significantly less data is scanned:
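    The manual rewrite pushes the predicate into both branches of the UNION ALL. In the full Sakila schema both actor.last_name and customer.last_name are indexed, so both branches can use an index; this rewrite is my paraphrase, not EverSQL's literal output.

    mysql sakila -e "
    EXPLAIN
    SELECT first_name, last_name, 'actor' AS type FROM actor WHERE last_name = 'DAVIS'
    UNION ALL
    SELECT first_name, last_name, 'customer' AS type FROM customer WHERE last_name = 'DAVIS';"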

    Wrapping up

    Query transformations can be very powerful, and it’s important to understand which of them will be applied automatically by the database and which won’t. In this post, we listed three examples (originally posted by Lukas Eder), in which MySQL 8.0.16 didn’t apply the expected transformations, which eventually resulted in non-optimal execution plans.

    Understanding Cross-Site Replication in a Tungsten Composite Multi-Master Cluster for MySQL, MariaDB and Percona Server

    Overview The Skinny

    In this blog post we will discuss how the managed cross-site replication streams work in a Composite Multi-Master Tungsten Cluster for MySQL, MariaDB and Percona Server.

    Agenda What’s Here?
    • Briefly explore how managed cross-site replication works in a Tungsten Composite Multi-Master Cluster
    • Describe the reasons why the default design was chosen
    • Explain the pros and cons of changing the configuration
    • Examine how to change the configuration of the managed cross-site replicators
    Cross-Site Replication A Very Brief Summary

    In a standard Composite Multi-Master (CMM) deployment, the managed cross-site replicators pull Transaction History Logs (THL) from every remote cluster’s current master node.

    The CMM functionality was introduced in Tungsten Clustering software version 6.0.0

    Cross-Site Replication: In-Depth How Does It All Work?

    Managed cross-site replicators run in addition to the base replication service for the local cluster. The additional replication services (one per remote site) run on all nodes, and have a relay and slaves. The relay runs on the current master node so as not to make things confusing.

    In a Composite Multi-Master cluster, each local cluster must pull data from the other remote sites. There is an additional replication service for each remote site, called a sub-service.

    Each sub-service is named to match the remote site.

    For example, assume a composite cluster with four sites: east, west, north and south.

    On the east cluster, there would be the following three additional replication streams:
    east_from_west
    east_from_north
    east_from_south

    As you can see, each sub-service is named in a way that makes it easy to understand.

    Reading sub-service names is also simple, and so for example “east_from_west” is stated as “I am in cluster east and pulling THL from cluster west”.
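    On any node you can see these sub-services and their state with trepctl. A sketch, assuming the east/west naming from the example above:

    # list every replication service running on this node, including the cross-site sub-services
    shell> trepctl services
    # check one specific cross-site stream, e.g. the one pulling THL from cluster west
    shell> trepctl -service east_from_west status | grep -E 'state|appliedLastSeqno|appliedLatency'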

    Below is a diagram showing just two clusters within the “usa” composite service – east and west:

    Cross-Site Replication Architecture Pros and Cons

    The default architecture is designed so that the relay node for each cluster service gets the most recent information directly from the remote master, reducing the risk of data latency (staleness).

    As of Tungsten Clustering software version 6.0.4, the cross-site replicators within a CMM deployment can be configured to point to slave nodes, and to prefer slave nodes over master nodes during operation.

    This configuration allows the slave nodes to handle the load generated by the remote cross-site relays upon the master nodes. This becomes a concern when there are many sites, all of which are pulling THL from the same remote master nodes.

    Of course, when using this option, one must accept that the downstream data may be delayed by the additional hop, and that the data replicated to the remote sites could (and probably would) be older than it would be using the standard topology.

    Tuning Cross-Site Replication How To Configure the Replication Streams to Meet Your Needs

    To configure the cluster to prefer slave nodes over master nodes, use the --policy-relay-from-slave=true option to tpm.

    Both master and slave nodes remain in the list of possible hosts, so if no slave nodes are available during a switch or failover event, then a master will be used.
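    As a sketch of how that option is typically applied on an INI-based installation (the file location and section are assumptions; confirm against the Continuent documentation for your release):

    # /etc/tungsten/tungsten.ini
    [defaults]
    ...
    policy-relay-from-slave=true

    # then push the configuration change out
    shell> tpm update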

    Summary The Wrap-Up

    In this blog post we discussed Tungsten Composite Multi-Master Cluster cross-site replication configuration.

    To learn about Continuent solutions in general, check out https://www.continuent.com/solutions

    The Library Please read the docs!

    For more information about monitoring Tungsten clusters, please visit https://docs.continuent.com.

    Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

    For more information, please visit https://www.continuent.com/solutions

    Want to learn more or run a POC? Contact us.

    Check Out the Percona Live 2019 Live Stream!

    Feeling the FOMO? Now you don’t have to! With the Percona Live Open Source Database Conference 2019 Live Stream, you’ll practically be there in person, living the experience to the fullest. Our keynote stage features morning and lunchtime keynotes on Wednesday, May 29th and morning keynotes on Thursday, May 30th. Join us at home or on the go, from any corner of the world!

    Percona is streaming the keynotes on Wednesday, May 29, 2019 at 9 AM CDT and at 1:25 PM CDT, and on Thursday, May 30, 2019 at 9 AM CDT.

    Keynote speakers include Continuent, VividCortex, AWS, Facebook, MariaDB Foundation, and many more. The keynote panel will feature topics such as the changing face of Open Source.

    The list of keynote talks and speakers for each day is as follows:

    Wednesday, May 29, 2019 Day 1 – Morning Keynotes (9 AM – 10:30 AM CT)

    Welcome – Matt Yonkovit, Percona
    The State of Open Source Databases – Peter Zaitsev, Percona
    MyRocks in the Real World – Yoshinori Matsunobu, Facebook
    MariaDB 10.4: Where We Are Now – Vicentiu Ciorbaru, MariaDB Foundation
    TiDB 3.0: What’s New and What’s Next? – Ed Huang, PingCAP
    Open Source at Amazon – Arun Gupta, AWS

    Wednesday, May 29, 2019 Day 1 – Lunch Keynotes (1:25 PM – 1:55 PM CT)
    Panel Discussion: The Changing Face of Open Source – Shawn Turnbull, Percona
    MySQL Community Awards – Emily Slocombe, Square

    Thursday, May 30, 2019 Day 2 – Morning Keynotes (9 AM – 10:25 AM)
    Welcome Back – Matt Yonkovit, Percona
    The State of the Dolphin – Frédéric Descamps, Oracle
    The Color of Open Source Money: Are some open source business models more acceptable than others? – Eero Teerikorpi, Continuent
    Bringing DevOps to the Database – Baron Schwartz, VividCortex
    From Respected by DBAs to Loved by Application Developers – Craig Kerstiens, Microsoft
    Closing Remarks – Matt Yonkovit, Percona

    To learn more about Percona Live Open Source Database Conference visit our page.

    Subscribe NOW to Percona on YouTube.

    Follow #PerconaLive on social:
    Facebook: https://www.facebook.com/Percona
    Twitter: https://twitter.com/percona

    Watch More: YouTube

    How to fetch single row from database in php codeigniter

    If you require only one record from a database table using a CodeIgniter query, you can do it using row(). We can easily return one row from the database in CodeIgniter. I will give you a simple example of fetching a single record from the database using MySQL and CodeIgniter. Sometimes we need to get only one…

    Performance Tuning Tungsten Replication to MySQL

    The Question Recently, a customer asked us:

    Why would Tungsten Replicator be slow to apply to MySQL?

    The Answer Performance Tuning 101

    When you run trepctl status and see:
    appliedLatency : 7332.394
    like this on a slave, it is almost always due to the inability of the target database to keep up with the applier.

    This means that we often need to look first to the database layer for the solution.

    Here are some of the things to think about when dealing with this issue:

    Architecture and Environment
     Are you on bare metal?
     Using the cloud?
     Dev or Prod?
     Network speed and latency?
     Distance the data needs to travel?
     Network round trip times? Is the replicator applying to a database installed on the same server or is it applying over the network to a remote server?

    shell> time mysql -e "select 1" ... real 0m0.004s user 0m0.003s sys 0m0.000s

    Observe the value for real – if it is 15ms or more chances are you will see slow apply rates.

    MySQL Binary Logging
     What binary logging format are you using?
    mysql> select @@global.binlog_format;

    • For non-Multi-Master deployments, use MIXED
    • For Multi-Master topologies, use ROW

    MySQL Tables
     Verify that all tables are InnoDB.
     Also make sure all tables have a Primary Key (a scripted check for both follows this list).
     Do the tables have proper indexes?
     Use the slow query log to identify if any tungsten-owned queries are taking a long time
     The MySQL EXPLAIN command is very useful in understanding slow queries:
    https://dev.mysql.com/doc/refman/5.7/en/using-explain.html
    https://dev.mysql.com/doc/refman/5.7/en/explain-output.html
    https://dev.mysql.com/doc/refman/5.7/en/explain-extended.html
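    The engine and primary-key checks in the list above can be scripted against information_schema, for example:

    # tables not using InnoDB (system schemas excluded)
    mysql -e "SELECT table_schema, table_name, engine
      FROM information_schema.tables
      WHERE engine <> 'InnoDB' AND table_type = 'BASE TABLE'
        AND table_schema NOT IN ('mysql','information_schema','performance_schema','sys');"
    # tables without a primary key
    mysql -e "SELECT t.table_schema, t.table_name
      FROM information_schema.tables t
      LEFT JOIN information_schema.table_constraints c
        ON c.table_schema = t.table_schema AND c.table_name = t.table_name
       AND c.constraint_type = 'PRIMARY KEY'
      WHERE c.constraint_name IS NULL AND t.table_type = 'BASE TABLE'
        AND t.table_schema NOT IN ('mysql','information_schema','performance_schema','sys');"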

    MySQL Locks
     MySQL locks can prevent queries from completing in a timely manner. Check for queries that are holding locks open:

    mysql> show full processlist; mysql> show open tables where in_use <> 0; mysql> show engine innodb status;

    OS Memory
     Is the database configured to use enough memory?
     Check for lack of server memory

    shell> free -m shell> top

    Physical Disk
     Check for disk i/o contention – this is often the real issue, especially with remote disk
    shell> iostat -xpne 2
     Add SSD storage into your production systems
     Split filesystems up
     Implement multi-volume striping for improved i/o speed
     Make sure there are enough IOPS if using cloud instances

    Summary The Wrap-Up

    In this blog post we discussed Tungsten Replicator applier performance tuning.

    To learn about Continuent solutions in general, check out https://www.continuent.com/solutions

    The Library Please read the docs!

    For more information about monitoring Tungsten clusters, please visit https://docs.continuent.com.

    Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

    For more information, please visit https://www.continuent.com/solutions

    Want to learn more or run a POC? Contact us.

    errno: 24 - Too many open files

    You might see this error inside MySQL's error log file or during mysql_upgrade operation.

    Can't open file: ‘./db/table.frm' (errno: 24 - Too many open files)
    To solve this issue, you will need to change the open files limit at system and db service level.

    --> To check and change this limit at the system level, we use ulimit client tool to set it dynamically.

    * Check current system open file limit.
    root@test:~# ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 515256
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 515256
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    root@test:~# ulimit -n
    1024
    root@test:~# ulimit -Hn
    4096
    root@test:~# ulimit -Sn
    1024
    * Change system open file limit on the fly(dynamically).
    root@test:~# ulimit -n 65535
    root@test:~# ulimit -Hn 65535
    root@test:~# ulimit -Sn 65535
    root@test:~# ulimit -n
    65535
    root@test:~# ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 515256
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 65535
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 515256
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    --> To make the open files limit change permanent at the system level, update the file below with the following two lines for the mysql user. That way the change will persist after the server is rebooted.

    vim /etc/security/limits.conf

    ...
    mysql hard nofile 65535
    mysql soft nofile 65535
    --> On most OSes, such as CentOS, Ubuntu, and RHEL: locate your mysqld.service file and change the LimitNOFILE value, found under the [Service] section, to 65535.

    vim /usr/lib/systemd/system/mysqld.service

    ...
    # Sets open_files_limit
    LimitNOFILE = 65535

    service mysql restart
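    Two small follow-ups worth adding here (the exact service name varies by distribution, so treat 'mysqld' below as an assumption): systemd only re-reads unit files after a daemon-reload, and you can confirm from inside the server which limit MySQL actually picked up.

    systemctl daemon-reload          # make systemd re-read the edited unit file
    systemctl restart mysqld         # or 'systemctl restart mysql', depending on the package
    mysql -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit';"   # confirm the limit MySQL is using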





