Planet MySQL

MariaDB AX Distributed Tarball Installation

Faisal | Wed, 08/01/2018 - 08:13

This guide is meant to help set up a MariaDB AX cluster using the tarball (tar.gz) binary image instead of RPM packages, with a non-root account, on CentOS 7 machines. We will still need the root account and the yum package manager to install a few dependencies.

By the end of this guide, we will have a five-node cluster running on local storage: two User Module (UM) nodes and three Performance Module (PM) nodes.

Summary

Download the latest tarball binaries for MariaDB AX.

The following is the summary of tasks to be performed:

  • VM OS prerequisites setup
    • We will be using CentOS 7; the steps should be identical for Red Hat Enterprise Linux.
  • Create the MariaDB ColumnStore owner account and group, mcsadm, and set its password.
  • Set up the /etc/hosts file on all the nodes with IP-to-hostname mappings for easier access.
  • Download the ColumnStore tarball and extract it under /home/mcsadm.
  • Generate an SSH key on PM1 as the mcsadm user and copy the public key to all the nodes.
  • Generate an SSH key on UM1 and copy the public key to UM1 and UM2.
    • This is used for UM1 to UM2 data replication.
  • Generate an SSH key on UM2 and copy the public key to UM2 and UM1.
    • This is used for UM2 to UM1 data replication.
  • Server preparation
    • Using the root user
      • Install the ColumnStore dependencies on all nodes using yum
      • Set up umask on all nodes
      • Set up sudo access for the mcsadm user on all nodes
      • Set up ulimit for the mcsadm user on all nodes
  • Test the setup using the ClusterTest tool as the mcsadm user
Preparing the VM OS

There are a few important things that are required before we start the installations.

Note: the following steps must be performed and validated on all the VMs.
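Since every step in this section has to be repeated and validated on all five machines, a small helper loop can save time. This is only a sketch: it assumes root SSH access to each VM, and the IP addresses are the ones mapped in /etc/hosts later in this guide.

for ip in 192.168.56.104 192.168.56.105 192.168.56.106 192.168.56.107 192.168.56.108; do
    echo "--- $ip ---"
    ssh root@"$ip" 'sestatus; systemctl is-active firewalld'   # spot-check each node's state
done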

Disable SELinux

To do this, edit the SELinux configuration file /etc/selinux/config and set SELINUX=disabled, so that it looks like this:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
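Alternatively, a one-liner can make the same change. This is a sketch that assumes the stock CentOS 7 config path, so verify the result before relying on it:

sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config   # should print SELINUX=disabled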

After the file is updated, reboot the VM for the change to take permanent effect. To check that SELinux has actually been disabled, use either of the two commands (sestatus / getenforce):

[root@localhost ~] sestatus
SELinux status: disabled
[root@localhost ~] getenforce
Disabled

Disable firewalld

Firewalld is a standard service on RHEL 7 / CentOS 7 that is managed with systemctl. Stop and disable it on all the nodes, then check its state with systemctl status firewalld:

[root@localhost ~] systemctl stop firewalld
[root@localhost ~] systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~] systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Change localedef

Execute the following on all the nodes to ensure a proper character set across the cluster.

[root@localhost ~] localedef -i en_US -f UTF-8 en_US.UTF-8

Performance optimization considerations

The following are some network-related optimizations we can make; please also consult your network/OS administrators for other optimizations that might be beneficial.

GbE NIC settings:

Modify /etc/rc.d/rc.local to include the following:

/sbin/ifconfig eth0 txqueuelen 10000

Modify /etc/sysctl.conf to include the following:

# increase TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# recommended to increase this for 1000 BT or higher
net.core.netdev_max_backlog = 2500
# for 10 GigE, use this
net.core.netdev_max_backlog = 30000

NOTE: Make sure there is only one setting of net.core.netdev_max_backlog in the /etc/sysctl.conf file.
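The new kernel settings can be applied without a reboot by reloading /etc/sysctl.conf; a quick sketch, run as root on each node:

sysctl -p                            # reload settings from /etc/sysctl.conf
sysctl net.core.netdev_max_backlog   # confirm the intended value took effect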

Cache memory settings

To optimize Linux caching of directories and inodes, vm.vfs_cache_pressure can be set lower than the default of 100, which makes the kernel retain inode and directory caches. This helps improve read performance. A value of 10 is suggested. The following commands must all be run as the root user or with sudo.

To check the current value:

cat /proc/sys/vm/vfs_cache_pressure

Add the following to /etc/sysctl.conf to make the cache changes permanent.

vm.vfs_cache_pressure = 10

Create mcsadm account

Now we are ready to create the MariaDB ColumnStore owner account, mcsadm, which will be used for the tarball installation.

The mcsadm user and group must be created on all the nodes. sudo privilege for the mcsadm user is also mandatory; this sudo requirement is being removed in future releases of ColumnStore.

From here on, all steps are performed as the mcsadm user with the help of sudo unless specified otherwise.

Remember to set the mcsadm user's password, as it will be required later for the key exchange.

[root@localhost ~] groupadd mcsadm
[root@localhost ~] useradd -g mcsadm mcsadm
[root@localhost ~] passwd mcsadm
Changing password for user mcsadm.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Setup /etc/hosts file

Add the following to the /etc/hosts file on all the nodes. This ensures all nodes are reachable by their respective hostnames.

192.168.56.104 UM1
192.168.56.105 UM2
192.168.56.106 PM1
192.168.56.107 PM2
192.168.56.108 PM3
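A quick loop such as this (a sketch, assuming the host names above) verifies that every node resolves and responds before going any further:

for h in um1 um2 pm1 pm2 pm3; do
    ping -c 1 -W 2 "$h" > /dev/null && echo "$h OK" || echo "$h UNREACHABLE"
done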

Download

In this case, we are going to download the tar file directly on the server, but feel free to download it externally and transfer it to the server using your favorite secure file transfer tool.

[mcsadm@pm1 ~]$ wget https://downloads.mariadb.com/MariaDB/mariadb-columnstore/latest/centos/x86_64/7/mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz
--2018-06-30 14:02:50-- https://downloads.mariadb.com/MariaDB/mariadb-columnstore/latest/centos/x86_64/7/mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz
Resolving downloads.mariadb.com (downloads.mariadb.com)... 51.255.94.155, 2001:41d0:1004:249b::
Connecting to downloads.mariadb.com (downloads.mariadb.com)|51.255.94.155|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 489245964 (467M) [application/octet-stream]
Saving to: ‘mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz’
100%[================================================================>] 489,245,964 5.54MB/s in 2m 16s
2018-06-30 14:05:13 (3.44 MB/s) - ‘mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz’ saved [489245964/489245964]
[mcsadm@pm1 ~]$ ls -rlt
total 477780
-rw-rw-r-- 1 mcsadm mcsadm 489245964 Jun 15 16:45 mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz
[mcsadm@pm1 downloads]$ tar -zxf mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz
[mcsadm@pm1 ~]$ ls -rlt
total 0
drwxr-xr-x 3 mcsadm mcsadm 25 Jun 11 19:29 mariadb
-rw-rw-r-- 1 mcsadm mcsadm 489245964 Jun 15 16:45 mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz
[mcsadm@pm1 ~]$ pwd
/home/mcsadm

Generating SSH Keys

Once the tarball is downloaded and extracted under the /home/mcsadm folder, generate a key on the PM1 node as the mcsadm user using ssh-keygen, and then copy it to all the nodes using ssh-copy-id -i.
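The transcript below copies the key one node at a time; a loop like the following does the same in one pass (a sketch, assuming the host names from /etc/hosts and the mcsadm password at each prompt):

ssh-keygen -t rsa                      # accept the default key path
for h in pm1 pm2 pm3 um1 um2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$h"
done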

[mcsadm@pm1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mcsadm/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/mcsadm/.ssh/id_rsa.
Your public key has been saved in /home/mcsadm/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:IcAWtetlQw8vHS3QwAgS4vphvY22HdCJaCHYqRHa1ab mcsadm@um1
The key's randomart image is:
+---[RSA 2048]----+
|. . o..-.+ .=.   |
|o= + .B.+ +o *.  |
|= B .+.-. .+B.o  |
| * X o o E.+o-   |
|  * + o S .      |
| -+ o == . E     |
|  E o o--B       |
|   - + .         |
|    o .          |
+----[SHA256]-----+
[mcsadm@pm1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub pm1
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub"
The authenticity of host 'pm1 (192.168.56.106)' can't be established.
ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g.
ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@pm1's password: ******
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'pm1'"
and check to make sure that only the key(s) you wanted were added.
[mcsadm@pm1 ~]$ ssh pm1
Last login: Sat Jun 30 13:37:50 2018
[mcsadm@pm1 ~]$ exit
logout
Connection to pm1 closed.
[mcsadm@pm1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub pm2
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub"
The authenticity of host 'pm2 (192.168.56.107)' can't be established.
ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g.
ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@pm2's password: ******
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'pm2'"
and check to make sure that only the key(s) you wanted were added.
[mcsadm@pm1 ~]$ ssh pm2
[mcsadm@pm2 ~]$ exit
logout
Connection to pm2 closed.
[mcsadm@pm1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub pm3
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub"
The authenticity of host 'pm3 (192.168.56.108)' can't be established.
ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g.
ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@pm3's password: ******
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'pm3'"
and check to make sure that only the key(s) you wanted were added.
[mcsadm@pm1 ~]$ ssh pm3
[mcsadm@pm3 ~]$ exit
logout
Connection to pm3 closed.
[mcsadm@pm1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um1
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub"
The authenticity of host 'um1 (192.168.56.104)' can't be established.
ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g.
ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@um1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'um1'"
and check to make sure that only the key(s) you wanted were added.
[mcsadm@pm1 ~]$ ssh um1
[mcsadm@um1 ~]$ exit
logout
Connection to um1 closed.
[mcsadm@pm1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um2
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub"
The authenticity of host 'um2 (192.168.56.105)' can't be established.
ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g.
ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@um2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'um2'"
and check to make sure that only the key(s) you wanted were added.
[mcsadm@pm1 ~]$ ssh um2
[mcsadm@um2 ~]$ exit
logout
Connection to um2 closed.
[mcsadm@pm1 ~]$

Key Exchange between UM1 and UM2

Generate an SSH key on UM1 and copy it to both UM1 and UM2; similarly, generate another SSH key on UM2 and copy it to both UM2 and UM1.
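Once the keys below are in place, a non-interactive check like this, run from each UM node, confirms that passwordless login works in both directions (a quick sketch):

for h in um1 um2; do
    ssh -o BatchMode=yes "$h" hostname   # BatchMode fails fast instead of prompting if a key is missing
done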

UM1:

[mcsadm@um1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mcsadm/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/mcsadm/.ssh/id_rsa.
Your public key has been saved in /home/mcsadm/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:IcXWteJlQw8vHK3YwAgS4vphvY2bHdCJaCHYqRHa1ro mcsadm@um1
The key's randomart image is:
+---[RSA 2048]----+
|. . o..o.+ .=.   |
|o= + . .+ +o *.  |
|= B . ... .+B.o  |
| * = o o o.+oo   |
|o * + o S .      |
| + o =           |
|  E o o          |
|   + .           |
|    o .          |
+----[SHA256]-----+
[mcsadm@um1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um1
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub"
The authenticity of host 'um1 (192.168.56.104)' can't be established.
ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g.
ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@um1's password:
Permission denied, please try again.
mcsadm@um1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'um1'"
and check to make sure that only the key(s) you wanted were added.
[mcsadm@um1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um2
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub"
The authenticity of host 'um2 (192.168.56.105)' can't be established.
ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g.
ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@um2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'um2'"
and check to make sure that only the key(s) you wanted were added.
[mcsadm@um1 ~]$ ssh um2
[mcsadm@um2 ~]$ exit
logout
Connection to um2 closed.
[mcsadm@um1 ~]$

UM2:

[mcsadm@um2 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mcsadm/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/mcsadm/.ssh/id_rsa.
Your public key has been saved in /home/mcsadm/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:VSiUVyGkQQYcQE/WxeNP6wiWIWtNOfpSqzdEG/uImA8 mcsadm@um2
The key's randomart image is:
+---[RSA 2048]----+
| .oo=*==+o+.     |
|  +..+o=o        |
| . .=..          |
|  . B.. .        |
|   BSB o .       |
|  + X o          |
| E+ * = o        |
| o.o * o .       |
|  .o+ .          |
+----[SHA256]-----+
[mcsadm@um2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um2
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub"
The authenticity of host 'um2 (192.168.56.105)' can't be established.
ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g.
ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@um2's password:
Permission denied, please try again.
mcsadm@um2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'um2'"
and check to make sure that only the key(s) you wanted were added.
[mcsadm@um2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um1
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub"
The authenticity of host 'um1 (192.168.56.104)' can't be established.
ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g.
ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@um1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'um1'"
and check to make sure that only the key(s) you wanted were added.

Install Dependencies and Configurations

Using the root user, install all the required packages and set umask, sudo privileges, and ulimit for the mcsadm user. All of these are required by ColumnStore on all the nodes; also disable "tty" for sudoers.

[root@pm1 local] yum -y install boost expect perl perl-DBI openssl zlib file sudo libaio rsync snappy net-tools perl-DBD-MySQL
[root@pm1 local] echo "umask 022" >> /etc/profile
[root@pm1 local] echo "mcsadm ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@pm1 local] echo "Defaults:mcsadm !requiretty" >> /etc/sudoers
[root@pm1 local] echo "@mcsadm hard nofile 65536" >> /etc/security/limits.conf
[root@pm1 local] echo "@mcsadm soft nofile 65536" >> /etc/security/limits.conf

Cluster test

Once the dependencies have been set up along with the SSH key exchange, execute the ColumnStore cluster tester tool from the PM1 node as the mcsadm user.

[mcsadm@pm1 ~]$ ./mariadb/columnstore/bin/columnstoreClusterTester.sh
*** This is the MariaDB Columnstore Cluster System Test Tool ***
** Validate local OS is supported
Local Node OS System Name : CentOS Linux 7 (Core)
** Run Non-root User directory permissions check on Local Node
Local Node permission test on /tmp : Passed
Local Node permission test on /dev/shm : Passed
** Run MariaDB Console Password check
Passed, no problems detected with a MariaDB password being set without an associated /root/.my.cnf
** Run MariaDB ColumnStore Dependent Package Check
Local Node - Passed, all dependency packages are installed
Failed, Local Node package mariadb-libs is installed, please un-install
Failure occurred, do you want to continue? (y,n) > y
*** Finished Validation of the Cluster, Failures occurred. Check for Error/Failed test results ***
[mcsadm@pm1 ~]$ rpm -qa | grep mariadb
mariadb-libs-5.5.56-2.el7.x86_64
[mcsadm@pm1 ~]$

The above shows a failed cluster test because the default MariaDB libraries shipped with the Linux distribution are already installed; we need to uninstall them for a clean ColumnStore installation.

Remove the MariaDB 5.5 libraries and execute the cluster test one more time.

[mcsadm@pm1 ~]$ sudo rpm -e --nodeps mariadb-libs-5.5.56-2.el7.x86_64
[mcsadm@pm1 ~]$ ./mariadb/columnstore/bin/columnstoreClusterTester.sh
*** This is the MariaDB Columnstore Cluster System Test Tool ***
** Validate local OS is supported
Local Node OS System Name : CentOS Linux 7 (Core)
** Run Non-root User directory permissions check on Local Node
Local Node permission test on /tmp : Passed
Local Node permission test on /dev/shm : Passed
** Run MariaDB Console Password check
Passed, no problems detected with a MariaDB password being set without an associated /root/.my.cnf
** Run MariaDB ColumnStore Dependent Package Check
Local Node - Passed, all dependency packages are installed
Local Node - Passed, all packages that should not be installed aren't installed
*** Finished Validation of the Cluster, all Tests Passed ***
[mcsadm@pm1 ~]$

All the tests pass, so now it's time to configure ColumnStore for two UM and three PM nodes.

Post Install

Once the cluster test tool succeeds, execute post-install and follow the instructions for a distributed install. The user inputs are as follows:

  • Select the type of Data Storage [1=internal, 2=external] () > 1
    • Internal means local disk
  • Enter number of User Modules [1,1024] () > 2

    • Since we are doing a 2 UM node setup

  • Enter Nic Interface #1 Host Name () > um1

    • Enter "um1" here based on the /etc/hosts file setup

  • Enter Nic Interface #1 IP Address of um1 (192.168.56.104) >

    • Just press enter to accept the IP already identified by the setup

  • Enter Nic Interface #2 Host Name (unassigned) >

    • Press Enter without any value here as we are only configuring 1 network interface per node

  • The above three prompts will repeat depending on the number of UM nodes specified

  • Enter number of Performance Modules [1,1024] () > 3

    • Since we are doing a 3 PM node setup

  • Enter Nic Interface #1 Host Name () > pm1

    • Enter "pm1" here based on the /etc/hosts setup

  • Enter Nic Interface #1 IP Address of pm1 (192.168.56.106) >

    • Just press enter to accept the IP already identified by the setup

  • Enter Nic Interface #2 Host Name (unassigned) >

    • Press Enter without any value here as we are only configuring 1 network interface per node

  • The above three will be repeated depending on the number of PM nodes specified

[mcsadm@pm1 ~]$ ./mariadb/columnstore/bin/post-install --installdir=$HOME/mariadb/columnstore
The next steps are:

If installing on a pm1 node:

export COLUMNSTORE_INSTALL_DIR=/home/mcsadm/mariadb/columnstore
export LD_LIBRARY_PATH=/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib/mysql:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib
/home/mcsadm/mariadb/columnstore/bin/postConfigure -i /home/mcsadm/mariadb/columnstore

If installing on a non-pm1 using the non-distributed option:

export COLUMNSTORE_INSTALL_DIR=/home/mcsadm/mariadb/columnstore
export LD_LIBRARY_PATH=/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib/mysql:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib
/home/mcsadm/mariadb/columnstore/bin/columnstore start

Copy the above export lines into the ~/.bashrc of the mcsadm user on the PM1 node. Make sure the tar.gz file is under the /home/mcsadm home folder before executing postConfigure.
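For example, something like this appends them (a sketch: the LD_LIBRARY_PATH printed by post-install repeats the same components several times, so only the unique ones are kept here):

cat >> ~/.bashrc <<'EOF'
export COLUMNSTORE_INSTALL_DIR=/home/mcsadm/mariadb/columnstore
export LD_LIBRARY_PATH=/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib/mysql:/home/mcsadm/mariadb/columnstore/mysql/lib
EOF
source ~/.bashrc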

[mcsadm@pm1 ~]$ /home/mcsadm/mariadb/columnstore/bin/postConfigure -i /home/mcsadm/mariadb/columnstore

This is the MariaDB ColumnStore System Configuration and Installation tool.
It will Configure the MariaDB ColumnStore System and will perform a Package
Installation of all of the Servers within the System that is being configured.

IMPORTANT: This tool should only be run on the Parent OAM Module
           which is a Performance Module, preferred Module #1

Prompting instructions:
        Press 'enter' to accept a value in (), if available or
        Enter one of the options within [], if available, or
        Enter a new value

===== Setup System Server Type Configuration =====

There are 2 options when configuring the System Server Type: single and multi
  'single'  - Single-Server install is used when there will only be 1 server configured
              on the system. It can also be used for production systems, if the plan is
              to stay single-server.
  'multi'   - Multi-Server install is used when you want to configure multiple servers now or
              in the future. With Multi-Server install, you can still configure just 1 server
              now and add on addition servers/modules in the future.

Select the type of System Server install [1=single, 2=multi] (2) > 2

===== Setup System Module Type Configuration =====

There are 2 options when configuring the System Module Type: separate and combined
  'separate' - User and Performance functionality on separate servers.
  'combined' - User and Performance functionality on the same server

Select the type of System Module Install [1=separate, 2=combined] (1) > 1

Seperate Server Installation will be performed.

NOTE: Local Query Feature allows the ability to query data from a single Performance
      Module. Check MariaDB ColumnStore Admin Guide for additional information.

Enable Local Query feature? [y,n] (n) > n

NOTE: The MariaDB ColumnStore Schema Sync feature will replicate all of the
      schemas and InnoDB tables across the User Module nodes. This feature can be enabled
      or disabled, for example, if you wish to configure your own replication post installation.

MariaDB ColumnStore Schema Sync feature is Enabled, do you want to leave enabled? [y,n] (y) > y

NOTE: MariaDB ColumnStore Replication Feature is enabled

Enter System Name (columnstore-1) >

===== Setup Storage Configuration =====

----- Setup Performance Module DBRoot Data Storage Mount Configuration -----

There are 2 options when configuring the storage: internal or external
  'internal' -    This is specified when a local disk is used for the DBRoot storage.
                  High Availability Server Failover is not Supported in this mode
  'external' -    This is specified when the DBRoot directories are mounted.
                  High Availability Server Failover is Supported in this mode.

Select the type of Data Storage [1=internal, 2=external] (1) > 1

===== Setup Memory Configuration =====

NOTE: Setting 'NumBlocksPct' to 70%
      Setting 'TotalUmMemory' to 50%

===== Setup the Module Configuration =====

----- User Module Configuration -----

Enter number of User Modules [1,1024] (2) > 2

*** User Module #1 Configuration ***

Enter Nic Interface #1 Host Name (um1) > um1
Enter Nic Interface #1 IP Address of um1 (192.168.56.104) >
Enter Nic Interface #2 Host Name (unassigned) >

*** User Module #2 Configuration ***

Enter Nic Interface #1 Host Name (um2) > um2
Enter Nic Interface #1 IP Address of um2 (192.168.56.105) >
Enter Nic Interface #2 Host Name (unassigned) >

----- Performance Module Configuration -----

Enter number of Performance Modules [1,1024] (3) > 3

*** Parent OAM Module Performance Module #1 Configuration ***

Enter Nic Interface #1 Host Name (pm1) > pm1
Enter Nic Interface #1 IP Address of pm1 (192.168.56.106) >
Enter Nic Interface #2 Host Name (unassigned) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm1' (1) > 1

*** Performance Module #2 Configuration ***

Enter Nic Interface #1 Host Name (pm2) > pm2
Enter Nic Interface #1 IP Address of pm2 (192.168.56.107) >
Enter Nic Interface #2 Host Name (unassigned) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm2' (2) > 2

*** Performance Module #3 Configuration ***

Enter Nic Interface #1 Host Name (pm3) > pm3
Enter Nic Interface #1 IP Address of pm3 (192.168.56.108) >
Enter Nic Interface #2 Host Name (unassigned) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm3' (3) > 3

===== System Installation =====

System Configuration is complete.
Performing System Installation.

Performing a MariaDB ColumnStore System install using a Binary package
located in the /home/mcsadm directory.

Next step is to enter the password to access the other Servers.
This is either your password or you can default to using a ssh key
If using a password, the password needs to be the same on all Servers.

Enter password, hit 'enter' to default to using a ssh key, or 'exit' >

----- Performing Install on 'um1 / um1' -----

Install log file is located here: /tmp/um1_binary_install.log

----- Performing Install on 'um2 / um2' -----

Install log file is located here: /tmp/um2_binary_install.log

----- Performing Install on 'pm2 / pm2' -----

Install log file is located here: /tmp/pm2_binary_install.log

----- Performing Install on 'pm3 / pm3' -----

Install log file is located here: /tmp/pm3_binary_install.log

MariaDB ColumnStore Package being installed, please wait ...  DONE

===== Checking MariaDB ColumnStore System Logging Functionality =====

The MariaDB ColumnStore system logging is setup and working on local server

===== MariaDB ColumnStore System Startup =====

System Configuration is complete.
Performing System Installation.

----- Starting MariaDB ColumnStore on local server -----

MariaDB ColumnStore successfully started

MariaDB ColumnStore Database Platform Starting, please wait ...................... DONE

Run MariaDB ColumnStore Replication Setup..  DONE

MariaDB ColumnStore Install Successfully Completed, System is Active

Enter the following command to define MariaDB ColumnStore Alias Commands

. /home/mcsadm/mariadb/columnstore/bin/columnstoreAlias

Enter 'mcsmysql' to access the MariaDB ColumnStore SQL console
Enter 'mcsadmin' to access the MariaDB ColumnStore Admin console

NOTE: The MariaDB ColumnStore Alias Commands are in /etc/profile.d/columnstoreAlias.sh

[mcsadm@pm1 ~]$

The above indicates a successful installation of MariaDB AX and ColumnStore.

In case of any errors, the log files for all the nodes are located under PM1's /tmp folder with node-specific file names.
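A quick way to scan all of them at once for problems (a sketch, using the log file names printed during the install):

grep -iE 'error|fail' /tmp/*_binary_install.log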

Test ColumnStore using mcsmysql from the UM1 node:

[mcsadm@um1 ~]$ mcsmysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 40
Server version: 10.2.15-MariaDB-log Columnstore 1.1.5-1

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+---------------------+
| Database            |
+---------------------+
| calpontsys          |
| columnstore_info    |
| infinidb_querystats |
| infinidb_vtable     |
| information_schema  |
| mysql               |
| performance_schema  |
| test                |
+---------------------+
8 rows in set (0.00 sec)

MariaDB [(none)]> create database testdb;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> use testdb;
Database changed
MariaDB [testdb]> show tables;
Empty set (0.00 sec)

MariaDB [testdb]> create table tab(id int, name varchar(100));
Query OK, 0 rows affected (0.01 sec)

MariaDB [testdb]> show create table tab\G
*************************** 1. row ***************************
       Table: tab
Create Table: CREATE TABLE `tab` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(100) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

MariaDB [testdb]> create table tab_cs(id int, name varchar(100)) engine=columnstore;
Query OK, 0 rows affected (0.39 sec)

MariaDB [testdb]> show create table tab_cs\G
*************************** 1. row ***************************
       Table: tab_cs
Create Table: CREATE TABLE `tab_cs` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(100) DEFAULT NULL
) ENGINE=Columnstore DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

MariaDB [testdb]> insert into tab_cs select rand()*10000, column_name from information_schema.columns;
Query OK, 1921 rows affected (1.70 sec)
Records: 1921 Duplicates: 0 Warnings: 0

MariaDB [testdb]> select count(*) from tab_cs;
+----------+
| count(*) |
+----------+
|     1921 |
+----------+
1 row in set (0.08 sec)

MariaDB [testdb]>

This concludes a successful MariaDB AX distributed tarball install.

Conclusion

MariaDB ColumnStore is a powerful distributed analytical storage engine available as part of MariaDB AX, along with MariaDB MaxScale. Installing it from a tarball is quite an easy task.

Summary:
  • Create a new user/group mcsadm
  • Download and untar the tarball tar.gz file under /home/mcsadm 
  • Perform SSH Key Exchange between PM1 and all nodes
  • Perform SSH Key Exchange between UM1 and UM2 and vice versa
  • Perform a Cluster Test to ensure no problems
  • Execute Post Install
  • Execute Post Configure


A Kind Introduction to MySQL Windowing Functions II

Before I take up from the last blog, I need to introduce RANGE and ROWS.  Windows over data can be framed and this is where things can get wild and woolly.   Table x has a column named x (me being overly creative again) that has the values one through 10.  If we sum the values of x we can get different values depending on how the frame is constructed.

If the frame for the window is defined as the range between 'unbounded preceding and current row', the value for the sum of x will be the sum of the entire column of data.  However, if the frame is defined as the rows between 'unbounded preceding and current row', it will sum the value of the current row and the values of the rows that came before it: 1, 1+2, 1+2+3, etc.


mysql> SELECT x, sum(x) over w as 'rows',
    -> sum(x) over y as 'range'
    -> from x
    -> window w as
    ->        (rows between unbounded preceding and current row),
    ->        y as
    ->        (range between unbounded preceding and current row);
+------+------+-------+
| x    | rows | range |
+------+------+-------+
|    1 |    1 |    55 |
|    2 |    3 |    55 |
|    3 |    6 |    55 |
|    4 |   10 |    55 |
|    5 |   15 |    55 |
|    6 |   21 |    55 |
|    7 |   28 |    55 |
|    8 |   36 |    55 |
|    9 |   45 |    55 |
|   10 |   55 |    55 |
+------+------+-------+
10 rows in set (0.00 sec)

The terminology of frames and windows gets a little confusing and sounds like a home remodeling project.  For now, consider 'unbounded' to be everything in the column, so 'unbounded preceding' would be everything in the column before this row.


mysql> SELECT x, 
       sum(x) over w as 'rows', 
       sum(x) over y as 'range' 
       from x 
       window w as 
         (rows between current row and unbounded following), 
         y as (range between unbounded preceding and current row);
+------+------+-------+
| x    | rows | range |
+------+------+-------+
|    1 |   55 |    55 |
|    2 |   54 |    55 |
|    3 |   52 |    55 |
|    4 |   49 |    55 |
|    5 |   45 |    55 |
|    6 |   40 |    55 |
|    7 |   34 |    55 |
|    8 |   27 |    55 |
|    9 |   19 |    55 |
|   10 |   10 |    55 |
+------+------+-------+
10 rows in set (0.00 sec)

So unbounded following takes all the values in the column after the current row.  So the 'rows' value of 54 where x equals 2 is the grand total of 55 less the value of the first row (and for x = 3 it is 55 less 1 and 2).

More frames and windows next time!


More State Stuff
Last time was the start of a series on the windowing functions that were introduced with MySQL 8. This entry will delve a little deeper into this realm.  The data used for these examples starts off with the World database.

Population By State
Previously the data showed a total population for each state but listed an entry for each city in that state, er, district.  It is quite easy to get a per state total by adding a simple GROUP BY to the query.


mysql> SELECT district, Sum(Population) OVER w 
FROM city 
WHERE CountryCode = 'USA' 
GROUP BY District  
WINDOW w AS (PARTITION BY District) 
LIMIT 10;
+----------------------+------------------------+
| district             | Sum(Population) OVER w |
+----------------------+------------------------+
| Alabama              |                 242820 |
| Alaska               |                 260283 |
| Arizona              |                1321045 |
| Arkansas             |                 183133 |
| California           |                3694820 |
| Colorado             |                 554636 |
| Connecticut          |                 139529 |
| District of Columbia |                 572059 |
| Florida              |                 735167 |
| Georgia              |                 416474 |
+----------------------+------------------------+
10 rows in set (0.00 sec)


Going Back to Ohio
But what if we wanted a little more statistical information on a state's population?  Well, there are several functions just waiting for you.

mysql> select name, Population as 'Pop', 
       SUM(Population) OVER w AS 'sum', 
       RANK() over w as 'rank', 
       percent_rank() OVER w as '%rank' 
       from city where District='Ohio' 
       WINDOW w AS (ORDER BY Population);
+------------+--------+---------+------+-------+
| name       | Pop    | sum     | rank | %rank |
+------------+--------+---------+------+-------+
| Dayton     | 166179 |  166179 |    1 |     0 |
| Akron      | 217074 |  383253 |    2 |   0.2 |
| Toledo     | 313619 |  696872 |    3 |   0.4 |
| Cincinnati | 331285 | 1028157 |    4 |   0.6 |
| Cleveland  | 478403 | 1506560 |    5 |   0.8 |
| Columbus   | 711470 | 2218030 |    6 |     1 |
+------------+--------+---------+------+-------+
6 rows in set (0.01 sec)


Ohio is used here as it has just enough data to illustrate some concepts.  The cities in that state are examined by creating a window ordered by Population, smallest first.  Dayton has the smallest population and its rank is '1', while Columbus, the largest, ranks 6.

PERCENT_RANK() is the relative rank of the row within the window on a scale from zero to one, computed as (rank - 1) divided by (number of rows - 1).  For a fun exercise, rerun the above query with desc added to the window definition (ORDER BY Population desc).

Differences

The functions first_value and last_value provide a way to see how much smaller the populations of the other cities are than that of Columbus. To do this, the current row's value, the last_value, is subtracted from the first value, Columbus's 711,470, to provide the answer.

 mysql> select name, 
         Population as 'Pop', 
         SUM(Population) OVER w AS 'sum', 
        (first_value(Population) over w - last_value(Population) over w) as 'diff'  
        from city 
        where District='Ohio' 
        WINDOW w AS (ORDER BY Population desc) ;
+------------+--------+---------+--------+
| name       | Pop    | sum     | diff   |
+------------+--------+---------+--------+
| Columbus   | 711470 |  711470 |      0 |
| Cleveland  | 478403 | 1189873 | 233067 |
| Cincinnati | 331285 | 1521158 | 380185 |
| Toledo     | 313619 | 1834777 | 397851 |
| Akron      | 217074 | 2051851 | 494396 |
| Dayton     | 166179 | 2218030 | 545291 |
+------------+--------+---------+--------+
6 rows in set (0.00 sec)

But how much smaller are cities three to six than the second city, Cleveland? The NTH_VALUE(expression, n) function allows us to subtract the smaller values from the second entry.
mysql> select name, Population as 'Pop',
       SUM(Population) OVER w AS 'sum',
       (first_value(Population) over w - last_value(Population) over w) as 'diff Columbus',
       (nth_value(Population,2) over w - last_value(Population) over w) as 'diff Cleveland'
       from city where District='Ohio'
       WINDOW w AS (ORDER BY Population desc);
+------------+--------+---------+---------------+----------------+
| name       | Pop    | sum     | diff Columbus | diff Cleveland |
+------------+--------+---------+---------------+----------------+
| Columbus   | 711470 |  711470 |             0 |           NULL |
| Cleveland  | 478403 | 1189873 |        233067 |              0 |
| Cincinnati | 331285 | 1521158 |        380185 |         147118 |
| Toledo     | 313619 | 1834777 |        397851 |         164784 |
| Akron      | 217074 | 2051851 |        494396 |         261329 |
| Dayton     | 166179 | 2218030 |        545291 |         312224 |
+------------+--------+---------+---------------+----------------+
6 rows in set (0.01 sec)
But how big is the gap between each city and the next biggest? For that we need LAG().  LAG() takes the value of a column from the row before (so Cleveland's value for LAG is Columbus's population), and the default lag is 1.  The corresponding LEAD() takes the value from the row after.  So now it is easy to see that Toledo is only 17,666 behind Cincinnati.
mysql> select name,
       Population as 'Pop',
       lag(Population,1) over w as 'lag',
       lead(Population,1) over w as 'lead',
       Population - lag(Population) over w as 'diff'
       from city
       WHERE District = 'Ohio'
       window w as (ORDER BY Population desc);
+------------+--------+--------+--------+---------+
| name       | Pop    | lag    | lead   | diff    |
+------------+--------+--------+--------+---------+
| Columbus   | 711470 |   NULL | 478403 |    NULL |
| Cleveland  | 478403 | 711470 | 331285 | -233067 |
| Cincinnati | 331285 | 478403 | 313619 | -147118 |
| Toledo     | 313619 | 331285 | 217074 |  -17666 |
| Akron      | 217074 | 313619 | 166179 |  -96545 |
| Dayton     | 166179 | 217074 |   NULL |  -50895 |
+------------+--------+--------+--------+---------+
6 rows in set (0.00 sec)


Dividing Up Into Buckets
Another useful function is NTILE, which divvies up the data into a certain number of groups or buckets.  Be sure to use ORDER BY to ensure the rows you have selected are ordered correctly.  Below, the cities are split into four buckets and then into three buckets.
mysql> SELECT name,
       Population as 'Pop',
       NTILE(4) over w as 'ntile4',
       NTILE(3) over w as 'ntile3'
       from city
       where District = 'Ohio'
       Window w as (ORDER BY Population desc);
+------------+--------+--------+--------+
| name       | Pop    | ntile4 | ntile3 |
+------------+--------+--------+--------+
| Columbus   | 711470 |      1 |      1 |
| Cleveland  | 478403 |      1 |      1 |
| Cincinnati | 331285 |      2 |      2 |
| Toledo     | 313619 |      2 |      2 |
| Akron      | 217074 |      3 |      3 |
| Dayton     | 166179 |      4 |      3 |
+------------+--------+--------+--------+
6 rows in set (0.00 sec)






MariaDB 5.5.61, MariaDB Connector/Node.js 0.7.0 and MariaDB Connector/J 2.2.6 now available

The MariaDB Foundation is pleased to announce the availability of MariaDB 5.5.61, the latest stable release in the MariaDB 5.5 series, as well as MariaDB Connector/Node.js 0.7.0, the first alpha release of the new 100% JavaScript non-blocking MariaDB client for Node.js, compatible with Node.js 6+, and MariaDB Connector/J 2.2.6, the latest stable MariaDB Connector/J release. […]

The post MariaDB 5.5.61, MariaDB Connector/Node.js 0.7.0 and MariaDB Connector/J 2.2.6 now available appeared first on MariaDB.org.

Webinar Wednesday, August 1, 2018: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM

Please join Autodesk’s Senior DevOps Engineer, Sanjeet Deshpande, Autodesk’s Senior Database Engineer, Vineet Khanna, and Percona’s Sr. MySQL DBA, Tate McDaniel as they present Migrating to AWS Aurora, Monitoring AWS Aurora with PMM on Wednesday, August 1st, 2018, at 5:00 PM PDT (UTC-7) / 8:00 PM EDT (UTC-4).

Amazon Web Services (AWS) Aurora is one of the most popular cloud-based RDBMS solutions. The main reason for Aurora’s success is that it’s based on the InnoDB storage engine.

In this session, we will talk about how you can efficiently plan for migrating to AWS Aurora using Terraform and Percona products and solutions. We will share our Terraform code for launching AWS Aurora clusters, look at tricks for checking data consistency, verify migration paths and effectively monitor the environment using Percona Monitoring and Management (PMM).

The topics in this session include:

  • Why AWS Aurora? What is the future of AWS Aurora?
  • Build Aurora infrastructure
  • Using Terraform (without data)
  • Restore using Terraform and Percona XtraBackup (using AWS S3 bucket)
  • Verify data consistency
  • Aurora migration
  • 1:1 migration
  • Many:1 migration using Percona Server for MySQL multi-source replication
  • Show benchmarks and PMM dashboard
  • Demo

Register for the webinar.

Sanjeet Deshpande, Senior DevOps Engineer

Sanjeet is a Senior DevOps Engineer with 10+ years of experience, currently working with Autodesk, Singapore. He is experienced in architecting, deploying, and managing cloud infrastructures/applications, and in automation. Sanjeet has worked extensively on AWS services like Lambda, SQS, RDS, and SNS, to name a few. He has also worked closely with engineering teams on different applications and suggested changes to improve their application uptime.

Tate McDaniel, Sr. MySQL DBA

Tate joined Percona in June 2017 as a Remote MySQL DBA. He holds a Bachelors degree in Information Systems and Decision Strategies from LSU. He has 10+ years of experience working with MySQL and operations management. His great love is application query tuning. In his off time, he races sailboats, travels the Caribbean by sailboat, and drives all over in an RV.

Vineet Khanna, Senior Database Engineer

Vineet Khanna, Senior Database Engineer at Autodesk, has 10+ years of experience as a MySQL DBA. His main professional interests are managing complex database environments, improving database performance, and architecting high availability solutions for MySQL. He has handled database environments for organizations like Chegg, Zendesk, and Adobe.

The post Webinar Wednesday, August 1, 2018: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM appeared first on Percona Database Performance Blog.

A beginner’s guide to database multitenancy

Introduction In software terminology, multitenancy is an architectural pattern which allows you to isolate customers even if they are using the same hardware or software components. Multitenancy has become even more attractive with the widespread adoption of cloud computing. A relational database system provides a hierarchy structure of objects which, typically, looks like this: catalog … Continue reading A beginner’s guide to database multitenancy →

The post A beginner’s guide to database multitenancy appeared first on Vlad Mihalcea.

MySQL to Amazon Redshift Replication

In our work, we get a lot of requirements for replicating data from one data source to another. Our team has provided solutions to replicate data from MySQL to Vertica, Amazon Redshift, and Hadoop. Of these, Amazon Redshift replication is a bit complicated, as Amazon Redshift is a Database as a Service (DBaaS) and the process is not straightforward.

So, I take this opportunity to show how to replicate a specific set of tables from MySQL to AWS Redshift using Tungsten Replicator.

1.0. Tungsten Replicator:

Tungsten Replicator is an open source replication engine that supports data extraction from MySQL, MySQL variants such as RDS, Percona Server, and MariaDB, and Oracle, and allows the extracted data to be applied to other data sources such as Vertica, Cassandra, Redshift, etc.

Tungsten Replicator includes support for parallel replication, and advanced topologies such as fan-in and multi-master, and can be used efficiently in cross-site deployments.

1.1.0. General Architecture:

There are three major components in Tungsten Replicator:
1. Extractor / Master Service
2. Transaction History Log (THL)
3. Applier / Slave Service

1.1.1. Extractor / Master Service:

The extractor component reads data from MySQL’s binary log and writes that information into the Transaction History Log (THL).

1.1.2. Transaction History Log (THL):

The Transaction History Log (THL) acts as a translator between two different data sources. It stores transactional data from different data servers in a universal format, written by the replicator service acting as a master, which can then be processed by the Applier / Slave service.

1.1.3. Applier / Slave Service:

All the raw row data recorded in the THL logs is re-assembled or constructed into another format, such as JSON, BSON, or external CSV formats, that enables the data to be loaded in bulk batches into a variety of different targets.

Statement-based information is therefore not supported for heterogeneous deployments, so it is mandatory that the binary log format on MySQL is ROW.
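The relevant settings on the source MySQL server can be checked up front; a minimal sketch, assuming a local login:

mysql -e "SELECT @@binlog_format, @@binlog_row_image;"
# expected output: ROW and FULL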

1.2.0. Pre-Requisites:

1.2.1. Server Packages:
  • JDK 7 or higher
  • Ant 1.8 or higher
  • Ruby
  • Net-SSH
1.2.2. MySQL:
  • All the tables to be replicated must have a primary key (a quick check for this is sketched after this list).
  • The following configuration should be enabled on MySQL:
    binlog-format         = row
    binlog-row-image      = full
    collation-server      = utf8_general_ci
    character-set-server  = utf8
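The primary-key requirement can be verified with a query against information_schema; a sketch, assuming the new_year database used later in this post:

mysql -e "
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_schema = 'new_year'
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;"
# any table listed here is missing a primary key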
1.2.3. Redshift:
  • The Redshift database name and schema name should be the same as the MySQL database name of the tables to be replicated.

1.2.4. S3 Bucket:

  • Read and write access to an AWS S3 bucket (an access key and secret key are required).
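Bucket access can be verified up front with the AWS CLI (a sketch, assuming the CLI is installed and configured with the same key pair):

aws s3 ls s3://mydbops-migration   # should list the bucket contents without an AccessDenied error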
2.0. Requirement:
  • Consider that the servers with the below details are used for this demo.

AWS EC2 MySQL Server  – 172.19.12.234
AWS Redshift                     – 172.19.12.116 (Database as a Service)
AWS S3 bucket                  – s3://mydbops-migration

As Redshift is a Database as a Service, we just have an endpoint to connect to. Therefore, we will be installing both the Tungsten Master and Slave services on the MySQL server itself.

  • We need to replicate the tables emp and emp_records from the new_year database on the MySQL server to Redshift. The structures of the tables are given below.
CREATE TABLE `emp` (
  `no` int(11) NOT NULL,
  `city` varchar(50) DEFAULT NULL,
  `state` varchar(50) DEFAULT NULL,
  PRIMARY KEY (`no`)
) ENGINE=InnoDB;

CREATE TABLE `emp_records` (
  `no` int(11) NOT NULL,
  `name` varchar(50) DEFAULT NULL,
  `address` varchar(50) DEFAULT NULL,
  PRIMARY KEY (`no`)
) ENGINE=InnoDB;

 

3.0. Implementation:

The implementation consists of following steps.

  1. Installation / Building tungsten from source
  2. Preparing equivalent schema for Redshift
  3. Configuring Master service
  4. Configuring Slave service
  5. Generating the worker tables (temp tables used by Tungsten) to be created on Redshift for replication
  6. Start the replication

3.1. Installation / Building From Source:
  • Download the source package from Git.
#git clone https://github.com/continuent/tungsten-replicator.git
  • Compile this package; it will generate the tungsten-replicator tar.gz file. Also create a folder named tungsten.
#sh tungsten-replicator/builder/build.sh
#mkdir -p tungsten
  • Once the tar file is generated, extract it into the tungsten folder created above.
#tar --strip-components 1 -zxvf tungsten-replicator/builder/build/tungsten-replicator-5.2.1.tar.gz -C tungsten/
  • Now that we have the Tungsten binaries, clean up the source package unless it is still required.
#rm -rf tungsten-replicator

 

3.2. Preparing equivalent schema for Redshift:
  • Create database new_year on Redshift.
dev=# create database new_year;
CREATE DATABASE
  • The new database was created. Now I am going to create a new schema.
  • Before creating the schema, first switch to the new_year database.
dev=# \c new_year
psql (9.2.24, server 8.0.2)
  • Then create the tables in the new_year schema.
new_year=# create table new_year.emp(no int primary key, city varchar(50), state varchar(50));
CREATE TABLE
new_year=# create table new_year.emp_records(no int primary key, name varchar(50), address varchar(50));
CREATE TABLE

Note:

  • If you do not mention the schema name while creating a table, the table will be created inside the public schema.
  • To check that the tables are created inside the correct new_year schema:
new_year=# \dt new_year.*;
           List of relations
  schema  |    name     | type  |    owner
----------+-------------+-------+--------------
 new_year | emp         | table | redshift-usr
 new_year | emp_records | table | redshift-usr
(2 rows)

 

3.3. Configuring Master Service:
  • Create a replication user on MySQL with the Replication Slave privilege to stream binlogs from MySQL to the Tungsten Master service.
mysql> grant replication slave on *.* to 'tungsten'@'localhost' identified by 'secret';
  • Switch to the tungsten directory and reset the default configuration.
#cd ~/tungsten
#./tools/tpm configure defaults --reset
  • Configure the Master service in the directory of your choice; we have used /opt/master.
  • The following commands will prepare the configuration file for the Master service.
#./tools/tpm configure master \
--install-directory=/opt/master \
--enable-heterogeneous-service=true \
--members=mysql-db-master \
--master=mysql-db-master

#./tools/tpm configure master --hosts=mysql-db-master \
--replication-user=tungsten \
--replication-password=tungsten \
--skip-validation-check=MySQLUnsupportedDataTypesCheck \
--property=replicator.filter.pkey.addColumnsToDeletes=true \
--property=replicator.filter.pkey.addPkeyToInserts=true
  • Once the configuration is prepared, we can install it using tpm.
#./tools/tpm install

Configuration is now complete.  For further information, please consult
Tungsten documentation, which is available at docs.continuent.com.

NOTE  >> Command successfully completed
  • Now Master service will be configured under /opt/master/
  • Start the tungsten Master service.
#/opt/master/tungsten/cluster-home/bin/startall
Starting Tungsten Replicator Service...
Waiting for Tungsten Replicator Service.
running: PID:22291
  • Verify it’s working by checking the master status.
#/opt/master/tungsten/tungsten-replicator/bin/trepctl services
Processing services command...
NAME              VALUE
----              -----
appliedLastSeqno: 0
appliedLatency  : 1.667
role            : master
serviceName     : master
serviceType     : local
started         : true
state           : ONLINE
Finished services command...

#/opt/master/tungsten/tungsten-replicator/bin/trepctl status
Processing status command...
NAME                     VALUE
----                     -----
appliedLastEventId     : mysql-bin.000011:0000000000000510;-1
appliedLastSeqno       : 0
appliedLatency         : 1.667
autoRecoveryEnabled    : false
autoRecoveryTotal      : 0
channels               : 1
clusterName            : master
currentEventId         : mysql-bin.000011:0000000000000510
currentTimeMillis      : 1525355498784
dataServerHost         : mysql-db-master
extensions             :
host                   : mysql-db-master
latestEpochNumber      : 0
masterConnectUri       : thl://localhost:/
masterListenUri        : thl://mysql-db-master:2112/
maximumStoredSeqNo     : 0
minimumStoredSeqNo     : 0
offlineRequests        : NONE
pendingError           : NONE
pendingErrorCode       : NONE
pendingErrorEventId    : NONE
pendingErrorSeqno      : -1
pendingExceptionMessage: NONE
pipelineSource         : jdbc:mysql:thin://mysql-db-master:3306/tungsten_master?noPrepStmtCache=true
relativeLatency        : 21.784
resourcePrecedence     : 99
rmiPort                : 10000
role                   : master
seqnoType              : java.lang.Long
serviceName            : master
serviceType            : local
simpleServiceName      : master
siteName               : default
sourceId               : mysql-db-master
state                  : ONLINE
timeInStateSeconds     : 21.219
timezone               : GMT
transitioningTo        :
uptimeSeconds          : 21.741
useSSLConnection       : false
version                : Tungsten Replicator 5.2.1
Finished status command...
  • If the master did not start properly, refer to the error log (/opt/master/service_logs/trepsvc.log).
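To confirm that events are actually being written to the THL, the thl utility shipped with the replicator can be used; a sketch (the exact flags may vary by Tungsten version):

/opt/master/tungsten/tungsten-replicator/bin/thl info                  # min/max stored sequence numbers
/opt/master/tungsten/tungsten-replicator/bin/thl list -low 0 -high 5   # print the first few events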
3.4. Configuring Slave service:
  • Switch to the tungsten directory and reset the default configuration.
#cd ~/tungsten
#./tools/tpm configure defaults --reset
  • Create a JSON file named s3-config-slave.json in the format below, filling in your AWS S3 bucket details: access key, secret key, and S3 bucket path.
{ "awsS3Path" : "s3://mydbops-migration", "awsAccessKey" : "XXXXXX", "awsSecretKey" : "YYYYYYY", "gzipS3Files" : "false", "cleanUpS3Files" : "true" }
  • Configure the Slave service in the directory of your choice; we have used /opt/slave.
  • The following commands will prepare the configuration file for the Slave service.
#./tools/tpm configure slave \
--install-directory=/opt/slave \
--enable-heterogeneous-service=true \
--members=mysql-db-master
  • Add the replication filter to replicate only those two tables. Use the Redshift host, user, and password to configure the slave service.
#./tools/tpm configure slave --hosts=mysql-db-master \
--replication-host=172.19.12.116 \
--replication-user=redshift-usr \
--replication-password='redshift-pass' \
--datasource-type=redshift \
--batch-enabled=true \
--batch-load-template=redshift \
--redshift-dbname=new_year \
--svc-applier-filters=dropstatementdata,replicate \
--property=replicator.filter.replicate.do=new_year.emp,new_year.emp_records \
--svc-applier-block-commit-interval=10s \
--svc-applier-block-commit-size=5 \
--rmi-port=10002 \
--thl-port=2113 \
--master-thl-port=2112 \
--master-thl-host=mysql-db-master
  • Once the configuration is prepared, we can install it using tpm.
#./tools/tpm install
Configuration is now complete.  For further information, please consult Tungsten documentation, which is available at docs.continuent.com.
NOTE  >> Command successfully completed

Once it completes, copy the s3-config-slave.json file to the slave share directory.

#cp s3-config-slave.json /opt/slave/share/
  • Now the slave is configured. Before starting it, we need to create the worker/stage tables used by Tungsten to replicate data into Redshift.
3.5. Generating Worker / Stage tables To Be Created On Redshift:
  • Tungsten provides a utility named ddlscan to generate the Worker  / Stage tables required for the replication functionality to work.
#/opt/slave/tungsten/tungsten-replicator/bin/ddlscan -db new_year -template ddl-mysql-redshift-staging.vm > staging_ddl
  • Apply the schema generated from the above operation on Redshift.
  • Now we have the Worker / Stage tables created on Redshift.
new_year=# \dt new_year.*;
                      List of relations
  schema  |            name            | type  |    owner
----------+----------------------------+-------+--------------
 new_year | emp                        | table | redshift-usr
 new_year | emp_records                | table | redshift-usr
 new_year | stage_xxx_emp              | table | redshift-usr
 new_year | stage_xxx_emp_pkey         | table | redshift-usr
 new_year | stage_xxx_emp_records      | table | redshift-usr
 new_year | stage_xxx_emp_records_pkey | table | redshift-usr
(6 rows)

 

3.6. Starting Replication:
  • Once the slave is configured and the stage tables are created in Redshift, start the slave.
#/opt/slave/tungsten/cluster-home/bin/startall
Starting Tungsten Replicator Service...
Waiting for Tungsten Replicator Service.
running: PID:23968
  • Verify it’s working by checking the slave status.
#/opt/slave/tungsten/tungsten-replicator/bin/trepctl services
NAME              VALUE
----              -----
appliedLastSeqno: -1
appliedLatency  : -1.0
role            : slave
serviceName     : slave
serviceType     : local
started         : true
state           : ONLINE
Finished services command...

# /opt/slave/tungsten/tungsten-replicator/bin/trepctl status
Processing status command...
NAME                     VALUE
----                     -----
appliedLastEventId     : mysql-bin.000011:0000000000000510;-1
appliedLastSeqno       : 0
appliedLatency         : 251.018
autoRecoveryEnabled    : false
autoRecoveryTotal      : 0
channels               : 1
clusterName            : slave
currentEventId         : NONE
currentTimeMillis      : 1525355728202
dataServerHost         : 172.19.12.116
extensions             :
host                   : 172.19.12.116
latestEpochNumber      : 0
masterConnectUri       : thl://mysql-db-master:2112/
masterListenUri        : null
maximumStoredSeqNo     : 0
minimumStoredSeqNo     : 0
offlineRequests        : NONE
pendingError           : NONE
pendingErrorCode       : NONE
pendingErrorEventId    : NONE
pendingErrorSeqno      : -1
pendingExceptionMessage: NONE
pipelineSource         : thl://mysql-db-master:2112/
relativeLatency        : 251.202
resourcePrecedence     : 99
rmiPort                : 10002
role                   : slave
seqnoType              : java.lang.Long
serviceName            : slave
serviceType            : local
simpleServiceName      : slave
siteName               : default
sourceId               : 172.19.12.116
state                  : ONLINE
timeInStateSeconds     : 12.558
timezone               : GMT
transitioningTo        :
uptimeSeconds          : 24.407
useSSLConnection       : false
version                : Tungsten Replicator 5.2.1
Finished status command...
  • If the slave did not start properly, refer to the error log at /opt/slave/service_logs/trepsvc.log.
4.0. Testing:
  • Now both master & slave are in sync. I am going to insert a few records on the MySQL server into the emp and emp_records tables.
insert into emp values(1,'chennai','tamilnadu');
insert into emp values (2,'Banglore','Karnataka');
insert into emp_records values(1,'suresh','Noth car street');
insert into emp_records values(2,'John','South car street');
  • The records above were inserted on the master server. I then checked Redshift to verify that they were replicated.
new_year=# select * from new_year.emp;
 no | city     | state
----+----------+-----------
  1 | chennai  | tamilnadu
  2 | Banglore | Karnataka
(2 rows)

new_year=# select * from new_year.emp_records;
 no | name   | address
----+--------+------------------
  1 | suresh | Noth car street
  2 | John   | South car street
(2 rows)

 

5.0. Troubleshooting:

Replication can break due to incorrect data types. In such scenarios, we should analyze the issue, fix the datatype, and resume replication.

Sample Error :

# /opt/slave/tungsten/tungsten-replicator/bin/trepctl status
Processing status command...
NAME                     VALUE
----                     -----
appliedLastEventId     : NONE
appliedLastSeqno       : -1
appliedLatency         : -1.0
autoRecoveryEnabled    : false
autoRecoveryTotal      : 0
channels               : -1
clusterName            : slave
currentEventId         : NONE
currentTimeMillis      : 1526577299571
dataServerHost         : 172.25.12.119
extensions             :
host                   : 172.25.12.119
latestEpochNumber      : -1
masterConnectUri       : thl://mysql-db-master:2112/
masterListenUri        : null
maximumStoredSeqNo     : -1
minimumStoredSeqNo     : -1
offlineRequests        : NONE
pendingError           : Stage task failed: stage=q-to-dbms seqno=75 fragno=0
pendingErrorCode       : NONE
pendingErrorEventId    : mysql-bin.000027:0000000000072461;-1
pendingErrorSeqno      : 75
pendingExceptionMessage: CSV loading failed: schema=new table=doc_content CSV file=/opt/slave/tmp/staging/slave/staging0/yp-yp_user_doc_content-69.csv message=Wrapped org.postgresql.util.PSQLException: ERROR: Value too long for character type
                           Detail:
                           -----------------------------------------------
                           error:  Value too long for character type
                           code:      8001
                           context:   Value too long for type character varying(256)
                           query:     1566568
                           location:  funcs_string.hpp:395
                           process:   query0_75_1566568 [pid=10475]
                           -----------------------------------------------
                          (/opt/slave/tungsten/tungsten-replicator/appliers/batch/redshift.js#256)
pipelineSource         : UNKNOWN
relativeLatency        : -1.0

This error explains that a value is too long for the character data type of the table doc_content in the new database on Redshift.

  • In MySQL, the table doc_content contains the column “content” with the TEXT data type.
  • In Redshift, content is likewise a TEXT column.
  • Here is the catch: in Redshift, the TEXT datatype is equivalent to varchar(256).
  • So writing anything longer than 256 characters on MySQL will break replication, as illustrated below.
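
For illustration, a write like the following on the MySQL side would be enough to reproduce the break (a hypothetical repro; it assumes the doc_content table from the error above, with a TEXT column named content):

-- Any value longer than 256 characters overflows varchar(256) on the Redshift side
INSERT INTO doc_content (content) VALUES (REPEAT('a', 300));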

So the solution is to increase the column length beyond varchar(256). In Redshift, however, simply changing the datatype in place will not work:

yp=# alter table new.doc_content ALTER COLUMN content TYPE varchar(2000);
ERROR:  ALTER COLUMN TYPE is not supported
  • We can’t increase the column size in Redshift without recreating the table.
  • The alternative solution is to add a new column with the required changes, move the data over, and then drop the old column.
ALTER TABLE yp.yp_user_doc_content ADD COLUMN content_new VARCHAR(2000);
UPDATE yp.yp_user_doc_content SET content_new = content;
ALTER TABLE yp.yp_user_doc_content DROP COLUMN content;
ALTER TABLE yp.yp_user_doc_content RENAME COLUMN content_new TO content;
  • Now we’re good to restart the replication again.
6.0. Conclusion:

Tungsten Replicator is a great tool when it comes to replicating data between heterogeneous data sources. Once you understand how it works, it is easy to configure and operate.

MySQL Connector/ODBC 5.3.11 has been released

Dear MySQL users,

MySQL Connector/ODBC 5.3.11, a new version of the ODBC driver for the
MySQL database management system, has been released.

The available downloads include both a Unicode driver and an ANSI
driver based on the same modern codebase. Please select the driver
type you need based on the type of your application – Unicode or ANSI.
Server-side prepared statements are enabled by default. The driver is suitable
for use with any MySQL version from 5.5.
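
As an illustration of how driver options are passed (a hedged example; server, credentials, and database are placeholders), a DSN-less connection string for the Unicode driver could look like this, with NO_SSPS=1 shown only to demonstrate how server-side prepared statements can be switched off:

DRIVER={MySQL ODBC 5.3 Unicode Driver};SERVER=localhost;PORT=3306;DATABASE=test;UID=myuser;PWD=mypass;NO_SSPS=1;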

This is the fifth release of the MySQL ODBC driver conforming to the
ODBC 3.8 specification. It contains implementations of key 3.8
features, including self-identification as an ODBC 3.8 driver,
streaming of output parameters (supported for binary types only), and
support of the SQL_ATTR_RESET_CONNECTION connection attribute (for the
Unicode driver only).

The release is now available in source and binary form for a number of
platforms from our download pages at

http://dev.mysql.com/downloads/connector/odbc/5.3.html

For information on installing, please see the documentation at

http://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation.html

Changes in MySQL Connector/ODBC 5.3.11 (2018-07-30, General Availability)

Functionality Added or Changed

  * Based on MySQL Client library 5.7.23, it has support for the new caching_sha2_password authentication method, which allows connecting to MySQL Server 8.0.

Bugs Fixed

  * The Unicode version now uses the UTF8MB4 character set as a transport character set between the MySQL server and the ODBC driver. Because text conversions requested by the ODBC user with the CHARSET connection option are done inside the ODBC Driver after the data is received from the server, it now converts text data as UTF8MB4 to USER_CHARSET instead of UTF8 to USER_CHARSET. (Bug #28204756)

  * Connections to MySQL Server 8.0 could yield "Source character set not supported by client" errors during sorting and case-sensitive operations. Now known UTF8MB4 collations are used for these operations, such as utf8mb4_general_ci (known to libmysqlclient 5.7) instead of utf8mb4_tolower_ci. (Bug #28116892, Bug #90984)

  * MySQL Server 8.0 connections could fail with unsupported character sets and collations as utf8mb4_xxxxxx collations reported by MySQL Server 8.0 with numbers greater than 250 were unknown to libmysqlclient 5.7. Because collations affect sorting but not the data conversion, they can be substituted with known collations such as utf8mb4_general_ci (45). (Bug #28116892, Bug #90984)

  * Connector/ODBC is now built with MySQL client library 5.7.23, and commercial builds also use an updated OpenSSL version (v1.0.2o). Issues fixed in the new OpenSSL version are described at http://www.openssl.org/news/vulnerabilities.html. (Bug #27829777)

  * Reported errors that occurred while executing multiple statements with a single query were generic and without context. For example, SQLMoreResults might return "unhandled error from mysql_next_result()" instead of the error reported by MySQL Server. (Bug #11757423, Bug #49466)

On Behalf of Oracle/MySQL Release Engineering Team,

Kent Boortz

Upcoming Webinar Tuesday, 7/31: Using MySQL for Distributed Database Architectures

Please join Percona’s CEO, Peter Zaitsev as he presents Using MySQL for Distributed Database Architectures on Tuesday, July 31st, 2018 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).

Register Now

 

In modern data architectures, we’re increasingly moving from single-node design systems to distributed architectures using multiple nodes – often spread across multiple databases and multiple continents. Such architectures bring many benefits (such as scalability and resiliency), but can also bring a lot of pain if incorrectly designed and executed.

In this presentation, we will look at how we can use MySQL to engineer distributed multi-node systems.

Register for the webinar.

Peter Zaitsev, CEO and Co-Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 140 professionals in 30 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Inc. 5000 named Percona to their list in 2013, 2014, 2015 and 2016. Peter was an early employee at MySQL AB, eventually leading the company’s High-Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High-Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. He has also been tapped as a contributor to Fortune and DZone, and his ebook Practical MySQL Performance Optimization is one of Percona’s most popular downloads.

 

The post Upcoming Webinar Tuesday, 7/31: Using MySQL for Distributed Database Architectures appeared first on Percona Database Performance Blog.

Mastering Continuent Clustering Series: Tungsten and SELinux, a Case Study

In this blog post, we talk about what happened during an installation of the Tungsten Cluster into an environment where SELinux was running and misconfigured.

An attempt to execute `tpm install` on v5.3.2 recently failed with the below error:

ERROR >> node3_production_customer_com >> Unable to run 'sudo systemctl status mysqld.service' or the database server is not running (DatasourceBootScriptCheck) Update the /etc/sudoers file or disable sudo by adding --enable-sudo-access=false

Worse, this customer reported that this appeared as a WARNING only in Dev and Staging tests. So we checked, and it seemed we were able to access systemctl properly:

shell> sudo systemctl status mysqld.service
● mysqld.service - MySQL Percona Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: activating (start-post) since Tue 2018-06-19 17:46:19 BST; 1min 15s ago
  Process: 448996 ExecStart=/usr/bin/mysqld_safe (code=exited, status=0/SUCCESS)
  Process: 448964 ExecStartPre=/usr/bin/mysql-systemd pre (code=exited, status=0/SUCCESS)
 Main PID: 448996 (code=exited, status=0/SUCCESS); Control: 448997 (mysql-systemd)
   CGroup: /system.slice/mysqld.service
           └─control
             ├─448997 /bin/bash /usr/bin/mysql-systemd post
             └─450505 sleep 1

Jun 19 17:46:19 node3.production.customer.com systemd[1]: Starting MySQL Percona Server...
Jun 19 17:46:19 node3.production.customer.com mysqld_safe[448996]: 180619 17:46:19 mysqld_safe Logging to '/var/log/mysqld.log'.
Jun 19 17:46:19 node3.production.customer.com mysqld_safe[448996]: 180619 17:46:19 mysqld_safe Starting mysqld daemon with databases from /dbdata/data01
Jun 19 17:46:20 node3.production.customer.com mysqld_safe[448996]: 180619 17:46:20 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

The problem with the systemctl start-up for MySQL turned out to be related to SELinux, with the /data filesystem and the non-standard port having the wrong contexts.

The solution was to inform SELinux about the additional disk and port resources:

  • Ensure the file contexts are set correctly for SELinux, in this case allowing MySQL data to be stored in a non-standard location (/data):

semanage fcontext -a -t etc_runtime_t /data
restorecon -Rv /data/

semanage fcontext -a -t mysqld_db_t "/data(/.*)?"
restorecon -Rv /data/*

  • By default, MySQL uses port 3306, but our best practice is to change it to 13306 to prevent accidental connection directly to the database, bypassing the required Connector. If using a non-standard port for MySQL and SELinux is enabled, you must also change the port context:

semanage port -a -t mysqld_port_t -p tcp 13306

In future articles, we will continue to cover more advanced subjects of interest!

Questions? Contact Continuent

How to Manage Multiple MySQL Binary Installations with SYSTEMD

This blog will go into how to manage multiple MySQL binary installations with SYSTEMD using the systemctl command. With package installations of MySQL using YUM or APT, it’s quick and easy to manage your server’s state by executing systemctl commands to stop, start, restart, and status. But what do you do when you want to install MySQL from the binary distribution, with a single instance or with multiple MySQL instances? You can still use SYSTEMD to easily manage the MySQL instances. All commands and testing have been done on Debian, and some details may change in other distros.

MySQL preparation

These are the steps to set up MySQL with multiple instances. If you currently have a MySQL server package installation using YUM or APT, it will need to be removed first; make sure you keep the client. I also had to install some base packages for MySQL on Debian:

apt install libaio1 libaio-dev numactl

Download MySQL binary installation

Download the compressed tar file binary installation and extract to /usr/local, and create a soft link for mysql to the extracted binaries.

Example :

wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.22-linux-glibc2.12-x86_64.tar.gz
tar zxvf mysql-5.7.22-linux-glibc2.12-x86_64.tar.gz -C /usr/local
ln -s /usr/local/mysql-5.7.22-linux-glibc2.12-x86_64/ /usr/local/mysql

Example result:

root@binary:/usr/local# ls -al
total 44
drwxrwsr-x 11 root staff 4096 Jun 19 17:53 .
drwxr-xr-x 10 root root  4096 Apr 17 18:09 ..
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 bin
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 etc
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 games
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 include
drwxrwsr-x  4 root staff 4096 Apr 17 18:22 lib
lrwxrwxrwx  1 root staff    9 Apr 17 18:09 man -> share/man
lrwxrwxrwx  1 root staff   47 Jun 19 17:53 mysql -> /usr/local/mysql-5.7.22-linux-glibc2.12-x86_64/
drwxr-sr-x  9 root staff 4096 Jun 19 17:52 mysql-5.7.22-linux-glibc2.12-x86_64
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 sbin
drwxrwsr-x  7 root staff 4096 Apr 17 18:22 share
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 src

Export path and aliases

Create an export of the MySQL path and aliases to log in to the MySQL instances using pre-made client config files. The password doesn’t matter right now as it will get updated in a couple of steps. Update the socket for each config file so they are unique because this needs to be different for each MySQL instance. Reboot your server to ensure that the configuration is loaded during boot time correctly. Run “echo $PATH” after reboot and validate that the new path is configured to include /usr/local/mysql:/usr/local/mysql/bin.

Example :

echo "export PATH=$PATH:/usr/local/mysql:/usr/local/mysql/bin" >> /etc/profile.d/mysql.sh echo "alias mysql1='mysql --defaults-file=/etc/instance1_client.cnf'" >> /etc/profile.d/mysql.sh echo "alias mysql2='mysql --defaults-file=/etc/instance2_client.cnf'" >> /etc/profile.d/mysql.sh

Example client config : /etc/instance1_client.cnf

[client]
user=root
password='mysqlpass'
socket=/var/run/mysql/mysqld_instance1.sock

Example path :

root@binary:~# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/mysql:/usr/local/mysql/bin

Create user/group, paths, and MySQL permissions

Next, create the user and group that will be used by the MySQL services. Then create the paths and set the proper permissions.

Example :

groupadd mysql
useradd -r -g mysql -s /bin/false mysql
mkdir -p /mysql/data/instance1
mkdir -p /mysql/data/instance2
mkdir -p /mysql/logs/instance1
mkdir -p /mysql/logs/instance2
mkdir /var/run/mysql/
chown mysql:mysql /var/run/mysql
chown -R mysql:mysql /mysql

Create MySQL configuration for each instance

Below is an example of the first instance I placed in /etc/my.instance1.cnf. My naming convention is instanceX. As an example, my first instance is instance1, and my second instance is instance2. I then place that naming convention in the configuration filename my.instance1.cnf. I could have done my.cnf.instance1 or instance1.my.cnf.

Having the naming convention in the configuration files is very important as it will come into effect with the configuration of SYSTEMD. I also set my naming convention in the PID file because this will also be used by configuration of SYSTEMD. Make sure the socket you have configured in your configuration files matches what was in your client configuration files in the previous step.

Example :

[mysqld]
## Server
basedir = /usr/local/mysql
datadir = /mysql/data/instance1
binlog_format = MIXED
log_slave_updates = 1
log-bin = /mysql/logs/instance1/mysql-bin
relay-log = /mysql/logs/instance1/relay-bin
log_error = /mysql/logs/instance1/mysql_error.log
slow_query_log_file = /mysql/logs/instance1/slow_query.log
socket = /var/run/mysql/mysqld_instance1.sock
pid-file = /var/run/mysql/mysqld_instance1.pid
port = 3306
user = mysql
server-id = 1

Initialize MySQL

Initialize your database and get the temporary password for the database from the error log file so you can log in and update the passwords after the MySQL instances are started. Next, update the MySQL client configuration files (/etc/instance1_client.cnf and /etc/instance2_client.cnf in my example) with the temporary password. This will make it simpler to log in and change the initial password. Repeat this for each instance.

Example :

root@binary:/usr/local# /usr/local/mysql/bin/mysqld --defaults-file=/etc/my.instance1.cnf --initialize

Database files are present in the data directory:

root@binary:/usr/local# ls -al /mysql/data/instance1
total 110628
drwxr-xr-x 5 mysql mysql     4096 Jun 22 13:19 .
drwxr-xr-x 4 mysql mysql     4096 Jun 19 18:04 ..
-rw-r----- 1 mysql mysql       56 Jun 22 13:18 auto.cnf
-rw-r----- 1 mysql mysql      417 Jun 22 13:19 ib_buffer_pool
-rw-r----- 1 mysql mysql 12582912 Jun 22 13:19 ibdata1
-rw-r----- 1 mysql mysql 50331648 Jun 22 13:19 ib_logfile0
-rw-r----- 1 mysql mysql 50331648 Jun 22 13:18 ib_logfile1
drwxr-x--- 2 mysql mysql     4096 Jun 22 13:18 mysql
drwxr-x--- 2 mysql mysql     4096 Jun 22 13:18 performance_schema
drwxr-x--- 2 mysql mysql    12288 Jun 22 13:19 sys

Capture the temporary root password:

root@binary:/usr/local# cat /mysql/logs/instance1/mysql_error.log
2018-06-22T17:18:50.464555Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-06-22T17:18:50.978714Z 0 [Warning] InnoDB: New log files created, LSN=45790
2018-06-22T17:18:51.040350Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2018-06-22T17:18:51.129954Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 5506e36e-7640-11e8-9b0f-0800276bf3cb.
2018-06-22T17:18:51.132700Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2018-06-22T17:18:51.315917Z 1 [Note] A temporary password is generated for root@localhost: ptraRbBy<6Wm

SYSTEMD configuration

Now that the MySQL instances are prepared and ready to be started, we can configure SYSTEMD so that systemctl can manage the MySQL instances.

SYSTEMD MySQL service

Create the SYSTEMD base configuration at /etc/systemd/system/mysql@.service and place the following contents inside. This is where the naming convention of the MySQL instances comes into effect. In the SYSTEMD configuration file, %I will be replaced with the naming convention that you use. You want to make sure that the PIDfile and the MySQL configuration file in the ExecStart will match up with your previous configurations. You only need to create one SYSTEMD configuration file. As you enable each service in the next step, SYSTEMD will make copies of the configuration for you and replace the %I accordingly with your naming convention.

Example /etc/systemd/system/mysql@.service :

[Unit]
Description=Oracle MySQL
After=network.target

[Service]
Type=forking
User=mysql
Group=mysql
PIDFile=/var/run/mysql/mysqld_%I.pid
ExecStart=
ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.%I.cnf --daemonize
Restart=on-failure
RestartPreventExitStatus=1

[Install]
WantedBy=multi-user.target

Enable and start the MySQL instances

Enable the service, placing the naming convention after the @ symbol using the systemctl command. SYSTEMD will make a copy of the configuration file in the previous step and replace the %I with the text after the @. When viewing the status of the service, you will see that the process is using the correct configuration file based upon the naming convention. Repeat for each instance.

Example :

systemctl enable mysql@instance1
systemctl start mysql@instance1

root@binary:~# systemctl status mysql@instance1
● mysql@instance1.service - Oracle MySQL
   Loaded: loaded (/etc/systemd/system/mysql@.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-06-22 14:51:48 EDT; 10min ago
  Process: 11372 ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.instance1.cnf --daemonize (code=exited, status=0/SUCCESS)
 Main PID: 11374 (mysqld)
    Tasks: 28 (limit: 4915)
   CGroup: /system.slice/system-mysql.slice/mysql@instance1.service
           └─11374 /usr/local/mysql/bin/mysqld --defaults-file=/etc/my.instance1.cnf --daemonize

Jun 22 14:51:48 binary systemd[1]: Starting Oracle MySQL...
Jun 22 14:51:48 binary systemd[1]: Started Oracle MySQL.

Example PID and Socket files :

root@binary:/var/log# ls -al /var/run/mysql
total 16
drwxr-xr-x  2 mysql mysql 160 Jul 20 10:33 .
drwxr-xr-x 19 root  root  640 Jul 20 10:33 ..
-rw-r-----  1 mysql mysql   6 Jul 20 10:33 mysqld_instance1.pid
srwxrwxrwx  1 mysql mysql   0 Jul 20 10:33 mysqld_instance1.sock
-rw-------  1 mysql mysql   6 Jul 20 10:33 mysqld_instance1.sock.lock
-rw-r-----  1 mysql mysql   6 Jul 20 10:33 mysqld_instance2.pid
srwxrwxrwx  1 mysql mysql   0 Jul 20 10:33 mysqld_instance2.sock
-rw-------  1 mysql mysql   6 Jul 20 10:33 mysqld_instance2.sock.lock

Managing MySQL

Now that we have started the two MySQL instances, we can log in to them using the aliases that we created pointing to the client configuration files that we updated to use the temporary root password. Next, we can log in and change the initial root password, and then update the configuration files accordingly with the new credentials.

Change root password

Log in to MySQL using the aliases mysql1 and mysql2, which we configured previously, and change the root password. Repeat for each instance.

Example :

mysql1

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';
mysql> exit

Update MySQL client configuration

Update the MySQL client configuration files (/etc/instance1_client.cnf and /etc/instance2_client.cnf in my example) with the new passwords. Repeat for each instance.

Example client config /etc/instance1_client.cnf :

[client]
user=root
password='MyNewPass'
socket=/var/run/mysql/mysqld_instance1.sock

Conclusion

Configuring MySQL to be controlled by systemctl makes it much easier to manage your MySQL instances. This process also allows for easy configuration of multiple instances, even beyond two. Keep in mind that when configuring multiple MySQL instances on a single server, you should allocate memory for each of the MySQL instances accordingly, leaving room for overhead.
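
As a rough sketch (the values are assumptions for illustration, not recommendations), the memory budget is set per instance in each configuration file, for example in /etc/my.instance1.cnf:

[mysqld]
# Size each buffer pool so that the sum across all instances,
# plus per-session buffers and OS overhead, fits in physical RAM
innodb_buffer_pool_size = 2G

The second instance would then get its own value in /etc/my.instance2.cnf.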

5 Reasons and 101 Bugfixes – Why You Should Use Hibernate 5.3



Hibernate 5.3 has been available for a little more than 3 months now, and last week the team released the 3rd maintenance release. So, it’s about time to take a closer look at the new version.

In addition to more than 100 bug fixes, Hibernate 5.3 includes a few features and improvements that might motivate you to update your current project.

 

Improved Memory consumption

Let’s start with probably the best reason to update to Hibernate 5.3: It consumes less memory than the previous versions.

The improvement was triggered by an interesting discussion in the Hibernate forum. A user reported that he tried to migrate his application from Hibernate 3.6 to 5.3. During the migration, he recognized that the memory consumption of Hibernate’s SessionFactory went up to 950MB.

The issue was caused by the size and number of EntityLoaders that Hibernate instantiated. It was fixed in 2 steps:

  1. HHH-12556 – Similar EntityLoaders now share some internal data structures. This reduced the memory consumption of the application by ~50%.
    The fix was backported to Hibernate 5.2.18. So, if you’re using Hibernate 5.2 and don’t want to upgrade to 5.3, you should at least update to 5.2.18.
  2. HHH-12558 – Hibernate supports a bunch of different lock modes with specific loaders. In the past, it instantiated all loaders eagerly. Hibernate 5.3 only instantiates the 2 most common ones and loads all others lazily.

At the end of the discussion, the user who reported the issue wrote that the improved Hibernate version only used ~ 250MB. So, for his application, these two changes reduced the memory consumption by ~70%.

I obviously can’t promise that it will be equally effective for your project. However, the reported improvement is so enormous that you should at least give it a try.

 

JPA 2.2 compliance

Hibernate 5.3.0 is the first version that’s fully compliant with JPA 2.2. However, because the support for all the interesting features was already added in Hibernate 5.0, 5.1 and 5.2, and I already wrote extensive tutorials about all of them, I will not dive any deeper into this topic.

If you’re not already familiar with JPA 2.2, you can read more about it in the following articles:

 

Small Improvements

Implicit ID generator definitions

If you generate your primary key values, you’re probably aware of Hibernate’s @SequenceGenerator and @TableGenerator annotations. You can use them to define which sequence or table Hibernate shall use to generate the primary key value.

Here is a typical example of a primary key mapping that tells Hibernate to use the database sequence author_seq to generate the unique primary key values.

@Entity
public class Author {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "author_seq")
    @SequenceGenerator(name = "author_seq", sequenceName = "author_seq")
    private Long id;

    ...
}

The mapping definition consists of 2 parts:

  1. The @GeneratedValue annotation defines which generation strategy Hibernate shall use and references a custom generator.
  2. The @SequenceGenerator annotation specifies the generator and tells Hibernate the name of the sequence you want to use.

As you can see in the code snippet, I use the same name for the generator and the database sequence. That’s a pretty common and verbose mapping.

If your mapping looks the same, I have good news for you: You no longer need to define the generator if your database sequence or table has the same name as your @SequenceGenerator or @TableGenerator.

That enables me to shorten the previous mapping and to remove the @SequenceGenerator definition.

@Entity
public class Author {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "author_seq")
    private Long id;

    ...
}

 

Support for MySQL 8 SKIP LOCKED and NOWAIT

MySQL 8.0.1 added the SKIP LOCKED and NOWAIT feature to provide different options to handle locked rows during read-operations. The MySQL Server team explained them in great detail on their blog: MySQL 8.0.1: Using SKIP LOCKED and NOWAIT to handle hot rows.

Here’s a quick description of both features:

  • SKIP LOCKED enables you to perform a non-deterministic read that skips all locked rows. That means that locked rows are missing in your result set.
  • If you require a deterministic read but don’t want to wait for the locks to be released, you can use the NOWAIT feature. It causes your query to fail immediately if any records in your result set are locked.

The MySQL8Dialect included in Hibernate 5.3 supports both these features so that you can use them in your queries.
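
Under the hood this maps to plain MySQL 8 SQL like the following (a hedged sketch; the seats table and its columns are made up for illustration):

-- Session 1 locks a row
START TRANSACTION;
SELECT * FROM seats WHERE seat_no = 42 FOR UPDATE;

-- Session 2 with SKIP LOCKED: returns the unlocked rows and silently omits seat 42
SELECT * FROM seats WHERE seat_no IN (41, 42, 43) FOR UPDATE SKIP LOCKED;

-- Session 2 with NOWAIT: fails immediately because seat 42 is locked
SELECT * FROM seats WHERE seat_no IN (41, 42, 43) FOR UPDATE NOWAIT;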

 

Apply AttributeConverter when calling a function

AttributeConverters provide a standardized, easy way to define the mapping of a Java type. You can either use them to add support for unsupported classes or to customize the mapping of an already supported Java type.

The following code snippet shows an example from my JPA Tip: How to map a Duration attribute. JPA 2.2 doesn’t provide a mapping for java.time.Duration objects. If you want to map such an object, you can use this AttributeConverter to map it to a Long.

@Converter(autoApply = true)
public class DurationConverter implements AttributeConverter<Duration, Long> {

    Logger log = Logger.getLogger(DurationConverter.class.getSimpleName());

    @Override
    public Long convertToDatabaseColumn(Duration attribute) {
        log.info("Convert to Long");
        return attribute.toNanos();
    }

    @Override
    public Duration convertToEntityAttribute(Long duration) {
        log.info("Convert to Duration");
        return Duration.of(duration, ChronoUnit.NANOS);
    }
}

After you apply the AttributeConverter to an attribute, or automatically apply it to all attributes of a specific type, it gets used transparently:

  • during all lifecycle state transitions,
  • to map the result of a query and
  • when used in path expressions in a JPQL or CriteriaQuery.

However, for whatever reason, the usage of converted attribute values as function parameters is explicitly undefined.

Previous Hibernate versions didn’t convert the attribute value if you referenced it as a function parameter. This changed with Hibernate 5.3.2. It now converts the attribute value before it provides it as a function parameter.

 

Support for Java 9 and preparations for Java 11

Beginning with Hibernate 5.3.0, all Hibernate modules specify a Java 9 module name following the pattern org.hibernate.orm.${module-name}, e.g., the hibernate-core module defines the name org.hibernate.orm.core.

The Hibernate team also updated the dependencies, prepared the build process, and ran their test suite with the latest JDK 11 build. So, we can hope for a smooth transition to JDK 11.

 

Conclusion

Should you update to Hibernate 5.3?

You probably already know my answer. It’s a clear yes! The improved memory consumption alone is a great reason to do the update.

However, please keep in mind that every update has its risks and that you obviously shouldn’t update any project dependency without testing it carefully.

So, what about you? Did you already update your application to use Hibernate 5.3 or will you update it in the near future?

The post 5 Reasons and 101 Bugfixes – Why You Should Use Hibernate 5.3 appeared first on Thoughts on Java.

32-bit ODBC Driver 8.0 for Windows is back!

The previous release, MySQL Connector/ODBC 8.0.11, caused confusion for many users because of its lack of support for the Windows 32-bit platform, which is still very popular.

The main reason for it was a change in the MySQL Server 8.0 and MySQL client 8.0 library design, which regarded the Windows 32-bit platform as obsolete. Unfortunately, it had a far-reaching adverse impact on the family of MySQL Connectors based on libmysqlclient, including MySQL Connector/ODBC.

We listened to the voice of the community and did our best to bring back the 32-bit ODBC Driver for Windows to allow using it in ASP, VB, 32-bit MS Office applications etc.

Now we are happy to announce the return of the fully featured 32-bit version of ODBC Driver 8.0 for Windows, with full support for MySQL Server 8.0. The community version of the driver will soon be available to the public at the following address:

https://dev.mysql.com/downloads/connector/odbc/

On behalf of the entire MySQL Team I would like to thank everyone who expressed their concerns in the forums and blogs.

A Kind Introduction MySQL Windowing Functions Part I


Windowing functions are a critical tool for grouping rows of data that are related to other rows. But they go far beyond the regular aggregate functions found in MySQL 5.7 and earlier. In MySQL 8 you do not have to collapse all the information down into a single output row. Each row can retain its individual identity but the server can analyze the data as a unit.
Statistics and Damned Lies

Finding the total Population of the District Texas from the world.city table is simple.

SQL> select District, sum(Population)  
from city where district = 'Texas';
+----------+-----------------+
| District | sum(Population) |
+----------+-----------------+
| Texas    |         9208281 |
+----------+-----------------+
1 row in set (0.0068 sec)

Simple. But try to expand it to the entire USA and you get problems.

SQL> select District, sum(Population)  
from city where CountryCode = 'USA';
+----------+-----------------+
| District | sum(Population) |
+----------+-----------------+
| New York |        78625774 |
+----------+-----------------+
1 row in set (0.0046 sec)

The query lumps the populations of all the cities into one row labeled New York. This is not the desired answer. By the way, the only time New York (and New York City in particular) has 78 million people is when they are all on the road in front of me when I am trying to take a taxi to an airport.
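
The classic fix is a GROUP BY, which produces correct per-district totals but collapses each district into a single output row (result abbreviated to the districts shown later in this post):

SQL> select District, sum(Population)
     from city where CountryCode = 'USA'
     group by District;
+----------+-----------------+
| District | sum(Population) |
+----------+-----------------+
| Alabama  |          801519 |
| Alaska   |          260283 |
| Arizona  |         3178903 |
| ...      |             ... |
+----------+-----------------+

The individual city rows are gone, which is exactly what windowing functions let us keep.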

With a windowing function it is easy to iterate over a subset of the entire data.  Imagine reading the data through a page with a section cut out to form a window that is just the right size to read only the group of rows desired!

SQL> select name, District, Population, sum(Population) over() as p0, 
     District, sum(population) over( partition by District) as p1
     from city where CountryCode = 'USA' limit 10;
+------------+----------+------------+----------+----------+---------+
| name       | District | Population | p0       | District | p1      |
+------------+----------+------------+----------+----------+---------+
| Birmingham | Alabama  |     242820 | 78625774 | Alabama  |  801519 |
| Montgomery | Alabama  |     201568 | 78625774 | Alabama  |  801519 |
| Mobile     | Alabama  |     198915 | 78625774 | Alabama  |  801519 |
| Huntsville | Alabama  |     158216 | 78625774 | Alabama  |  801519 |
| Anchorage  | Alaska   |     260283 | 78625774 | Alaska   |  260283 |
| Phoenix    | Arizona  |    1321045 | 78625774 | Arizona  | 3178903 |
| Tucson     | Arizona  |     486699 | 78625774 | Arizona  | 3178903 |
| Mesa       | Arizona  |     396375 | 78625774 | Arizona  | 3178903 |
| Glendale   | Arizona  |     218812 | 78625774 | Arizona  | 3178903 |
| Scottsdale | Arizona  |     202705 | 78625774 | Arizona  | 3178903 |
+------------+----------+------------+----------+----------+---------+
10 rows in set (0.0075 sec)

The above query has two windows. The keyword to notice is OVER(). The first window is OVER() with nothing within the parentheses, which should be understood to mean 'the window is open wide enough to see all the data'. So sum(Population) over() as p0 will give us the sum of all the Population columns and name the column p0.

The second window, sum(population) over (partition by District) as p1, provides the Population of each District summed in a column named p1.

Indianapolis 500 winning Lotus driven by Jimmy Clark. Before this car, all other winners of the race had the engine in front of the driver. What does this have to do with windowing functions, databases, SQL, or anything else in this blog post? Well, nothing, but while at Detroit PHP I stopped by the Henry Ford Museum and took this picture of a very iconic car that changed Indy forever. Windowing functions can change your SQL skills.
Different Types of Windows

The OVER clause has two forms - window_spec and window_name. The window_spec option declares the specified window within the parentheses, while window_name refers to a window defined elsewhere in the query. The second window in the above query (where the output is named p1) is a window_spec.

So here is a window_name example:

SQL> SELECT District, Sum(Population) OVER w 
    FROM city where CountryCode ='USA' 
    WINDOW w AS (partition by District) limit 10;
+----------+------------------------+
| District | Sum(Population) OVER w |
+----------+------------------------+
| Alabama  |                 801519 |
| Alabama  |                 801519 |
| Alabama  |                 801519 |
| Alabama  |                 801519 |
| Alaska   |                 260283 |
| Arizona  |                3178903 |
| Arizona  |                3178903 |
| Arizona  |                3178903 |
| Arizona  |                3178903 |
| Arizona  |                3178903 |
+----------+------------------------+
10 rows in set (0.0032 sec)

The window was given the name w and then defined as WINDOW w AS (partition by District). By the way, the declaration within the window_name is itself a window_spec.

So what is a window_spec??   The definition from the manual (https://dev.mysql.com/doc/refman/8.0/en/window-functions-usage.html) informs that a window_spec is defined as:

window_spec: [window_name] [partition_clause] [order_clause] [frame_clause]

The window_name is an alias that can be used elsewhere in the query.

The partition_clause is how the data is to be divided up into groups. If this is unspecified it makes one big group.  

The order_clause provides the sort_order. Remember the ORDER BY from regular SQL queries?  This is how you can order the groups.

And the frame_clause determines sub groups within the big group.

And as it so often happens, there is a lot of material to be covered under the frame_clause and that will be in a future blog.
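
As a small preview of the order_clause (a sketch against the same world.city data; output omitted here), adding ORDER BY inside the window turns the per-district sum into a running total:

SQL> select name, District, Population,
     sum(Population) over (partition by District order by Population desc) as running
     from city where CountryCode = 'USA';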





MySQL 8.0.12: Instant ALTER TABLE

There are many nice changes included in the MySQL 8.0.12 release that was published a couple of days ago. One of the most exciting is the ability to make instant schema changes to tables. This blog will look into why I think that is a stand-out change.

I will not go into details with the implementation other than noting that the new data dictionary in MySQL 8.0 has allowed for a nice implementation of the patch that was contributed by the Tencent Games DBA Team. If you are interested in learning more about the implementation, I will recommend you to read the blog by Bin Su (MySQL 8.0: InnoDB now supports Instant ADD COLUMN) and the worklog implementing the feature (WL#11250: Support Instant Add Column).

Thanks to the Tencent Games DBA Team who contributed the patch for this feature.

Why Do We Need Instant ALTER TABLE?

As you may know, InnoDB has supported online schema changes since version 5.6. So, a fair thought may be: why is that no longer good enough? The thing is that while online schema changes are very nice and allow you to make changes to the data (DML statements) while the schema change is made, there are still some problems:

  • Online ALTER TABLE still requires a meta data lock for the duration of the operation. That is, it is not possible to make other schema changes until the ALTER TABLE has completed.
  • In replication setups, the SQL thread handling the online schema change will still block other changes coming through the replication channel from occurring. This means that an online schema change that takes an hour suddenly makes the replication lag an hour.
  • This is compounded in chained replication setups where the delay on the last instances in the topology is multiplied by the number of times the change has been replicated. Suddenly this schema change that did not really affect the replication master becomes a major pain and delays the deployment of the application upgrade that relies on the schema changes.
  • Even though the change is online, it still is heavy on resources: disk for storing the extra copy of the table when the change cannot be made in-place, CPU and disk activity that may cause other queries to become slow, etc.
  • There is a limit to the amount of DML changes that can be made during the schema change. If you make too many changes, the schema change will fail.

So, in short, online schema changes are only the first stop. Instant schema changes are the future.

Which Changes Can Be Made Instantly?

While it would be great if all schema changes could be made instantly, unfortunately that is not the case. Just as not all schema changes can be made online, there are limitations to the new instant feature. In short, the changes that can be made instantly must only affect the metadata of the table. The metadata is stored in the data dictionary. The changes that can be made with the instant ALTER TABLE feature as per 8.0.12 are:

  • Adding a new column as the last column in the table.
  • Adding a generated virtual column.
  • Dropping a generated virtual column.
  • Setting a default value for an existing column.
  • Dropping the default value for an existing column.
  • Changing the list of values allowed for a column with the ENUM or SET data types. A requirement is that the storage size does not change for the column.
  • Change whether the index type is set explicitly for an existing index.

And who knows, maybe later the feature can be extended to cover more changes. There are also a few limitations that are good to be aware of:

  • The row format cannot be COMPRESSED.
  • The table cannot have a fulltext index.
  • Tables in the data dictionary cannot use the instant algorithm.
  • Temporary tables are not supported.
How to Ensure You are Using the Expected Algorithm?

One problem with schema changes is that there are different algorithms depending on the schema change. Currently there are three different algorithms:

  • INSTANT: the change completes very quickly (yes not quite instantly) as only the metadata in the data dictionary needs to be updated.
  • INPLACE: the changes are made within the existing table, i.e. a complete table copy is avoided.
  • COPY: the table is copied into a new copy with the new definition.

By default, MySQL chooses the algorithm doing the least work. That is, INSTANT if that is supported, otherwise INPLACE if that is supported, and finally COPY. Additionally, there is the concept of locking which can be set to either NONE, SHARED, or EXCLUSIVE.

So, how do you ensure you are not ending up with a copying ALTER TABLE taking exclusive locks when you thought the operation was going to be instant? The answer is to explicitly set the algorithm and lock type. That way, MySQL will throw an error if your schema change is not compatible with the requested algorithm. The same principle can be used to force a full rebuild of the table if you for example have a small table and don’t see a reason to worry about instantly added columns (see also more later).

I will give some examples of specifying the ALGORITHM and LOCK options to ALTER TABLE later. However, we first need an example table to play with.
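
Before that, here is what the failure mode looks like when the requested algorithm cannot be used, for example when modifying a column's data type (a hedged sketch using the testtbl table created below; the exact error text may vary):

SQL> ALTER TABLE testtbl
 ...       MODIFY val varchar(100) NOT NULL,
 ...       ALGORITHM=INSTANT;
ERROR: 1846: ALGORITHM=INSTANT is not supported. Reason: ... Try ALGORITHM=COPY/INPLACE.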

Tip: Always specify the ALGORITHM and LOCK options explicitly to avoid unpleasant surprises.

Creating an Example Table

For the purpose of the upcoming examples of instant schema changes, it is necessary to have a table to play with. The table will be the testtbl table in the my_schema schema. The table can be created and populated with sample data by using MySQL Shell – this allows us to take advantage of the support for scripting. The table will have one million rows.

Note: To limit the amount of scrolling, the prompt in MySQL Shell has been changed to just show the mode (JS, PY, SQL).

JS> \connect root@localhost
Creating a session to 'root@localhost'
Please provide the password for 'root@localhost': ********
Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No):
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 17 (X protocol)
Server version: 8.0.12 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
JS> \sql
Switching to SQL mode... Commands end with ;
SQL> CREATE SCHEMA my_schema;
Query OK, 1 row affected (0.0509 sec)
SQL> \use my_schema
Default schema set to `my_schema`.
Fetching table and column names from `my_schema` for auto-completion... Press ^C to stop.
SQL> CREATE TABLE testtbl (
 ...   id int unsigned NOT NULL auto_increment,
 ...   val varchar(36) NOT NULL,
 ...   PRIMARY KEY (id)
 ... ) ENGINE=InnoDB;
Query OK, 0 rows affected (0.0497 sec)
SQL> \py
Switching to Python mode...
Py> \use my_schema
Default schema `my_schema` accessible through db.
Py> db.testtbl
<Table:testtbl>
Py> import uuid
Py> for i in range(1000):
 ...     session.start_transaction()
 ...     stmt = db.testtbl.insert("val")
 ...     for j in range(1000):
 ...         stmt = stmt.values(uuid.uuid1().hex)
 ...     stmt.execute()
 ...     session.commit()
 ...
Query OK, 0 rows affected (0.0029 sec)
Py> db.testtbl.select("COUNT(*)")
+----------+
| COUNT(*) |
+----------+
|  1000000 |
+----------+
1 row in set (0.2476 sec)

Note how this takes advantage of the feature in MySQL Shell where the db object in JavaScript and Python mode has the tables as properties. Thus it is possible to refer to the testtbl table in the my_schema schema as db.testtbl after using \use my_schema to read the database objects. The data is inserted in 1000 batches of 1000 rows using a double loop.

Now it is time to play with schema changes.

Schema Changes Examples

For comparison, let’s first add a column using the old algorithms. First, add the column val2 using the copying algorithm – note this is not supported online, so a shared lock is taken:

SQL> ALTER TABLE testtbl
 ...       ADD COLUMN val2 varchar(36) DEFAULT NULL,
 ...       ALGORITHM=COPY, LOCK=SHARED;
Query OK, 1000000 rows affected (5.3952 sec)

Note: the test is made on a laptop – the timings themselves are not relevant, but they can be used to compare the time taken for the three algorithms.

So that took around 5.4 seconds. That is not too bad, but we can do much better than that. Let’s try the in-place algorithm. Not only will that require less work (though still a fair bit), it is also possible to perform the change online.

SQL> ALTER TABLE testtbl
 ...       ADD COLUMN val3 varchar(36) DEFAULT NULL,
 ...       ALGORITHM=INPLACE, LOCK=NONE;
Query OK, 0 rows affected (1.7511 sec)

So, this is a factor three faster despite the table having an extra column compared to the original table. And it is possible to execute DML statements. That is a big win. But let’s move on to the finale: instant ALTER TABLE:

SQL> ALTER TABLE testtbl
 ...       ADD COLUMN val4 varchar(36) DEFAULT NULL,
 ...       ALGORITHM=INSTANT;
Query OK, 0 rows affected (0.0490 sec)

Wow – that’s less than 5/100 of a second – that’s instant enough for me. Note hos the LOCK option is not specified here. The instant algorithm is always as lock free as possible, so the LOCK option is not allowed.

There is a little more to it than this. As mentioned by Bin Su, there are some trade-offs, such as the row size not being checked upfront. So, in some cases it can be useful to know whether a table contains instantly added columns. This can be seen through the innodb_tables view in the Information Schema on the data dictionary:

Py> schema = session.get_schema("information_schema")
Query OK, 1000 items affected (0.0091 sec)
Py> inno_tables = schema.get_table("INNODB_TABLES")
Query OK, 0 rows affected (0.0002 sec)
Py> inno_tables.select("NAME", "INSTANT_COLS").where("INSTANT_COLS > 0")
+-------------------+--------------+
| NAME              | INSTANT_COLS |
+-------------------+--------------+
| my_schema/testtbl |            4 |
+-------------------+--------------+
1 row in set (0.0405 sec)

Now what is that? 4 columns? But we only added one of the columns using the instant algorithm. What the INSTANT_COLS column shows is how many columns existed before the first instant column was added. In the example the columns id, val, val2, and val3 existed, then val4 was added using the instant algorithm. For tables that have never had any columns added instantly, the value of INSTANT_COLS is 0.
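
If you prefer plain SQL over the Shell's Python mode, the same check can be written as a direct query against the same Information Schema view:

SELECT NAME, INSTANT_COLS
  FROM information_schema.INNODB_TABLES
 WHERE INSTANT_COLS > 0;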

Want to Read More?

This blog has just been an introduction to the new MySQL 8.0.12 feature of instant schema changes. I will recommend you to read Bin Su’s blog as well as the documentation in the MySQL reference manual to fully understand the feature.

Conclusion

The new feature allowing instant schema changes is a great way to avoid time consuming operations. Particularly in replication setups, where the replicated statement will block while it applies, the difference between an online and an instant schema change is like night and day.

MySQL 8.0.12 can be downloaded from MySQL Community Downloads or you can install it through one of the “native installers” such as MySQL Installer for Microsoft Windows or one of our Linux repositories (Yum, APT, SUSE). If you are a customer and need the commercial version of MySQL Server, I will recommend you to download it from Patches & Updates in My Oracle Support (MOS); there is also a 30 days trial version available from Oracle Software Delivery Cloud.

Please do not hesitate to provide feedback. Bugs and feature requests can be logged in the MySQL Bugs database.

Fun with Bugs #69 - On Some Public Bugs Fixed in MySQL 5.7.23

Several MySQL releases happened yesterday, but of them all I am mostly interested in MySQL 5.7.23, as MySQL 5.7 (either directly or indirectly, via forks and upstream fixes they merge) is probably the most widely used MySQL GA release at the moment.

In this post (in a typical manner for this "Fun with Bugs" series)  I'd like to describe several bugs reported by MySQL Community users and fixed in MySQL 5.7.23. As usual, I'll try to concentrate mostly on InnoDB, replication, partitioning and optimizer-related bugs (if any).

Checking MySQL release notes is like swimming with dolphins - it's pure fun  
I'd like to start with InnoDB and partitioning-related fixes:
  • The most important fix (that will make downgrades from 5.7.23 more problematic probably) is for Bug #86926 - "The field table_name (varchar(64)) from mysql.innodb_table_stats can overflow.", reported by Jean-François Gagné a year ago. The length of the table_name column in mysql.innodb_table_stats and mysql.innodb_index_stats tables has been increased from 64 to 199 characters (not 100% sure where this new value comes from), and one has to run mysql_upgrade while upgrading to 5.7.23 (see the quick check after this list).
  • Bug #90296 - "Hard error should be report when fsync() return EIO". Now this recent request from Allen Lai is implemented.
  • Bug #88739 - "Mysql scalability improvement patch: Optimized away excessive condition". The patch that removes unnecessary check for read-only transactions was suggested by Sandeep Sethia.
  • Bug #86370 - "crash on startup, divide by zero, with inappropriate buffer pool configuration". This funny bug was reported by Shane Bester. I wonder how many more bombs of this kind (use of uint or ulint data types) still remain in the code...
  • Bug #87253 - "innodb partition table has unexpected row lock". This bug was reported by  Yx Jiang and a patch was later suggested by Zhenghu Wen. If the comment at the end is correct, the fix is NOT included in MySQL 8.0.12.
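
A quick way to confirm the wider column once mysql_upgrade has run is to read the column metadata (a simple check; no data is touched):

SELECT TABLE_NAME, COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
  FROM information_schema.COLUMNS
 WHERE TABLE_SCHEMA = 'mysql'
   AND TABLE_NAME IN ('innodb_table_stats', 'innodb_index_stats')
   AND COLUMN_NAME = 'table_name';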
Now let's check some replication bugs fixed:

  • Bug #89272 - "Binlog and Engine become inconsistent when binlog cache file gets out of space". This bug was reported by Yoshinori Matsunobu. Good question in the last comment there, by Artem Danilov:
    "I wonder why does only ENOSPC end up with flush errors and clears the binary log cached. What about any other disk error? Can this fix be extended for all disk errors?"
  • Bug #88891 - "Filtered replication leaves GTID holes with create database if not exists". This serious bug was reported by Simon Mudd.
  • Bug #89938 - "Rejoin old primary node may duplicate key when recovery", was reported by Zhenghu Wen, who cares about group replication a lot (unlike me, I have Galera for that).
  • Yet another bug report from Jean-François Gagné, this time in group replication - Bug #89146 - "Binlog name and Pos are wrong in group_replication_applier channel's error msgs".
  • Bug #81500 - "dead branch in search_key_in_table()". This bug was reported 2 years ago by Andrei Elkin, now my colleague (again) in MariaDB. After this fix, an index is properly used on the slave to find rows.
Surely MySQL 5.7.23 also had to include a fix for some community bug report that is now kept private. This time it's a bug in mysqldump:
"mysqldump exited abnormally for large --where option values. (Bug #26171967, Bug #86496, Bug #27510150)" and this is complete nonsense to me!

Last but not least, we should all be thankful to Daniël van Eeden for these bug reports and contributed patches:
  • Bug #89001 - "MySQL aborts without proper error message on startup if grant tables are corrupt".
  • Bug #89578 - "Contribution: Use native host check from OpenSSL".
Did you notice any fixes to the numerous public bug reports for InnoDB FULLTEXT indexes, "online" DDL, InnoDB compression, or the optimizer? I did not, and no wonder - the way these features are used by the Community is obviously not something Oracle cares much about.

Anyway, I'd suggest considering an upgrade to MySQL 5.7.23 to anyone who cares about security and uses OpenSSL, replication of any kind, or partitioned InnoDB tables in production.

MySQL Shell 8.0.12 for MySQL Server 8.0 and 5.7 has been released

Dear MySQL users,

MySQL Shell 8.0.12 is a maintenance release of the MySQL Shell 8.0
series (a component of the MySQL Server). MySQL Shell is provided
under Oracle’s dual license.

MySQL Shell 8.0 is highly recommended for use with MySQL Server 8.0 and 5.7.
Please upgrade to MySQL Shell 8.0.12.

MySQL Shell is an interactive JavaScript, Python and SQL console
interface, supporting development and administration for the MySQL
Server. It provides APIs implemented in JavaScript and Python that
enable you to work with MySQL InnoDB cluster and use MySQL as a document
store.

The AdminAPI enables you to work with MySQL InnoDB cluster, providing
an integrated solution for high availability and scalability using
InnoDB based MySQL databases, without requiring advanced MySQL
expertise. For more information about how to configure and work with
MySQL InnoDB cluster see
https://dev.mysql.com/doc/refman/en/mysql-innodb-cluster-userguide.html.

The X DevAPI enables you to create “schema-less” JSON document
collections and perform Create, Update, Read, Delete (CRUD) operations
on those collections from your favorite scripting language.
For more information about how to use MySQL Shell and the MySQL Document
Store support see https://dev.mysql.com/doc/refman/en/document-store.html.
For more information about the X DevAPI see
https://dev.mysql.com/doc/x-devapi-userguide/en/.
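
As a quick illustration of that document-store workflow, here is a
sketch using Connector/Python's mysqlx module (the schema, collection
and credentials are made up):

    import mysqlx

    session = mysqlx.get_session("mysqlx://app:secret@127.0.0.1:33060")
    schema = session.get_schema("test")
    coll = schema.create_collection("people")   # no table definition needed

    coll.add({"name": "Ada", "born": 1815}).execute()                     # Create
    doc = coll.find("name = :n").bind("n", "Ada").execute().fetch_one()   # Read
    coll.modify("name = :n").set("born", 1815).bind("n", "Ada").execute() # Update
    coll.remove("name = :n").bind("n", "Ada").execute()                   # Delete
    session.close()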

If you want to write applications that use the CRUD-based X DevAPI
you can also use the latest MySQL Connectors for your language of
choice. For more information about Connectors see
https://dev.mysql.com/doc/index-connectors.html.

For more information on the APIs provided with MySQL Shell
see https://dev.mysql.com/doc/dev/mysqlsh-api-javascript/8.0/
and https://dev.mysql.com/doc/dev/mysqlsh-api-python/8.0/.

Using MySQL Shell’s SQL mode you can communicate with servers using the
legacy MySQL protocol. Additionally, MySQL Shell provides partial
compatibility with the mysql client by supporting many of the same
command line options.

For full documentation on MySQL Server, MySQL Shell and related topics,
see https://dev.mysql.com/doc/mysql-shell/8.0/en/

For more information about how to download MySQL Shell 8.0.12, see
the “Generally Available (GA) Releases” tab at http://dev.mysql.com/downloads/shell/

We welcome and appreciate your feedback and bug reports, see
http://bugs.mysql.com/

Enjoy!

Changes in MySQL Shell 8.0.12 (2018-07-27, General Availability)

Functionality Added or Changed

* Important Change: An RPM package for installing ARM
64-bit (aarch64) binaries of MySQL Shell on Oracle Linux
7 is now available in the MySQL Yum Repository and for
direct download.
Known Limitation for this ARM release: You must enable
the Oracle Linux 7 Software Collections Repository
(ol7_software_collections) to install this package, and
must also adjust the libstdc++7 path. See Yum’s Platform
Specific Notes
(http://dev.mysql.com/doc/refman/8.0/en/linux-installation-yum-repo.html#yum-install-platform-specifics)
for additional details.

* MySQL Shell now enables you to store user credentials in
an operating system specific secret store. You can then
enter a MySQL user’s password during connection and store
it for future connections (a configuration sketch follows
this list). Currently the following secret stores are
supported:

+ MySQL login-path

+ MacOS keychain

+ Windows API
(Bug #23304789, Bug #81484)
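
A configuration sketch in the Shell's Python mode; the credentialStore
option names and values here are assumptions based on the feature as
described above, not verbatim from these notes:

    # Inside mysqlsh, in Python mode (\py); option names are assumptions.
    shell.options["credentialStore.helper"] = "login-path"     # pick a secret store
    shell.options["credentialStore.savePasswords"] = "prompt"  # ask before saving

After this, connecting prompts for the password once and offers to
store it for future sessions.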

* The way you access the online Shell help has been
standardized. Use the \help pattern command to search the
help. The scope of the command has been increased to
support retrieving help for the following categories:

+ Class and function help for the Admin API, X DevAPI
and Shell API. Previously, to retrieve help for API
objects, you had to create an instance of the object
and use the object.help() method.

+ SQL syntax help, provided that a global session
object exists.
Wildcards can now be used to search for help. A number of
additional bugs relating to incomplete help information
have also been fixed. (Bug #23255291, Bug #81277, Bug
#24963435, Bug #25732663, Bug #85481, Bug #25739522, Bug
#85511, Bug #25739664, Bug #85514, Bug #26393155, Bug
#86950, Bug #24943074, Bug #26429399, Bug #87037, Bug
#27870491, Bug #90455, Bug #27870503, Bug #90456, Bug
#27875150, Bug #90474, Bug #24948933, Bug #83527)

* The util.checkForServerUpgrade() operation has an
additional outputFormat parameter that you can specify
when running the utility. The utility can now generate
output in two formats (a sketch follows this list):

+ TEXT format, which is the default. This option
provides output suitable for humans, as previously
returned by the utility.

+ JSON format. This option provides output suitable
for machines, which can be parsed and processed for
various further use cases.
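
In the Shell's Python mode that might look like the following sketch;
the URI is made up and the exact parameter form is an assumption:

    # Inside mysqlsh, in Python mode; the parameter form is an assumption.
    util.check_for_server_upgrade("root@localhost:3306",
                                  {"outputFormat": "JSON"})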

* The cluster.removeInstance() command has been improved,
with the following changes (a combined sketch follows this list):

+ A new interactive option has been added to enable or
disable interactive mode for the command. The output
displayed in interactive mode has been improved,
displaying more useful information. In interactive
mode, you are prompted to continue with the removal
of the instance (or not) in case it is not
reachable.

+ The operation now ensures that the instance is
removed from the metadata of all the cluster members
and itself. This only applies to ONLINE members.

+ A new global option dba.gtidWaitTimeout is available
to define the timeout to wait for transactions
(GTIDs) to be applied when required by AdminAPI
commands. If the timeout value defined by
dba.gtidWaitTimeout is reached when waiting for the
cluster transactions to be applied for
cluster.removeInstance() and force: false (or not
defined) then an error is issued and the operation
aborted. When force: true then the operation
continues and does not generate an error.
References: See also: Bug #27817894.
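
A combined sketch in the Shell's Python mode; the instance address and
timeout value are made up, and the way the option is set here is an
assumption:

    # Inside mysqlsh, in Python mode, with an InnoDB cluster deployed.
    shell.options["dba.gtidWaitTimeout"] = 120   # seconds to wait for GTIDs
    cluster = dba.get_cluster()
    # With force: False (the default) the operation aborts with an error
    # if the GTID wait times out; force: True would continue anyway.
    cluster.remove_instance("icadmin@ic-3:3306", {"force": False})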

* When using ipWhitelist to define which servers could
access the cluster, the internal user accounts did not
match the whitelist. Now AdminAPI applies the same
ipWhitelist filtering logic to the internal
administrative accounts.
References: See also: Bug #26140094, Bug #28165891.

* In order to be compliant with the X DevAPI specification,
the following changes have been made (examples follow this list):

+ Collection.modify(condition).arrayDelete() and
Collection.modify(condition).merge() have been
deprecated.

+ Collection.find().limit(x).skip(y) has been renamed
to Collection.find().limit(x).offset(y).

+ Collection.find().limit(x).skip(y) has been
deprecated.

+ Collection.find().limit(x).offset(y) has been
implemented.

+ BaseResult.getAffectedItemsCount() has been
implemented.

+ BaseResult.getWarningCount() has been deprecated.

+ BaseResult.getWarningsCount() has been implemented.

+ Result.getAffectedItemCount() has been deprecated.

+ SqlResult.getAffectedRowCount() has been deprecated.

+ SqlResult.nextDataSet() has been renamed to
SqlResult.nextResult().

+ SqlResult.nextDataSet() has been deprecated.

+ SqlResult.nextResult() has been implemented.
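
In practice the new names look like this; a sketch in X DevAPI Python
syntax (as in the Shell's Python mode or Connector/Python), where the
collection object is assumed to exist and the snake_case names are the
Python-convention forms of the methods listed above:

    # Replaces the deprecated .limit(x, y) / .skip(y) forms.
    docs = coll.find().limit(10).offset(20).execute()

    res = coll.add({"k": 1}).execute()
    print(res.get_affected_items_count())   # new accessor
    print(res.get_warnings_count())         # the WarningCount form is deprecated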

Bugs Fixed

* The sample prompt theme files for MySQL Shell were
deployed to an incorrect location on the Windows
platform, in the root install folder. The files are now
correctly deployed in the \share\mysqlsh\prompt
sub-folder. (Bug #28188761)

* The cluster.forceQuorumUsingPartitionOf() operation sets
the group_replication_force_members variable on the
target instance to force a new group membership and
restore the quorum, but it did not reset the value of the
variable at the end of the process. Consequently, if
Group Replication later needed to be restarted on the
target instance it failed because the
group_replication_force_members variable was still set.
Now, the group_replication_force_members variable is
reset to an empty string at the end of the
cluster.forceQuorumUsingPartitionOf() operation. (Bug
#28064621)

* When upgrading from version 1.0.11 to version 8.0.11 of
MySQL Shell on Linux, the upgrade failed if the original
package was the community edition and the new package was
the commercial edition, or vice versa. Upgrading from one
edition to the other edition is now enabled. (Bug
#28037407)

* The util.checkForServerUpgrade() operation can now use
either an X Protocol connection or a classic MySQL
protocol connection. (Bug #28027707)

* The checkForServerUpgrade() operation to verify upgrade
prerequisites included an unnecessary check relating to
ZEROFILL and display length attributes in columns. The
check has now been removed. (Bug #27927641, Bug #90634)

* Some messages displayed by MySQL Shell were showing a
MySQL server version that does not exist. (Bug #27924694)

* For sessions using the classic MySQL protocol, if the
session_track_gtids system variable is set on the server
to capture and return GTIDs to the client, MySQL Shell
now displays the GTIDs for successfully committed
transactions. The returned GTID values are also now
recorded in tracing information. (Bug #27871148)

* When the defaultMode MySQL Shell configuration option had
been set with the --persist option, batch code execution
from a file was always attempted using the specified
default language, even if the file extension indicated a
different supported language. Now when a file is loaded
for batch processing using the --file or -f option, files
with the extensions .js, .py, and .sql are processed in
the appropriate language mode, regardless of the set
default language. (Bug #27861407)

* The methods provided in the shell.options configuration
interface to set and save persistent option values used
underscores in JavaScript as well as in Python mode. The
methods have now been changed to
shell.options.setPersist() and
shell.options.unsetPersist() in JavaScript to follow the
appropriate naming convention. (Bug #27861141)

* When executing a SQL script using MySQL Shell, delimiters
(such as the default semicolon character) present in
multi-line comments caused execution to fail. Delimiters
are now ignored inside multi-line comments. (Bug
#27841719)

* MySQL Shell returned an error when querying timestamp
values that were zero, because a zero value for the month
or day in a date was not accepted. Zero timestamp values
can now be used without producing an error. (Bug
#27833822, Bug #90355)

* The shell.getSession() function returns a reference to
the session global object representing the already
established connection between MySQL Shell and a MySQL
server, known as a global session. MySQL Shell now
gracefully handles the situation where the function is
called when no global session has yet been established.
(Bug #27809310)

* It was possible to use AdminAPI operations on server
instances running an incompatible version of MySQL. (Bug
#27765769)

* The setting of the bind_address variable is no longer a
requirement. (Bug #27765484)

* When creating a cluster or adding an instance, if the
localAddress option is not specified, the port used for
group_replication_local_address is automatically assigned
the value port * 10 + 1. However, if the port determined
by this rule was already in use, a random port was
generated and used. Now MySQL Shell checks that the
group_replication_local_address port is available, and
fails if it is not (a small sketch of the rule follows).
(Bug #27758041)
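
The assignment rule itself is simple to compute; a tiny sketch:

    def gr_local_address_port(mysql_port: int) -> int:
        """Default group_replication_local_address port chosen by AdminAPI."""
        return mysql_port * 10 + 1

    assert gr_local_address_port(3306) == 33061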

* The MySQL Shell application icon on Microsoft Windows was
not being displayed for the MySQL Shell 8.0 GA release,
due to an incorrect association introduced for the icon
during code refactoring. The icon is now displayed
correctly. (Bug #27746532)

* The dbPassword option is no longer valid in the options
dictionary of all AdminAPI commands. (Bug #27745106)

* The \status (\s) command in MySQL Shell now displays full
information about the version and build of the connected
MySQL server. (Bug #27740420)

* The check for reserved keywords carried out by the
util.checkForServerUpgrade() operation was updated to
match the list of reserved keywords for the MySQL 8.0 GA
release. (Bug #27724201)

* When handling escape sequences, MySQL Shell now
identifies and skips over SQL comments and string
literals within quotation marks. (Bug #27665229)

* Python’s mapping type has been added to MySQL Shell, so
that dictionary syntax can be used to interact with data
in Python mode. (Bug #27614110)

* When a file was redirected to standard input for
execution in MySQL Shell, on Unix, the first part of the
file was taken as being the password. The password prompt
now looks for user input first before resorting to
standard input. (Bug #27572380)

* It was possible to use the
dba.forceQuorumUsingPartition() operation on a cluster
which had not lost quorum. (Bug #27508698)

* The help message for the
dba.rebootClusterFromCompleteOutage() operation
incorrectly suggested using
dba.forceQuorumUsingPartition(). (Bug #27508627)

* If Ctrl + C was entered or an unexpected error occurred
at a password prompt in MySQL Shell, the terminal state
was not restored correctly afterwards. (Bug #27379834)

* The dba.rebootClusterFromCompleteOutage() operation was
creating a new user on the target instances, which could
lead to the existence of an increasing number of users.
The fix ensures that these users are not created by the
dba.rebootClusterFromCompleteOutage() operation. (Bug
#27344040)

* Now when you issue dba.getCluster() and retrieve a
cluster without quorum a warning is issued in addition to
the log message. (Bug #27148943)

* The memberSslMode option could be used with the
cluster.addInstance() and cluster.rejoinInstance()
operations, but if you specified a value different from
the one used at cluster creation, an error was thrown.
Now you set the SSL mode at the cluster level only, in
other words when issuing dba.createCluster(); the
memberSslMode option has been removed from
cluster.addInstance() and cluster.rejoinInstance() (a
sketch follows). (Bug #27062122)
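
A sketch in the Shell's Python mode; the cluster name and instance
address are made up, and DISABLED, REQUIRED and AUTO are the documented
memberSslMode values:

    # Inside mysqlsh, in Python mode. SSL mode is fixed at creation time.
    cluster = dba.create_cluster("myCluster", {"memberSslMode": "REQUIRED"})
    cluster.add_instance("icadmin@ic-2:3306")  # inherits the cluster's SSL mode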

* When you issued dba.configureLocalInstance() on an
instance, it configured the disabled_storage_engines
variable with the MyISAM, BLACKHOLE, FEDERATED, CSV, and
ARCHIVE storage engines to ensure that the storage engine
was set to InnoDB, as required by Group Replication. The
change to this option was not being reported correctly by
AdminAPI, and hence the required restart after changing
the disabled_storage_engines variable was not clear. This
change was deemed a recommendation, rather than a
requirement, hence dba.configureLocalInstance() no longer
configures disabled_storage_engines. (Bug #26754410)

* Creating a cluster using an account which was missing the
global grant option failed with an ambiguous error
message, even though dba.checkInstanceConfiguration() did
not return any errors. Now when you create a cluster, the
account being used to administer the cluster is checked
to ensure that it has the global grant option. (Bug
#25966235)

* MySQL Shell is able to automatically reconnect the global
session when running in interactive mode, but AdminAPI
methods lacked this feature. This meant that you had to
reconnect manually. Now, the AdminAPI
methods which utilize the global session object have been
improved in order to detect an interrupted session and
trigger the reconnection mechanism. The Cluster object
uses its own internal session instance, which does not
support automatic reconnection. If connection to the
cluster is lost, you need to manually recreate the
Cluster object. (Bug #24702489)

* In the event of a whole cluster stopping unexpectedly,
upon reboot the memberSslMode was not preserved. In a
cluster where SSL had been disabled, upon issuing
dba.rebootClusterFromCompleteOutage() this could prevent
instances from rejoining the cluster. (Bug #90793, Bug
#27986413)

On Behalf of the MySQL/Oracle Release Engineering Team,
Hery Ramilison

MySQL Connector/Node.js 8.0.12 has been released

Dear MySQL users,

MySQL Connector/Node.js is a new Node.js driver for use with the X
DevAPI. This release, v8.0.12, is a maintenance release of the
MySQL Connector/Node.js 8.0 series.

The X DevAPI enables application developers to write code that combines
the strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing
traditional SQL.

MySQL Connector/Node.js can be downloaded through npm (see
https://www.npmjs.com/package/@mysql/xdevapi for details) or from
https://dev.mysql.com/downloads/connector/nodejs/.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Node.js, and
its usage, see http://dev.mysql.com/doc/dev/connector-nodejs/.

Please note that the X DevAPI requires at least MySQL Server version
8.0 or higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store,
See http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

Changes in MySQL Connector/Node.js 8.0.12 (2018-07-27, General Availability)

X DevAPI Notes

* For compliance with the Core DevAPI, these
Connector/Node.js changes were made:

+ Collection: Deprecated: count(). Changed:
getSchema() now returns a Schema instance instead of
the schema name.

+ CollectionModify: Deprecated: limit(x, y)’s second
parameter, and arrayDelete().

+ CollectionFind: Deprecated: limit(x, y)’s second
parameter. Added: limit(x).offset(y).

+ CollectionRemove: Deprecated: limit(x, y)’s second
parameter.

+ Table: Deprecated: count() and insert(Document) API.
Updated: getSchema() now returns a Schema instance
instead of the Schema name. Removed: as().

+ TableSelect: Deprecated: limit(x, y)’s second
parameter. Added: limit(x).offset(y).

+ TableDelete: Deprecated: limit(x, y)’s second
parameter, and delete(x)’s parameter in favor of
using where(x) instead.

+ TableUpdate: Deprecated: limit(x, y)’s second
parameter, and update(x)’s parameter in favor of
using where(x) instead.

+ SqlExecute: Deprecated: sqlExecute() in favor of
sql(). Added: bind().

+ Column: Added isNumberSigned(), getCollationName(),
getCharacterSetName(), and isPadded().

Bugs Fixed

* The Promise returned by the session.sql().execute()
method resolved to a plain JavaScript object rather than
a proper Result instance. This meant it lacked access to
the API with methods such as getAffectedItemsCount() and
getWarnings(). (Bug #28146988)

* Retrieving rows with NULL fields would emit an unexpected
AssertionError. (Bug #27978594)

* The session.close() method is now asynchronous,
returning a JavaScript Promise, whereas before it
returned immediately. (Bug #27893001)

* The right-padding mechanism was improved. (Bug #27839295)

* While calling getSession() without arguments yields an
"Invalid parameter." error, passing in {} yielded a
"Cannot read property 'length' of undefined." error. Now
{} is allowed, and getSession() defaults to using an
empty string ("") as the user name. (Bug #27730748)

* Improved performance for expression parsing and protocol
message encoding.

On Behalf of the MySQL/Oracle Release Engineering Team,
Hery Ramilison

MySQL Connector/NET 8.0.12 has been released

Dear MySQL users,

MySQL Connector/NET 8.0.12 is the second general availability release of
MySQL Connector/NET to add support for the new X DevAPI. The X DevAPI
enables application developers to write code that combines the strengths
of the relational and document models using a modern, NoSQL-like syntax
that does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/index.html. For more information about how the X DevAPI is implemented in Connector/NET, see
http://dev.mysql.com/doc/dev/connector-net.

NuGet packages provide functionality at a project level. To get the
full set of features available in Connector/NET such as availability
in the GAC, integration with Visual Studio’s Entity Framework Designer
and integration with MySQL for Visual Studio, installation through
the MySQL Installer or the stand-alone MSI is required.

Please note that the X DevAPI requires at least MySQL Server version
5.7.12 or higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/5.7/en/document-store.html.

To download MySQL Connector/NET 8.0.12, see
http://dev.mysql.com/downloads/connector/net/

Installation instructions can be found at
https://dev.mysql.com/doc/connector-net/en/connector-net-installation.html

Changes in MySQL Connector/NET 8.0.12 (2018-07-27, General Availability)

Known Limitation of This Release

To create a model in Entity Framework (EF) Core v2.0 with scaffolding
in this release, use the following alternative procedure:

1. Downgrade to EF Core 1.0 or 1.1 in your project, install the
MySql.Data.EntityFrameworkCore.Design NuGet package, and then create
your model using the scaffolding or migration command.

2. With the model created, revert back to EF Core 2.0 in your project,
update the MySql.Data.EntityFrameworkCore NuGet package, and then
remove the package MySql.Data.EntityFrameworkCore.Design from your
project.

We are sorry for this temporary inconvenience.

Functionality Added or Changed

* Optimistic locking using a database-generated field was improved
with the inclusion of the [ConcurrencyCheck,
DatabaseGenerated(DatabaseGeneratedOption.Computed)] attribute.
Thanks to Tony Ohagan for the patch. (Bug #28095165, Bug #91064)

* Members of several classes in the X DevAPI were added, modified, or
removed.

MySqlX.XDevAPI.Collection and MySqlX.XDevAPI.Collection<T> classes:

+ Remove(Object) method was marked Obsolete.
+ Remove(DbDoc) method was marked Obsolete.
+ CreateIndex() method was modified to be a direct-execute method
(no longer requires .Execute() to execute).

MySqlX.XDevAPI.Common.Result class:

+ RecordsAffected property (now obsolete) was replaced with
AffectedItemsCount.
+ WarningCount property (now obsolete) was replaced with
WarningsCount.

MySqlX.XDevAPI.CRUD.FindStatement class:

+ GroupBy() method was added.
+ Having() method was added.
+ Limit(Int64, Int64) method was marked Obsolete.
+ Offset() method was added.
+ Sort() method was added.
+ OrderBy() method was removed.

MySqlX.XDevAPI.CRUD.ModifyStatement class:

+ ArrayInsert() method was added.
+ ArrayAppend() method was added.
+ Unset() method was modified to accept an array of document paths.

MySqlX.XDevAPI.CRUD.RemoveStatement class:

+ Sort() method was added.
+ OrderBy() method was removed.

MySqlX.XDevAPI.Relational.RowResult class:

+ ColumnCount property was added.
+ ColumnNames property was added.

MySqlX.XDevAPI.Relational.SqlResult class:

+ AutoIncrementValue property was marked Obsolete.

MySqlX.XDevAPI.Relational.TableSelectStatement class:

+ Limit(Int64, Int64) method was marked Obsolete.
+ Offset() method was added.

MySqlX.XDevAPI.Session class:

+ Commit() method was changed to be a direct-execute method and now
returns void.
+ Rollback() method was changed to be a direct-execute method and now
returns void.
+ Uri property was added.
+ DefaultSchema property was added.

(Bug #27732098, Bug #27732175, Bug #27732235)

* The MySqlX.XDevAPI.MySqlXConnectionStringBuilder class was added to
provide connection-string options that apply exclusively to X
Protocol connections. In addition, the Auth, SslCa, and SslCrl
properties in the MySql.Data.MySqlClient.MySqlConnectionStringBuilder
class were marked Obsolete.

Bugs Fixed

* X DevAPI: Decimal numbers passed to the DbDoc class were not parsed
properly when the values included a decimal separator other than a
period (.) character. (Bug #28112229)

* X DevAPI: The list of members shown with IntelliSense did not match
the members provided in the reference documentation. (Bug #27918879,
Bug #90615)

* The Entity Framework Core implementation did not render accented
characters correctly on databases with a different UTF-8 encoding.
Thanks to Kleber kleberksms for the patch. (Bug #27818822, Bug #90316)

* The TreatTinyAsBoolean connection option was ignored when the
MySqlCommand.Prepare() method was called. (Bug #27113566, Bug #88472)

* All columns of type TINYINT(1) stopped returning the expected
Boolean value after the connector encountered a NULL value in any
column of this type. Thanks to David Warner for the patch. (Bug
#22101727, Bug #78917)

On Behalf of MySQL Release Engineering team,
Surabhi Bhat

MySQL Connector/J 8.0.12 has been released

Dear MySQL users,

MySQL Connector/J Version 8.0.12 is the second GA release of the 8.0 branch of MySQL Connector/J. It is suitable for use with MySQL Server versions 5.5, 5.6, 5.7, and 8.0. It supports the Java Database Connectivity
(JDBC) 4.2 API, and implements the X DevAPI.

This release includes the following new features and changes, also
described in more detail at
https://dev.mysql.com/doc/relnotes/connector-j/8.0/en/news-8-0-12.html.
As always, we recommend that you check the “CHANGES” file in the
download archive to be aware of changes in behavior that might affect
your application.

To download MySQL Connector/J 8.0.12 GA, see the “Generally Available (GA)
Releases” tab at http://dev.mysql.com/downloads/connector/j/

Enjoy!

Changes in MySQL Connector/J 8.0.12 (2018-07-27, General Availability)

Version 8.0.12 is the latest General Availability release of the 8.0
series of MySQL Connector/J. It is suitable for use with MySQL Server
versions 8.0, 5.7, 5.6, and 5.5.

Functionality Added or Changed

* X DevAPI: The following changes have been made to the API:

+ Removed ModifyStatement.arrayDelete() and ModifyStatement.merge().

+ Renamed Collection.find().limit().skip() to
Collection.find().limit().offset().

+ To simplify the class hierarchy and to have the class names better
reflect the classes' functions, the following changes have been
made:

o The FindParams class has been renamed to FilterParams
o The AbstractFindParams class has been renamed to
AbstractFilterParams
o The DocFindParams class has been renamed to DocFilterParams
o The TableFindParams class has been renamed to TableFilterParams

Notice that the methods in the original FindParams class have been
moved under the new AbstractFilterParams class. (Bug #28027459)

* X DevAPI: Connector/J now uses synchronous client sockets
(java.net.Socket) by default to communicate with MySQL servers for X
Protocol connections. While asynchronous sockets can still be used
by setting the connection property xdevapi.useAsyncProtocol=true,
this is not recommended, as it might result in performance
degradation for Connector/J. (Bug #27522054)

* X DevAPI: Connector/J now gives provision for the use of a custom
socket factory for X Protocol connections to MySQL Servers using
Unix domain sockets. See Section 6.8, "Connecting Using Unix Domain
Sockets" for details.

* Connector/J now retrieves the MySQL keyword list from the
INFORMATION_SCHEMA.KEYWORDS
(http://dev.mysql.com/doc/refman/8.0/en/keywords-table.html) table on
the MySQL server when a connection session is established. The list
can then be accessed by calling DatabaseMetaData.getSQLKeywords().

* To simplify the code, the ReadableProperty and ModifiableProperty
classes have been consolidated into the RuntimeProperty class.

Bugs Fixed

* X DevAPI: When creating an X DevAPI session using a Properties map
instead of a connection string, referring to property keys like
host, port, and protocol in lowercase caused a NullPointerException.
With the fix, both upper and lower cases can now be used. (Bug
#27652379)

* X DevAPI: When using the getConnection() method with the mysqlx:
scheme in the connection URL, Connector/J returned an ordinary JDBC
connection instead of an X Protocol connection. (Bug #26089880)

* If wait_timeout was set on the server and Connector/J had the
connection property interactiveClient=false, or if
interactive_timeout was set on the server and Connector/J had the
connection property interactiveClient=true, a connection was
invalidated when it had idled for longer than the set timeout. When
such a timeout occurred, Connector/J threw a
CJCommunicationsException without indicating that it was a timeout.
With this fix, the error message returned explains the issue and
suggests how to avoid it. (Bug #27977617, Bug #90753)

* When an application tried to connect to a non-MySQL database through
some JDBC driver and Connector/J happened to be on the class path as
well, Connector/J threw a SQLNonTransientConnectionException, which
prevented the application from connecting to its database. With this
fix, Connector/J returns null whenever a connection string does not
start with jdbc:mysql: or mysqlx:, so connections to non-MySQL
databases are not blocked. (Bug #26724154, Bug #87600)

* A wasNull() call on a ResultSet did not return the proper value
unless AbstractResultsetRow.getNull() or
AbstractResultsetRow.getValueFromByte() was called before. This
caused data loss when Connector/J was used with frameworks like
Hibernate, which rely on wasNull() calls to properly retrieve data.
With this fix, wasNull() returns a correct value as long as some
getter method has been called before on the ResultSet. (Bug
#25924324, Bug #85941)

On Behalf of Oracle/MySQL Release Engineering Team
Prashant Tekriwal

MySQL Connector/Python 8.0.12 has been released

Dear MySQL users,

MySQL Connector/Python 8.0.12 is the second GA release of the
MySQL Connector/Python 8.0 series. This series adds support for the new
X DevAPI. The X DevAPI enables application developers to write code
that combines the strengths of the relational and document models using
a modern, NoSQL-like syntax that does not assume previous experience
writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Python, and its
usage, see http://dev.mysql.com/doc/dev/connector-python.

Please note that the X DevAPI requires MySQL Server version 5.7.12 or
higher with the X Plugin enabled. For general documentation about how to
get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/5.7/en/document-store.html.

To download MySQL Connector/Python 8.0.12, see the “Generally
Available (GA) Releases” tab at
http://dev.mysql.com/downloads/connector/python/

Enjoy!

Changes in MySQL Connector/Python 8.0.12 (2018-07-27, General Availability)

Installation Notes

* Because the Microsoft Visual C++ 2017 Redistributable installer
deletes the Microsoft Visual C++ 2015 Redistributable registry keys
that identify its installation, standalone MySQL MSIs may fail to
detect the Microsoft Visual C++ 2015 Redistributable if both it and
the Microsoft Visual C++ 2017 Redistributable are installed. The
solution is to repair the Microsoft Visual C++ 2017 Redistributable
via the Windows Control Panel to recreate the registry keys needed
for the runtime detection. Unlike the standalone MSIs, MySQL
Installer for Windows contains a workaround for the detection
problem. (Bug #28345281, Bug #91542)

X DevAPI Notes

* For compliance with the Core DevAPI, these Connector/Python changes
were made:

+ DatabaseObject: Deprecated: am_i_real() and who_am_i(). Added:
get_session() and the session property.

+ Collection.modify(): Deprecated: limit(x, y)'s second parameter,
where(condition), and change(CollectionField, ExprOrLiteral).
Changed: the modify(condition) condition is now mandatory.

+ Collection.find(): Deprecated: limit(x, y)'s second parameter.
Added: limit(x).offset(y).

+ Collection.remove(): Deprecated: limit(x, y)'s second parameter
and where(condition). Changed: the remove(condition) condition is
now mandatory.

+ Table.select(): Deprecated: limit(x, y)'s second parameter and
sort(). Added: limit(x).offset(y).

+ Table.delete(): Deprecated: limit(x, y)'s second parameter and
sort(). Removed: delete(x)'s parameter in favor of using where()
instead. Added: order_by().

+ Table.update(): Deprecated: limit(x, y)'s second parameter, and
the sort() method. Added: order_by().

+ Session: Added: get_schemas().

+ Result: Deprecated: get_document_id() and
get_generated_insert_id(). Moved: get_affected_items_count() to
the BaseResult class.

+ RowResult: Added: get_columns().

+ SqlResult: Added: has_data().

+ Column: Renamed: ColumnMetaData to Column. Added properties:
schema_name, table_name, table_label, column_name, column_label,
type, length, fractional_digits, collation_name,
character_set_name.

Functionality Added or Changed

* Removed MySQL Fabric support.

* An RPM package for installing ARM 64-bit (aarch64) binaries of
Connector/Python on Oracle Linux 7 is now available in the MySQL Yum
Repository and for direct download.
Known Limitation for this ARM release: You must enable the Oracle
Linux 7 Software Collections Repository (ol7_software_collections)
to install this package, and must also adjust the libstdc++7 path.
See Yum's Platform Specific Notes
(http://dev.mysql.com/doc/refman/8.0/en/linux-installation-yum-repo.html#yum-install-platform-specifics)
for additional details.

Bugs Fixed

* The default character set changed from 'utf8' (an alias to the
deprecated 'utf8mb3' character set) to 'utf8mb4'. (Bug #28188883)

* Fixed datetime conversion compatibility between Django 2.0 and
MySQL 8.0. A workaround was to use Connector/Python's pure Python
implementation instead of the C extension by setting "use_pure=True"
in Django's database options. (Bug #27962293, Bug #90541)

* The get_row() and get_rows() behavior differed between the C
(connections with CMySQLConnection) and pure Python (connections
with MySQLConnection) implementations of the connector. The resolved
differences are:

+ With the pure Python implementation, all data was returned as
bytearrays, while the C implementation returned all data as Python
types with CMySQLConnection (cext). Both now return Python types.

+ With the pure Python implementation, they returned a tuple with
(row(s), eof), but with the C extension they only returned the
row(s). Now both implementations return the tuple form: (row(s),
eof).

+ For queries returning a result, with pure Python the warning count
was part of the returned eof. With the C extension, the warning
count was only available using the warning_count property of the
connection object. Related, the warning_count property was not
available in the pure Python implementation. Now, the result
includes the warning count for both implementations.

+ Fetching rows using pure Python would automatically set the
unread_rows property to False. With the C extension, explicitly
calling the free_result() method of the connection object was
required. Related, free_result() was only available with the C
extension. Now, both implementations set unread_rows to False.

(Bug #27411275, Bug #27991948, Bug #27802700, Bug #28133321, Bug
#27650437, Bug #89305, Bug #90799, Bug #90292, Bug #91107)

* Connecting with a collation unknown to Connector/Python would yield
an unknown character set error. It now properly references the
unknown collation. (Bug #27277937)

* Deprecated the Row.get_string() method in favor of __getitem__.
(Bug #26834200, Bug #87777)
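
For reference, the use_pure workaround mentioned above is set through
Django's database options; a hypothetical settings.py fragment (the
database name and credentials are made up, and the 8.0.12 fix makes
the option unnecessary for this particular bug):

    # Hypothetical Django settings fragment illustrating use_pure.
    DATABASES = {
        "default": {
            "ENGINE": "mysql.connector.django",  # Connector/Python's Django backend
            "NAME": "mydb",
            "USER": "app",
            "PASSWORD": "secret",
            "OPTIONS": {"use_pure": True},  # pure Python instead of the C extension
        }
    }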

Enjoy and thanks for the support!

On behalf of the MySQL Release Team,
Nawaz Nazeer Ahamed
