Planet MySQL

Improving performance – A full stack problem

Improving the performance of a web system involves knowledge of how the entire technology stack operates and interacts. There are many simple and common tips that can provide immediate improvements for a website. Some examples include:

  • Using a CDN for assets
  • Compressing content
  • Making fewer requests (web, cache, database)
  • Asynchronous management
  • Optimizing your SQL statements
  • Adding more memory
  • Using SSDs for database servers
  • Updating your software versions
  • Adding more servers
  • Configuring your software correctly
  • … And the general checklist goes on

Understanding where to invest your energy first, knowing what the return on investment can be, and most importantly measuring and verifying every change made is the difference between blind trial and error and a solid plan and process. Here is a great example of the varied range of outcomes for the point about “Updating your software versions”.

On one project the MySQL database was reaching saturation, both the maximum number of database connections and the maximum number of concurrent InnoDB transactions. The first is a configurable limit; the second was a hard limit of the very old version of the software. Changing the first configurable limit can have dire consequences; there is a tipping point, however that is a different discussion. A simple software upgrade of MySQL, which had many possible improvement benefits, combined with corrected configuration specific to the new version, made an immediate improvement. The result moved a production system from crashing consistently under load to at least barely surviving under load. This is an important first step in improving the customer experience.
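As an illustration of watching that first limit, a couple of standard status queries give the configured ceiling and the observed high-water mark (a sketch; what counts as safe headroom is workload-specific):

SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Max_used_connections', 'Threads_connected', 'Threads_running');

If Max_used_connections sits close to max_connections under normal load, raising the limit alone only moves the tipping point; the concurrency pressure itself still has to be addressed.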

In the PHP application stack for the same project, upgrading several commonly used frameworks, including Slim and Twig, seemed like a good idea to the engineering department. However, applicable load testing and profiling (performed after it was deployed, yet another discussion point) found the impact was a 30-40% increase in response time in the application layer. This made the system worse and cancelled out prior work to improve the system.

Tuning a system to support a 100x load increase with no impact on performance takes knowledge, experience, planning, testing and verification.

The following summarized graphs, using New Relic monitoring as a means of representative comparison, show three snapshots of the average response time during various stages of full stack tuning and optimization. This is a very simplified graphical view, supported by more detailed instrumentation using different products, specifically with much finer granularity across hundreds of metrics.

These graphs represent the work that took a system under peak load from an average 2,000ms response time to an average of under 50ms for the same workload. That is a 40x improvement!

If your organization can benefit from these types of improvements feel free to Contact Me.

There are numerous steps to achieving this. A few highlights to show the scope of work you need to consider include:

  • Knowing server CPU saturation versus single core CPU saturation.
  • Network latency detection and mitigation.
  • What are the virtualization mode options of virtual cloud instances?
  • Knowing the network stack benefits of different host operating systems.
  • Simulating production load is much harder than it sounds.
  • Profiling, Profiling, Profiling.
  • Instrumentation can be misleading. Knowing how different monitoring works with sampling and averaging.
  • Tuning the stack is an iterative process.
  • The single greatest knowledge is knowing your code, your libraries, your dependencies and how to optimize each specific area of your technology stack.
  • Not everything works; some expected wins provided no overall or observed benefit.
  • There is always more that can be done. Knowing when to pause and prioritize process optimizations over system optimizations.

These graphs show the improvement work in the application tier (1500ms to 35ms to 25ms) and the database tier (500ms to 125ms to 10ms) at various stages. These graphs do not show, for example, improvements made in DNS resolution, different CDNs, managing static content, different types and ways of compression, removing unwanted software components and configuration, standardized and consistent stack deployments using Chef, and even a reduction in the overall number of servers. All of these successes contributed to a better and more consistent user experience.



What stopped MySQL? Tracing back signals sent to MySQL

Have you ever had a case where you needed to find a process which sent a HUP/KILL/TERM or other signal to your database? Let me rephrase. Did you ever have to find which process messed up your night? If so, you might want to read on. I’m going to tell you how you can find it.

Granted, on small and/or meticulously managed systems tracking down the culprit is probably not a big deal. You can likely identify your process simply by checking what processes have enough privileges to send mysqld a HUP/KILL/TERM signal. However, frequently we see cases where this may not work or the elimination process would be too tedious to execute.

We recently had a case where a process was frequently sending SIGHUPs to mysqld and the customer asked us to see if we could get rid of the annoyance. This blog is the direct result of a discussion I had with my colleague Francisco Bordenave on the options available to deal with this issue. I’m only going to cover a few of them in this blog but I imagine that most of you will be able to find one that will work for your case. Note that most tracing tools add some overhead to the system being investigated. The tools presented below are designed to be lightweight, so the impact should be well within acceptable range for most environments.

DISCLAIMER: While writing this blog I discovered that David Busby has also discussed one of the tools that I’m going to cover in his article. For those who have read the article note that I’m going to cover other tools as well and I will also cover a few extra SystemTap details in this blog. For those who haven’t yet had chance to read David’s blog, you can read it here.

All right, let’s see what “low hanging tools” are available to us to deal with our issue!

Linux
  • SystemTap: widely available on Linux but usually not enabled by default. You need to install debuginfo and devel kernel packages and systemtap itself. Similar to DTrace.
  • Perf: although not quite written for generic tracing, due to its ability to trace system calls we can use it to our advantage if we trace the syscalls:sys_enter_kill tracepoint.
  • Audit: generic system auditing platform. Given its nature, we can use it to track down many things, including rogue processes sending HUP signals to our poor mysqld!
  • Code!: Given that MySQL is open source, you could customize the signal handler to obtain extra information. See more in sigaction(2) and the SA_SIGINFO flag. I’m not sure if this should be listed as a more efficient solution but it’s an option nevertheless. I guess one could also preload/inject his own signal handler via an LD_PRELOAD trick and a custom library but that’s beyond the scope of what I intend to cover. However, for certain signals (most notably, SIGSEGV) you may not need to write your own tools as the OS may already come with libs/tools that can assist you. See Ulrich Drepper’s catchsegv or /usr/lib64/libSegFault.so, for instance.
  • Debuggers: These may be efficient to use in some cases but I won’t cover them this time, either.
FreeBSD/Solaris
  • DTrace: a very decent, stable tracing platform. Included in most recent kernels by default for the mentioned platforms (FreeBSD 9.2+, FreeBSD 10+, Solaris 10+).

In this article I’m going to focus on Linux as that’s what people in the MySQL community seem to care about most nowadays. The tools I will discuss are SystemTap, Perf and Audit. If you feel that you would like to read about the rest, let me know and I will cover the remaining options in a follow-up article.

SystemTap

I’m going to set up SystemTap on a recent, 64 bit CentOS 7 box. I will only cover a basic install; you can find more about how to install SystemTap here.

The strength of SystemTap is definitely its flexibility; it is potentially the best tool for solving our problem on the Linux platform. It’s been around for some time and is generally regarded as mature, but I would recommend testing your “tapscripts” in dev/qa before you run them in production.

Installing SystemTap

Follow the steps below to install SystemTap:

[root@centos7]~# sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Debuginfo.repo
[root@centos7]~# yum repolist
...
base-debuginfo/x86_64    CentOS-7 - Debuginfo    1,688
...

[root@centos7]~# yum install kernel-debuginfo kernel-debuginfo-common kernel-devel
[root@centos7]~# yum install systemtap systemtap-runtime
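Before writing any tapscripts it is worth verifying the installation end to end, since a kernel-debuginfo package that does not match the running kernel is the usual stumbling block. The canonical smoke test from the SystemTap documentation should print “read performed” and exit:

[root@centos7]~# stap -v -e 'probe vfs.read {printf("read performed\n"); exit()}'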

Tracing with SystemTap

Create a tapscript like the one below:

[root@centos7]~# cat find_sighupper.stp
#!/usr/bin/stap
# Prints information on process which sent HUP signal to mysqld
probe begin
{
  printf("%-26s %-8s %-5s %-8s %-5s\n", "TIME", "SOURCE", "SPID", "TARGET", "TPID");
}

probe nd_syscall.kill.return
{
  sname = @entry(execname());
  spid = @entry(pid());
  sig = @entry(uint_arg(2));
  tpid = @entry(uint_arg(1));
  tname = pid2execname(tpid);
  time = ctime(gettimeofday_s());
  if (sig == 1 && tname == "mysqld")
    printf("%-26s %-8s %-5d %-8s %-5d\n", time, sname, spid, tname, tpid);
}

Then run the tap script in a dedicated terminal:

[root@centos7]~# stap find_sighupper.stp
TIME                       SOURCE   SPID  TARGET   TPID

Send your HUP signal to mysqld from another terminal:

[root@centos7]~# kill -1 1984

The culprit should show up in your first window like so:

[root@centos7]~# stap find_sighupper.stp
TIME                       SOURCE   SPID  TARGET   TPID
Thu Feb 26 21:20:44 2015   kill     6326  mysqld   1984
^C

Note that with this solution I was able to define fairly nice constraints relatively easily. With a single probe (well, quasi, as @entry refers back to the callee) I was able to get all this information and filter for HUP signals sent to mysqld. No other filtering is necessary!

Perf

Perf is another neat tool to have. As its name implies, it was originally developed for lightweight profiling, to use the performance counters subsystem in Linux. It became fairly popular and got extended many times over these past years. Since it happens to have probes we can leverage, we are going to use it!

Installing Perf

As you can see, installing Perf is relatively simple.

# yum install perf

Start perf in a separate terminal window. I’m only going to run it for a minute but I could run it in screen for a longer period of time.

[root@centos7 ~]# perf record -a -e syscalls:sys_enter_kill sleep 60

In a separate terminal window send your test and obtain the results via “perf script”:

[root@centos7 ~]# echo $$
11380
[root@centos7 ~]# pidof mysqld
1984
[root@centos7 ~]# kill -1 1984
[root@centos7 ~]# perf script
# ========
# captured on: Thu Feb 26 14:25:02 2015
# hostname : centos7.local
# os release : 3.10.0-123.20.1.el7.x86_64
# perf version : 3.10.0-123.20.1.el7.x86_64.debug
# arch : x86_64
# nrcpus online : 2
# nrcpus avail : 2
# cpudesc : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
# cpuid : GenuineIntel,6,70,1
# total memory : 1885464 kB
# cmdline : /usr/bin/perf record -a -e syscalls:sys_enter_kill sleep 60
# event : name = syscalls:sys_enter_kill, type = 2, config = 0x9b, config1 = 0x0, config2 = 0x0, excl_usr = 0, exc
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: software = 1, tracepoint = 2, breakpoint = 5
# ========
#
bash 11380 [000] 6689.348219: syscalls:sys_enter_kill: pid: 0x000007c0, sig: 0x00000001

As you can see in the above output, the process “bash” with pid 11380 sent a HUP signal (0x01) to pid 0x000007c0 (decimal: 1984). Thus, we found our culprit with this method as well.
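As a side note, the capture could also be narrowed at record time rather than in post-processing, using a tracepoint filter; a sketch, assuming the sig and pid fields exposed by the syscalls:sys_enter_kill tracepoint and our mysqld pid of 1984:

[root@centos7 ~]# perf record -a -e syscalls:sys_enter_kill --filter 'sig == 1 && pid == 1984' sleep 60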

Audit

You can read more about Audit in the Red Hat Security Guide.

Installing Audit

Depending on your OS installation, it may already be installed.

In case it is not, you can install it as follows:

[root@centos7 ~]# yum install audit

When you are done installing, start your trace and track 64-bit kill system calls that send HUP signals (signal ID 1):

[root@centos7]~# auditctl -l
No rules
[root@centos7]~# auditctl -a exit,always -F arch=b64 -S kill -F a1=1
[root@centos7]~# auditctl -l
LIST_RULES: exit,always arch=3221225534 (0xc000003e) a1=1 (0x1) syscall=kill
[root@centos7]~# auditctl -s
AUDIT_STATUS: enabled=1 flag=1 pid=7010 rate_limit=0 backlog_limit=320 lost=0 backlog=0
[root@centos7]~# pidof mysqld
1984
[root@centos7]~# kill -1 1984
[root@centos7]~# tail -2 /var/log/audit/audit.log
type=SYSCALL msg=audit(1425007202.384:682): arch=c000003e syscall=62 success=yes exit=0 a0=7c0 a1=1 a2=a a3=7c0 items=0 ppid=11380 pid=3319 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=1 comm="zsh" exe="/usr/bin/zsh" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
type=OBJ_PID msg=audit(1425007202.384:682): opid=1984 oauid=-1 ouid=995 oses=-1 obj=system_u:system_r:mysqld_t:s0 ocomm="mysqld"

As you can see from the above output, the results showed up nicely in the system audit.log. From the log it’s clear that I sent my SIGHUP to mysqld (pid 1984, “opid” field) from zsh (see the command name in the “comm” field) via the 64-bit kill syscall. Thus, mischief managed, once again!
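Incidentally, rather than tailing the raw log you can query it with ausearch, and tagging the rule with a key makes that easier; a sketch, where the key name mysqld_hup is arbitrary:

[root@centos7]~# auditctl -a exit,always -F arch=b64 -S kill -F a1=1 -k mysqld_hup
[root@centos7]~# ausearch -k mysqld_hup -i

The -i flag makes ausearch interpret numeric fields (uids, syscall numbers) into readable names.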

Summary

In this blog I presented three different tools to help you trace down the sources of signals. Each of the three tools has its own strengths. SystemTap is rich in features and really nicely scriptable. The additional features of auditd may make it appealing to deploy on your hosts. Perf is a great tool for CPU profiling and you might want to install it solely for that reason. On the other hand, your distribution might not have support compiled into its kernel or may make the setup harder for a given tool. In my experience most modern distributions support the tools discussed here, so the choice comes down to personal preference or convenience.

In case you were wondering, I often pick auditd because it is often already installed. SystemTap might be a bit more complicated to set up, but I would likely invest the extra time if my case were more complex. I primarily use perf for CPU tracing and tend to think of the other two tools before perf for tracing signals.

Hope you enjoyed reading! Happy [h/t]racking!

The post What stopped MySQL? Tracing back signals sent to MySQL appeared first on MySQL Performance Blog.



Shinguz: Rename MySQL Partition


Before I forget it and have to search for it again, here is a short note about how to rename a MySQL partition:

My dream:

ALTER TABLE history RENAME PARTITION p2015_kw10 INTO p2015_kw09;
In reality:

ALTER TABLE history REORGANIZE PARTITION p2015_kw10 INTO (
  PARTITION p2015_kw09 VALUES LESS THAN (UNIX_TIMESTAMP('2015-03-02 00:00:00'))
);

Caution: REORGANIZE PARTITION causes a full copy of the whole partition!

Hint: I assume it would be very easy for MySQL or MariaDB to make this DDL command an in-place operation...
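Either way, the resulting partition layout can be verified from the information schema; a quick check, assuming the history table lives in the current database:

SELECT PARTITION_NAME, PARTITION_DESCRIPTION, TABLE_ROWS
  FROM INFORMATION_SCHEMA.PARTITIONS
 WHERE TABLE_SCHEMA = DATABASE()
   AND TABLE_NAME = 'history';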

MySQL Partitioning was introduced in MySQL 5.1.



Changes in the MariaDB Foundation board and management

Year 2014 was an important year for the MariaDB project with the release of MariaDB 10.0. Adoption continued to grow, both organically and through MariaDB being included in both the Red Hat Enterprise Linux and SUSE Linux Enterprise Server distributions as the default database option. Ubuntu has provided MariaDB as an option since release 14.04. MariaDB also became available in many cloud services, e.g. as DBaaS in the Rackspace Cloud and in Cloud Foundry. Those are just a few highlights. There is of course a lot of other news from last year which has already been covered earlier.

If you’re interested in what the MariaDB Foundation worked on last year, Monty wrote a wrap-up on it. You can find it here.

In this article I want to focus on a few recent changes related to the MariaDB Foundation. On Monday this week, the MariaDB Foundation had its first board meeting of the year. In addition to all the general things on a board meeting agenda there were the following topics:

  • New memberships
  • Approval of new board member
  • Approval of new CEO

A Norwegian company called Visma has become a member of the MariaDB Foundation. Visma is a 6,000 person software company focused on finance and accounting software and services.

As part of Visma becoming a member it was also suggested that Visma gets a representative on the board. It would be hard to find a more suitable person for the job than Espen Håkonsen, CIO of Visma and Managing Director of Visma IT & Communications.

The MariaDB Foundation has been looking for a new CEO since Simon Phipps left last fall. Late last year discussions started with Otto Kekäläinen, and he became interested. Otto is an entrepreneur in the open source business in Finland. He is the CEO of the open source consulting company Seravo. In addition he has several important open source roles in Finland and Europe. Otto is Finland’s representative in the Free Software Foundation Europe and in the steering group of COSS, the Finnish Centre for Open Systems and Solutions, the umbrella open source association in Finland. Otto will also serve as the Secretary of the board.

These changes in the board and management of the MariaDB Foundation were approved in the board meeting. I’m delighted to have Espen and Otto join the MariaDB Foundation. They bring a lot of new experience and ideas to the foundation. Welcome!

Chairman of the board
Rasmus Johansson

 



mysqlfailover – Installation and Operations tutorial

Configuration

 

1) Download the utilities. The Python connector is a prerequisite and needs to be installed along with the MySQL Utilities RPM.

wget http://dev.mysql.com/get/Downloads/Connector-Python/mysql-connector-python-2.0.2-1.el6.noarch.rpm

wget http://dev.mysql.com/get/Downloads/MySQLGUITools/mysql-utilities-1.5.3-1.el6.noarch.rpm

 

2) Install the RPMs. Running “mysqlfailover --help” will confirm the utilities are installed.

rpm -ivh mysql-connector-python-2.0.2-1.el6.noarch.rpm

rpm -ivh mysql-utilities-1.5.3-1.el6.noarch.rpm

 

3) Set GTID mode to ON. GTID mode is a prerequisite for mysqlfailover. GTIDs track transactions for replication instead of tracking binary log file and position. Add the following to my.cnf:

log-bin=1

gtid_mode=ON

enforce_gtid_consistency=true

log-slave-updates=1
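A quick sanity check that the settings took effect on master and slaves (all four are standard server variables):

mysql> SHOW GLOBAL VARIABLES WHERE Variable_name IN
    -> ('gtid_mode', 'enforce_gtid_consistency', 'log_slave_updates', 'log_bin');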

4) Set up replication using a CHANGE MASTER TO command like the one below. Notice the difference in the CHANGE MASTER TO statement: we are not specifying the binary log file and position.

CHANGE MASTER TO
  MASTER_HOST = '104.xxx.xx.xx',
  MASTER_PORT = 3306,
  MASTER_USER = 'slaveuser',
  MASTER_PASSWORD = 'slavepass',
  MASTER_AUTO_POSITION = 1;
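After the CHANGE MASTER TO, start the slave and confirm that auto-positioning is in effect; look for Auto_Position: 1 and both replication threads running:

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G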

 

5) Set up the report host. The report host has to be configured on the slave. You can validate this configuration by issuing “SHOW SLAVE HOSTS” on the master server.

report-host is the variable that needs to be set in my.cnf on the slave,

e.g. report-host=100.xx.xx.x
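Once the slave has been restarted with report-host set, it should show up on the master; illustrative output shape (columns abbreviated, values will differ):

mysql> SHOW SLAVE HOSTS;
+-----------+-------------+------+-----------+
| Server_id | Host        | Port | Master_id |
+-----------+-------------+------+-----------+
|         2 | 100.xx.xx.x | 3306 |         1 |
+-----------+-------------+------+-----------+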

 

6) master-info-repository has to be set to TABLE on the slave. Make the following changes in the slave configuration:

master-info-repository=TABLE

log-slave-updates=1

 

Operations

 

1)    Checking health: The command below checks the status of the replication servers. It connects to all slaves and lists the health of each server. If any slave is missing from the output, investigate and fix it before relying on failover.

mysqlfailover --master=root:sairam@localhost:3306 --discover-slaves-login=root:sairam health

 

2)    Testing failover: Run the command below and kill mysqld on the master server. The status should update automatically and the failover should start. Once the failover is successful, the old master can be reconfigured as a slave manually.

 

mysqlfailover --master=root:sairam@localhost:3306 --discover-slaves-login=root:sairam --failover-mode=auto --force

 

3)    Running as a daemon: The command below runs mysqlfailover in the background as a daemon, where it will initiate the failover when needed.

 

mysqlfailover --master=root:sairam@localhost:3306 --discover-slaves-login=root:sairam --failover-mode=auto --force --daemon start

 

4)    Post-failover operations: You can use the --exec-after option to execute post-failover actions. For example, if Heartbeat is installed you can move a virtual IP as a post-failover action, as sketched below.
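A minimal sketch of such a hook, under the assumption that the new master should take over a virtual IP; the VIP, interface and script path here are made up:

#!/bin/bash
# /usr/local/bin/post_failover.sh - hypothetical --exec-after hook for mysqlfailover
# Brings up an assumed virtual IP on the newly promoted master.
VIP="10.0.0.100/24"   # assumed virtual IP
IFACE="eth0"          # assumed network interface
ip addr add "$VIP" dev "$IFACE" || exit 1
arping -c 3 -A -I "$IFACE" "${VIP%/*}"   # gratuitous ARP so peers learn the new owner

It would be wired in with an extra option, e.g. --exec-after=/usr/local/bin/post_failover.sh appended to the failover command above.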

 

 



The design of the SCAN algorithm within one LDM instance

As part of optimising scan execution by about 50% in MySQL Cluster 7.4
we made an effort to also write down a description of the scan protocol
locally within the LDM threads. This is documented in the source code of
MySQL Cluster 7.4 and here is an excerpt from the source code.

DBLQH controls the execution of scans on tables on behalf of DBTC and
DBSPJ. Here follows a signal overview of how a scan is performed within
one LDM instance. For a description of the global scan protocol
see DbtcMain.cpp as a comment before execSCAN_TABREQ.

DBLQH only controls execution of a scan towards one partition of a
table. DBTC/DBSPJ is responsible for execution of scans toward the
entire table and ensuring that the API sees a consistent view of the
table.

There are currently four types of scans implemented in one LDM
instance:

Full table scan using hash index. This is implemented in DBACC.
Full table scan using row by row. This is implemented in DBTUP.
Full table scan using row by row in disk order. This is implemented in
DBTUP.
Index scan using one or several ranges. This is implemented in DBTUX.

DBLQH controls execution of one partition scan. Depending on the scan
type, DBACC/DBTUP/DBTUX is responsible for getting the row references
to the tuples scanned. DBTUP is responsible for reading those rows and
finally DBACC is responsible for any locking of rows required as part
of the scan.

Each scan is controlled by an interpreted program created by the API
and transported down to DBTUP. This program is sent as part of the
SCAN_FRAGREQ signal and passed to DBTUP in the STORED_PROCREQ signal.
This program is applied on each row reference passed to DBTUP by
execution of the execTUPKEYREQ signal.

In index ranges one or more ranges is sent in the keyinfo part of the
SCAN_FRAGREQ. This range information is sent to DBTUX one range at a
time. Actually with multiple ranges, DBLQH will treat each range as a
separate scan towards the other blocks, so a scan will be started and
closed towards DBACC/DBTUP/DBTUX for each range involved.

As an optimisation all signals locally in one LDM instance have been
converted to direct signals.
The following signals are used as part of the scan of one partition.
ACC_SCANREQ:
  This signal initialises an operation record in DBACC/DBTUP/DBTUX for
  scan of one range or a full partition. Always sent as a direct signal
  and returned immediately through signal object on return.

STORED_PROCREQ:
  This signal stores the interpreted program used to read each tuple
  as part of the scan. The same signal is also used to deallocate the
  interpreted program when the entire scan of all ranges has been
  completed. Always sent as a direct signal and returned immediately
  through signal object on return.

ACC_LOCKREQ:
  Certain scans require a lock on the row before the row is read, this
  signal acquires such a lock. Always sent as a direct signal. Return
  signal not always sent immediately.

ACCKEYCONF:
  Signal returned when the lock has been acquired, the signal is
  normally sent directly when the row is not locked, but for a locked
  row the signal can be sent even a second or more later. When sent the
  signal is sent as a direct signal.

ACCKEYREF:
  Signal returned when acquiring lock failed, e.g. due to record deleted
  while waiting for it.

ACC_ABORTCONF:
  Signal returned after aborting a scan using an asynchronous message to
  ensure that all asynchronous messages are delivered since setting the
  scan state as aborted.

NEXT_SCANREQ:
  This signal is used with different meaning:
  ZSCAN_NEXT:
    Get the next row reference to read, returned in NEXT_SCANCONF signal.
  ZSCAN_NEXT_COMMIT:
    Get the next row reference to read AND unlock the specified row.
    Returned in NEXT_SCANCONF signal.
  ZSCAN_COMMIT:
    Unlock the specified row. Return signal is simply returned when
    returning from call to execNEXT_SCANREQ.
  ZSCAN_CLOSE:
    Close the scan in DBACC/DBTUP/DBTUX.

  When sent as ZSCAN_COMMIT and ZSCAN_CLOSE it is always sent as a direct
  signal. Otherwise it is sent as direct or asynchronous signal dependent
  on the value of the scan_direct_count variable in the DBLQH scan
  record. The scan_direct_count variable ensures that we keep the number
  of direct signals sent bounded.

NEXT_SCANCONF:
  Return signal to NEXT_SCANREQ containing row reference to read or
  indication of close completed. Always sent as a direct signal.

TUPKEYREQ:
  This signal does the actual read of the row and sends the read row data
  directly to the API using the TRANSID_AI signal. This signal is always
  sent as a direct signal.

ACC_CHECK_SCAN:
  Continue scanning from specified place. Used by DBACC/DBTUP/DBTUX as an
  internal signal as part of the scan. This signal can be sent both as
  an asynchronous signal and as a direct signal.

SCAN_FRAGCONF:
  Return signal sent to DBTC/DBSPJ after completing a part of the scan,
  the signal carries a set of references to rows sent to the API. After
  sending this signal DBLQH will stop and wait for a SCAN_NEXTREQ to
  signal asking DBLQH to continue the scan of the partition. The number
  of rows scanned before sending SCAN_FRAGCONF is dependent on both
  configuration parameters and information in the SCAN_FRAGREQ signal.

  This signal is also sent when the scan is fully completed.
  This signal is normally a distributed signal, so it is always sent as
  an asynchronous signal.

SCAN_NEXTREQ:
  Request to continue scanning from DBTC/DBSPJ as requested to them from
  API.
  This signal is normally a distributed signal, so it is always sent as
  an asynchronous signal.

 Below follows an example signal diagram of a scan of one partition.

 DBLQH          ACC      TUP    ACC/TUP/TUX    API      DBTC
   |ACC_SCANREQ
   |----------------------------------------- >|
   |< -----------------------------------------|
   | STORED_PROCREQ
   |------------------------- >|
   |< -------------------------|
   | NEXT_SCANREQ (ZSCAN_NEXT)
   |-----------------------------------------  >|
   |                          prepareTUPKEYREQ
   |                          |< --------------|
   |                          |-------------- >|
   | NEXT_SCANCONF
   |< -----------------------------------------|
   | TUPKEYREQ
   |------------------------- >|  TRANSID_AI
   |                          |--------------------------------------- >|
   |< -------------------------|
   | NEXT_SCANREQ (ZSCAN_NEXT_COMMIT)
   |----------------------------------------- >|
   |                          prepareTUPKEYREQ
   |                          |< --------------|
   |                          |-------------- >|
   | NEXT_SCANCONF
   |< -----------------------------------------|
   | TUPKEYREQ
   |------------------------- >|  TRANSID_AI
   |                          |--------------------------------------- >|
   |< -------------------------|
   Repeat above for as many rows as required before returning to the
   API. The above TRANSID_AI isn't necessary, the interpreted program
   could perform selection and decide to not send a specific row since
   it doesn't match the condition checked by the interpreted program.
   |
   | SCAN_FRAGCONF
   |---------------------------------------------------------------- >|
   .... Some time for API and DBTC to process things.
   | SCAN_NEXTREQ
   |< ----------------------------------------------------------------|
   | NEXT_SCANREQ (ZSCAN_NEXT_COMMIT)
   |----------------------------------------- >|
   |                          prepareTUPKEYREQ
   |                          |< --------------|
   |                          |-------------- >|
   | NEXT_SCANCONF
   |< -----------------------------------------|
   | TUPKEYREQ
   |------------------------- >|  TRANSID_AI
   |                          |-------------------------- >|
   |< -------------------------|
   Repeat above again until time for next SCAN_FRAGCONF to be sent.
   When scan from NEXT_SCANCONF indicates there are no more tuples to
   fetch one starts to close the scan.

   |
   | NEXT_SCANREQ (ZSCAN_NEXT_COMMIT)
   |----------------------------------------- >|
   | NEXT_SCANCONF(no more tuples)
   |< -----------------------------------------|
   | NEXT_SCANREQ (ZSCAN_CLOSE)
   |----------------------------------------- >|
   | NEXT_SCANCONF
   |< -----------------------------------------|
   | STORED_PROCREQ (delete interpreted program)
   |------------------------- >|
   |< -------------------------|
   | SCAN_FRAGCONF (close flag set)
   |---------------------------------------------------------------- >|
   Now the scan is completed.

   Now a number of variations on the above signal diagrams:
   Scan with locking:
   In this we use the flag ZSCAN_NEXT all the time and never
   ZSCAN_NEXT_COMMIT, we handle things a bit differently instead when
   receiving SCAN_NEXTREQ where we perform a signal diagram like this:

   | NEXT_SCANREQ (ZSCAN_COMMIT)
   |----------------------------------------- >|
   |< -----------------------------------------|
   This is repeated for each row sent to the API in the previous
   SCAN_FRAGCONF signal.

   If the application wants the row locked for a longer time it has had
   the chance to perform a key lookup operation that took over the lock
   such that even when we unlock the scan lock, the transaction still
   retains a lock on the row.

   After each row scanned we check if we've reached a scan heartbeat
   timeout. In case we have we send a SCAN_HBREP signal to DBTC/DBSPJ
   to inform about that we're still actively scanning even though no
   result rows have been sent. Remember here that a scan in DBLQH can
   potentially scan billions of rows while only returning very few to
   the API. Thus we can scan for an extended time without returning to
   the API. This is handled by the method check_send_scan_hb_rep.

   Already from returning from ACC_SCANREQ we can discover that the
   partition (== fragment) is empty and go immediately to the close down code.
   For index scans we will send TUX_BOUND_INFO after ACC_SCANREQ and
   before sending STORED_PROCREQ to DBTUX. This will provide one range
   to DBTUX for scanning, if multiple ranges are to be scanned we
   startup a new scan as if it was a new SCAN_FRAGREQ received, but we
   don't need to send STORED_PROCREQ since the same interpreted program
   will be used. We will however send ACC_SCANREQ and TUX_BOUND_INFO
   also for this new range.

  There are various reasons for temporarily stopping a scan; this could
  be lack of operation records, holding too many row locks, or one could
  end up in this situation after waiting for a row lock.

  To restart the scan again after any type of temporary stop one sends
  the signal ACC_CHECK_SCAN either as direct or as an asynchronous signal
  to DBACC/DBTUP/DBTUX. This signal is sent from many different places in
  DBLQH, DBACC, DBTUP and DBTUX. It is always sent as part of NEXT_SCANREQ
  processing.

  When executing ACC_CHECK_SCAN one can flag to DBACC/DBTUP/DBTUX
  that one should check for a 10 ms delay with the flag ZCHECK_LCP_STOP. In
  previous versions this was also related to local checkpoints; this is no longer
  the case. Now it's only related to situations where it is required to
  perform an extra wait such that resources become available again.

  DBTUP and DBTUX send the signal CHECK_LCP_STOP to DBLQH in a number of
  situations, among other things when a locked key has been encountered.
  When the ACCKEYCONF signal then is received, indicating that the lock was
  acquired, DBLQH will still wait for the CHECK_LCP_STOP handling to return
  after its 10ms delay. This is on the TODO-list to fix to ensure that we can
  proceed with these locked rows immediately after delivery. As it is now
  we can get up to 10ms delay each time we encounter a locked row.

Handling hierarchy and traversing social networks in MySQL with OQGraph

From time to time we detect query patterns that are not well fitted to the BTree+ structures provided by InnoDB. One such situation is when you need to traverse a hierarchy (tree) or graph structure with many nodes. Specialist databases such as Neo4J exist for this. However, there is a simple solution in the form of OQGraph, which is distributed with MariaDB and is documented here.


The OQGRAPH engine is based on an original idea by Open Query founder Arjen Lentz, and was developed in-house with Antony Curtis at Open Query.

A good, simple example of how OQGraph can be applied is the friend-of-a-friend traversal, where you want to determine the path between two people in a social network. I’ve used MariaDB, installing OQGraph as a plugin.

INSTALL SONAME 'ha_oqgraph';
create database test;
use test;

First I’ll create a person table to hold a bunch of people from my facebook account, a friendship table to represent the various connections that exist between those people, and lastly the OQGraph table which is what we will query in order to find the relationships.

CREATE TABLE `person` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` char(12) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=17 DEFAULT CHARSET=latin1;

CREATE TABLE `friendship` (
  `person` int(10) unsigned NOT NULL,
  `friend` int(10) unsigned NOT NULL,
  PRIMARY KEY (`person`,`friend`),
  KEY `idx_friend_person` (`friend`,`person`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `friendship_graph` (
  `latch` varchar(32) DEFAULT NULL,
  `origid` bigint(20) unsigned DEFAULT NULL,
  `destid` bigint(20) unsigned DEFAULT NULL,
  `weight` double DEFAULT NULL,
  `seq` bigint(20) unsigned DEFAULT NULL,
  `linkid` bigint(20) unsigned DEFAULT NULL,
  KEY `latch` (`latch`,`origid`,`destid`) USING HASH,
  KEY `latch_2` (`latch`,`destid`,`origid`) USING HASH
) ENGINE=OQGRAPH DEFAULT CHARSET=latin1
  `data_table`=friendship `origid`=person `destid`=friend;

In the CREATE statement for the OQGraph table, we pass the origin table (data_table) and fields so it understands where to source the data.

The person table simply contains names and IDs.

+----+----------+
| id | name     |
+----+----------+
|  1 | ewen     |
|  2 | piotr    |
|  3 | mixa     |
|  4 | maciek   |
|  5 | kenny    |
|  6 | lefred   |
|  7 | peter    |
|  8 | celeste  |
|  9 | stephen  |
| 10 | carolina |
| 11 | mark     |
| 12 | harriet  |
| 13 | dani     |
| 14 | juanjo   |
| 15 | ander    |
| 16 | ana      |
+----+----------+

And the friendship table was populated with paired ID values to represent each connection. For example, here we record that Ander knows Ana, and Ana knows Ander.

insert into friendship values (16,15);
insert into friendship values (15,16);
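As an aside, if only one direction of each friendship has been recorded, the reverse edges can be generated in a single statement; a convenience sketch, not part of the original setup (the primary key makes the IGNORE safe against duplicates):

INSERT IGNORE INTO friendship (person, friend)
SELECT friend, person FROM friendship;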

Now we can find the shortest path between two nodes (people).

MariaDB [test]> select p.name from friendship_graph fg join person p on (fg.linkid=p.id) where fg.latch = '1' and origid = 1 and destid = 2;
+-------+
| name  |
+-------+
| ewen  |
| piotr |
+-------+

Since Piotr and I know each other directly, there is just a straight link between the two of us. What about something less direct?

MariaDB [test]> select p.name from friendship_graph fg join person p on (fg.linkid=p.id) where fg.latch = '1' and origid = 8 and destid = 2;
+---------+
| name    |
+---------+
| celeste |
| ewen    |
| piotr   |
+---------+

In this case Celeste knows me, and since I know Piotr we have linked the two nodes (people) via a third node. Can this follow a longer chain? Can we traverse more than one level of abstraction?

MariaDB [test]> select p.name from friendship_graph fg join person p on (fg.linkid=p.id) where fg.latch = '1' and origid = 8 and destid = 16;
+---------+
| name    |
+---------+
| celeste |
| ewen    |
| juanjo  |
| ander   |
| ana     |
+---------+

Again: Celeste knows Ewen, who knows Juanjo, who in turn knows Ander, and Ander is friends with Ana.

What is the “latch” field in the CREATE TABLE statement?

The latch field is the mechanism for communicating with the OQGraph engine; it lets us choose the algorithm to use.

Summary of Implemented latch commands

Latch              | Alternative | Additional WHERE clause fields | Graph operation
-------------------+-------------+--------------------------------+-----------------------------------------------------------
NULL (unspecified) |             | (none)                         | List original data
'' (empty string)  | 0           | (none extra)                   | List all vertices in linkid column
'' (empty string)  | 0           | origid                         | List all first hop vertices from origid in linkid column
dijkstras          | 1           | origid, destid                 | Find shortest path using Dijkstra's algorithm between origid and destid, with traversed vertex ids in linkid column
dijkstras          | 1           | origid                         | Find all vertices reachable from origid, listed in linkid column, and report sum of weights of vertices on path to given vertex in weight
dijkstras          | 1           | destid                         | Find all vertices reachable from destid, listed in linkid column, and report sum of weights of vertices on path to given vertex in weight
breadth_first      | 2           | origid                         | List vertices reachable from origid in linkid column
breadth_first      | 2           | destid                         | List vertices from which a path can be found to destid in linkid column
breadth_first      | 2           | origid, destid                 | Visit all vertices possible between origid and destid, report in linkid column
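For example, latch '2' (breadth_first) answers a different question from the shortest-path queries above: everyone reachable from a given person. A sketch against the same tables, listing all people reachable from ewen (id 1):

MariaDB [test]> select p.name from friendship_graph fg join person p on (fg.linkid=p.id) where fg.latch = '2' and origid = 1;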

 

The MariaDB documentation on OQGraph can be found on the MariaDB knowledge base.

To solve graph-type queries natively in MySQL, you need to consider both the schema model to use, such as nested sets or adjacency lists, and in some cases write stored procedures to provide the necessary recursion. OQGraph presents a simple solution to a complex problem without introducing additional complications to the stack and without the need to develop sophisticated SQL. Facebook has created its own solution called TAO, which interacts with MySQL but to my knowledge is not currently open source.



MariaDB 10.1.3 Overview and Highlights

MariaDB 10.1.3 was recently released, and is available for download here:

https://downloads.mariadb.org/mariadb/10.1.3/

This is the 1st beta, and 4th overall, release of MariaDB 10.1, so there are a lot of changes, new functionality, changed defaults, and many bugs fixed (I counted 420, versus 117 in 10.1.2 and 637 in 10.1.1, fwiw).

Since it’s beta, I’ll only cover the major changes and additions, and omit covering general bug fixes (feel free to browse them all here).

To me, these are the highlights of the new features:

Of course it goes without saying that you should not use this for production systems, as it is only the 1st beta release. However, I definitely recommend installing it on a test server and giving it a go: test-drive the new features, throw some load at it, try to break it, and so forth.

You can read more about the 10.1.3 release here:

https://mariadb.com/kb/en/mariadb-1013-release-notes/

And if interested, you can review the full list of changes in 10.1.3 (changelogs) here:

https://mariadb.com/kb/en/mariadb-1013-changelog/

Hope this helps.

 



MySQL 5.6.23 Overview and Highlights

MySQL 5.6.23 was recently released (it is the latest MySQL 5.6, is GA), and is available for download here.

For this release, there are 1 “Security Note”, 3 “Functionality Changed” items, and 5 “Compilation Notes”, all benign, but let me address the notable ones:

  1. Security Note: The linked OpenSSL library for the MySQL Commercial Server has been updated from version 1.0.1j to version 1.0.1k. Issues fixed in the new version are described at http://www.openssl.org/news/vulnerabilities.html.
  2. Functionality Changed: Support for the SSL 2.0 and SSL 3.0 protocols has been disabled because they provide weak encryption. (Bug #19820550, Bug #19921150)
  3. Functionality Changed: yaSSL was upgraded to version 2.3.7. (Bug #19695101, Bug #20201864)
  4. Functionality Changed: The valid date range of the SSL certificates in mysql-test/std_data has been extended to the year 2029. (Bug #18366947)

In addition to those, there were 37 other bug fixes:

  • 13 InnoDB
  • 5 Replication
  • 18 Miscellaneous
  • 1 Partitioning

The highlights for me are the Partitioning bug, 1 of the Replication bugs, and 8 of the InnoDB bugs, as 1 was a regression bug (crashing/corruption) and the others include bugs that raise invalid assertions, halt the server, break replication, and so forth, though all in all, I wouldn’t say any of these are very common or require immediate attention:

  1. InnoDB: If a DROP DATABASE statement failed on the master, mismatched tables could be left on the slave, breaking replication. This was caused by the DROP TABLE statement being binary logged if at least one table was deleted during the DROP DATABASE operation. The fix ensures that in such a situation the DROP TABLE statement is binary logged with the IF EXISTS option. (Bug #74890, Bug #20041860)
  2. InnoDB: A tablespace export operation set the purge state to PURGE_STATE_STOP but the purge thread did not check the purge state until the current purge operation was completed. In the case of a large history list, the tablespace export operation was delayed, waiting for the current purge operation to finish. The purge state is now checked with every purge batch. (Bug #20266847, Bug #75298)
  3. InnoDB: An ALTER TABLE … ADD INDEX operation raised an assertion due to assertion code that did not allow an online index status of ONLINE_INDEX_ABORTED_DROPPED. The assertion code has been relaxed. (Bug #20198726)
  4. InnoDB: DML operations on a table with full-text search indexes raised an invalid assertion. (Bug #19905246) References: This bug is a regression of Bug #19314480.
  5. InnoDB: A multiple-table delete operation caused the server to halt. (Bug #19815702)
  6. InnoDB: A FLUSH TABLES operation raised an assertion. (Bug #19803418)
  7. InnoDB: With change buffering enabled, a buffered sequence of operations that should not have been buffered resulted in an Unable to purge a record error. (Bug #19528825, Bug #73767)
  8. InnoDB: A slow shutdown (innodb_fast_shutdown=0) after crash recovery raised an assertion. Slow shutdown did not wait for background rollback operations to finish before proceeding. (Bug #16862810)
  9. Partitioning: A failed ALTER TABLE … TRUNCATE PARTITION statement or a failed TRUNCATE TABLE statement against a partitioned table sometimes left inconsistent metadata in the table cache; subsequent SQL statements reusing this metadata failed, and could in some cases also lead to a failure of the server. (Bug #74292, Bug #19786861)
  10. Replication: When using SHOW SLAVE STATUS to monitor replication performance, Seconds_Behind_Master sometimes displayed unexpected lag behind the master. This was caused by Previous_gtids log events being written to the slave’s relay log with a timestamp behind the master, and then being used to calculate the Seconds_Behind_Master. This fix ensures that events generated on the slave that are added to the relay log are not used when calculating Seconds_Behind_Master. (Bug #72376, Bug #18622657)

Conclusions:

So while there were no major changes, those 8 InnoDB bugs, especially in total, are of concern, so I’d consider upgrading if I were running InnoDB on a prior version of 5.6.

And with the yaSSL updates, if you use SSL connections, you may want to consider upgrading as well.

The full 5.6.23 changelogs can be viewed here (which has more details about all of the bugs listed above):

http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-23.html

Hope this helps.

 



MariaDB 5.5.42 Overview and Highlights

MariaDB 5.5.42 was recently released (it is the latest MariaDB 5.5), and is available for download here:

https://downloads.mariadb.org/mariadb/5.5.42/

This is a maintenance release, so there were not too many changes at all, and only a handful of changes (enhancements) I felt were noteworthy:

  • A new version of the Audit Plugin, 1.2, is included, with the following new features:
    • In the audit log passwords are now masked, i.e. the password characters are replaced with asterisks.
    • It’s now possible to filter logging to include only DDL (CREATE, ALTER, etc.) or DML (INSERT, UPDATE, etc.) statements.
    • For more information please refer to the About the MariaDB Audit Plugin page. The plugin is disabled by default.
  • With this release we introduce a low-level client API. Applications linked with the libmysqlclient client library can use these functions to read and parse raw protocol packets:
    • unsigned long mysql_net_read_packet(MYSQL *mysql);
    • unsigned long mysql_net_field_length(unsigned char **packet);
  • Includes all bugfixes and updates from MySQL 5.5.42 (MySQL 5.5.42 Overview and Highlights)
  • TokuDB upgraded to 7.5.5

So there are no real crucial fixes requiring an upgrade, however, if you’re running the audit plugin, or TokuDB, or you want the benefits of the new fixes in general, then you should consider an upgrade.

If interested, the official MariaDB 5.5.42 release notes are here:

https://mariadb.com/kb/en/mariadb/development/release-notes/mariadb-5542-release-notes/

And the full list of fixed bugs and changes in MariaDB 5.5.42 can be found here:

https://mariadb.com/kb/en/mariadb/development/changelogs/mariadb-5542-changelog/

Hope this helps.

 



MySQL 5.5.42 Overview and Highlights

MySQL 5.5.42 was recently released (it is the latest MySQL 5.5, is GA), and is available for download here:

http://dev.mysql.com/downloads/mysql/5.5.html

This release, similar to the last 5.5 release, is mostly uneventful.

There were only 3 “Functionality Added or Changed” bugs this time, all related to SSL, and only 9 bugs overall fixed.

Out of the 9 bugs, there were 2 InnoDB bugs, and 1 replication bug, all of which seemed rather minor or obscure. Here are the ones worth noting:

  • Support for the SSL 2.0 and SSL 3.0 protocols has been disabled because they provide weak encryption. (Bug #19820550, Bug #19921150)
  • yaSSL was upgraded to version 2.3.7. (Bug #19695101, Bug #20201864)
  • The valid date range of the SSL certificates in mysql-test/std_data has been extended to the year 2029. (Bug #18366947)
  • InnoDB: An error occurred when the push_warning_printf function was invoked during server recovery. This function was previously used to print a warning message to the client. Also, current_thd was NULL when the server was restarted. (Bug #20144839)
  • InnoDB: A multiple-table delete operation caused the server to halt. (Bug #19815702)
  • InnoDB: If a DROP DATABASE statement failed on the master, mismatched tables could be left on the slave, breaking replication. This was caused by the DROP TABLE statement being binary logged if at least one table was deleted during the DROP DATABASE operation. The fix ensures that in such a situation the DROP TABLE statement is binary logged with the IF EXISTS option. (Bug #74890, Bug #20041860)

I don’t think I’d call any of these critical, but similar to the last release, if you depend on yaSSL, you may want to consider upgrading to ensure you’re using the latest yaSSL.

For reference, the full 5.5.42 changelog can be viewed here:

http://dev.mysql.com/doc/relnotes/mysql/5.5/en/news-5-5-42.html

Hope this helps.

 



How to test if CVE-2015-0204 FREAK SSL security flaw affects you

The CVE-2015-0204 FREAK SSL vulnerability abuses intentionally weak “EXPORT” ciphers which could be used to perform a transparent Man In The Middle attack. (We seem to be continually bombarded with not only SSL vulnerabilities but also the need to name vulnerabilities with increasingly odd names.)

Is your server vulnerable?

This can be tested using the following GIST

If the result is 0, the server is not providing the EXPORT ciphers, and as such is not vulnerable.
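The gist itself is not reproduced here, but an equivalent minimal check is to offer the server only EXPORT-grade ciphers and see whether the handshake completes; a sketch with placeholder host and port:

echo | openssl s_client -connect example.com:443 -cipher EXPORT 2>/dev/null | grep -c 'BEGIN CERTIFICATE'

A failed handshake returns no certificate and the count is 0; a count of 1 means the server agreed to an EXPORT cipher.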

Is your client vulnerable?

Point your client to https://oneiroi.co.uk:4443/test. If this returns “Vulnerable” then the client is vulnerable; if you get a connection error, your client should not be vulnerable. For example:

root@host:/tmp$ openssl version
OpenSSL 1.0.1e 11 Feb 2013
root@host:/tmp$ curl https://oneiroi.co.uk:4443/test -k
Vulnerable

root@host:/tmp$ openssl s_client -connect oneiroi.co.uk:4443
CONNECTED(00000003)
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify return:1

Certificate chain
0 s:/C=XX/L=Default City/O=Default Company Ltd
i:/C=XX/L=Default City/O=Default Company Ltd

Server certificate
-----BEGIN CERTIFICATE-----
MIIDVzCCAj+gAwIBAgIJANvTn7jl

[root@3654e4df1cc2 bin]# curl https://oneiroi.co.uk:4443/test -k
curl: (35) Cannot communicate securely with peer: no common encryption algorithm(s).
[root@3654e4df1cc2 bin]# openssl s_client -connect oneiroi.co.uk:4443
CONNECTED(00000003)
139942442694560:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:744:

In short, a vulnerable client will complete the connection, and a non-vulnerable client should present an SSL handshake failure error.

DIY

You can recreate this setup yourself


openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mycert.pem -out mycert.pem;
openssl s_server -cipher EXPORT -accept 4443 -cert mycert.pem -HTTP;

Is MySQL affected ?

Some of the code per the POODLE Blog post can be re-purposed here.


mysql -Bse "SHOW STATUS LIKE 'Ssl_cipher_list'" | sed 's/:/\n/g' | grep EXP | wc -l

A result of 0 means the MySQL instance does not support any of the EXPORT ciphers, and thus should not be vulnerable to this attack.

How about other clients?

Most clients link to another library for SSL purposes; however, there are examples where this is not the case. Take for example Go's crypto/tls (http://golang.org/pkg/crypto/tls/), which partially implements the TLS 1.2 RFC.

The following test code, however, shows that Go does not appear to be affected.


package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
)

func main() {
    tr := &http.Transport{
        TLSClientConfig:    &tls.Config{},
        DisableCompression: true,
    }
    client := &http.Client{Transport: tr}
    resp, err := client.Get("https://oneiroi.co.uk:4443/test")
    fmt.Println(err)
    fmt.Println(resp)
}

Get https://oneiroi.co.uk:4443/test: remote error: handshake failure


The post How to test if CVE-2015-0204 FREAK SSL security flaw affects you appeared first on MySQL Performance Blog.



Using Master-Master for failover? :(

Using master-master for MySQL? To be frank, we need to get rid of that architecture. We'll skip the active-active setup and show why master-master, even for failover purposes, is the wrong decision.

So why does a DBA think master-master is good in a failover scenario?

  • The recovered node gets its data automatically.
  • You don't need to use a backup for recovery.

Please remember: MySQL Replication is async

Again: MySQL Replication is async. Even the so-called semi-sync replication!

So the following is quite likely.

See a nice master-master setup:

 activ           standby
+------+    c    +------+
|      |-------->|      |
|abcd  |         |ab    |
|      |         |      |
|      |<--------|      |
+------+         +------+

Oh my god the node went down:

  RIP             activ
+------+         +------+
|      |---||--->|      |
|abcd  |         |abc   |
|      |         |      |
|      |<--||----|      |
+------+         +------+

No problem, we've got master-master. After the takeover, the recovering node catches up. (In fact, the active node has one transaction more :( )

recovered         activ
+------+         +------+
|      |-------->|      |
|abcd  |         |abce  |
|      |    e    |      |
|      |<--------|      |
+------+         +------+

Great, the data is no longer in sync!

recovered         activ
+------+         +------+
|      |-------->|      |
|abcde |         |abce  |
|      |         |      |
|      |<--------|      |
+------+         +------+

In fact there is no need for master-master anyway. We've got GTIDs nowadays. Use simple replication. In a failover you can use GTIDs to check whether there are extra transactions on the recovering node.

If not, then simply set up replication and the node gets all the missing data.

But if there are extra transactions on the recovering node, you have to rebuild the node anyway.

FYI: This works with GTID@MariaDB and GTID@MySQL.
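As a sketch of that check, compare the executed GTID sets on both nodes before re-attaching the recovered one; the variable names differ between the two implementations:

-- MySQL 5.6+
SELECT @@GLOBAL.gtid_executed;
-- MariaDB 10+
SELECT @@GLOBAL.gtid_current_pos;

If the recovered node's set contains transactions the active node does not have, rebuild it; otherwise point it at the active node and let replication fill the gap.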

Welcome to the GTID era! \o/

Viel Spaß

Erkan :)



Steve ‘Woz’ Wozniak to speak at Percona Live MySQL Conference & Expo 2015

Apple’s Steve “Woz” Wozniak will talk at Percona Live 2015

I am thrilled to announce that Apple co-founder and Silicon Valley icon and philanthropist Steve Wozniak will participate in a moderated Q&A on creativity and innovation April 14 during this year’s Percona Live MySQL Conference and Expo in Santa Clara, California.

In addition to “The Woz,” as Steve is nicknamed, we have a great lineup of esteemed industry luminaries, with a broad range of talks and tutorials along with fun and informative networking events during the four-day conference (April 13-16).

Harrison Fisk of Facebook’s Database Performance Team will deliver a keynote titled “Polyglot Persistence @Facebook” exploring why Facebook has so many different data solutions deployed at scale and how the company decides to deploy a new one. He’ll also talk about how Facebook is able to manage all these solutions and what types of optimizations are possible across all their data systems.

‘MySQL 101’ program announced
I wrote about the new “MySQL 101” program earlier this week. Largely driven by the broad demand for (and scarcity of) MySQL expertise across many industries, Percona Live is adding dual tracks for both developers and system administrators that will provide a 2-day course on MySQL fundamentals.

The two days of practical training will include everything needed to handle day-to-day MySQL DBA tasks. The motto of this new program is, “You send us developers and admins, and we will send you back MySQL DBAs.” You can check out the full schedule for MySQL 101 here.

Lightning Talks and Birds of a Feather Sessions (BOFs)
In related news, today we also announced the schedules for the popular “Lightning Talks” and “Birds of a Feather Sessions” (BOFs).

Lightning Talks provide an opportunity for attendees to propose, explain, exhort, and rant on any MySQL-related topic for five minutes. Lightning Talks will take place Wednesday night (April 15) during the MySQL Community Networking Reception, which begins immediately following the breakout sessions.

This year’s topics include:

  • “Everything About MySQL Bugs” presented by Sveta Smirnova, senior principal technical support engineer, and Valerii Kravchuk, principal support engineer at Percona
  • “MySQL Galera Cluster, Percona XtraDB Cluster, and MariaDB Galera Cluster” presented by Jay Janssen, managing consultant at Percona
  • “Data Security – Emerging Legal Trends” presented by S. Keith Moulsdale, partner at Whiteford, Taylor & Preston
  • “WebScaleSQL Meeting/Hacking” presented by Steaphan Greene, software engineer at WebScaleSQL

BOF sessions enable attendees with interests in the same project or topic to enjoy some quality face time. The BOF sessions will take place Tuesday night, April 14, from 6:00 p.m. to 7:00 p.m. This year’s topics include:

  • “Working From Home – Fun, Facts and Scares” presented by Raghavendra Prabhu, product lead, Percona XtraDB Cluster at Percona
  • “The Future of MySQL Quality Assurance: Introducing pquery” presented by Roel Van de Paar, senior QA lead, and Ramesh Sivaraman, junior QA engineer at Percona

Community Reception
One of the most fun and compelling aspects of Percona Live is the networking, which provides the opportunity to make connections that can help enhance a career, facilitate a current project, or inspire new ideas. This year’s conference features the Community Networking Reception in the Expo Hall on Wednesday, April 15, from 5:30 p.m. to 7:00 p.m. The event will include the MySQL Community Awards and the Lightning Talks.

Conference Registration Promotions
Advanced pricing discounts are still available but will end on March 8. Register soon to take advantage of the best conference rates available. A Percona Live MySQL Conference & Expo ticket provides you with access to all OpenStack Live sessions. In addition, in the spirit of the new MySQL 101 track, the first 101 people who sign up for MySQL 101 using the “101” discount code will get their tickets for $101 – a $299 savings! Attendees of MySQL 101 will have full access to Percona Live keynotes and the Expo Hall.

See you next month!

The post Steve ‘Woz’ Wozniak to speak at Percona Live MySQL Conference & Expo 2015 appeared first on MySQL Performance Blog.



Resources for Highly Available Database Clusters: ClusterControl Release Webinar, Support for Postgres, New Website and More

Check Out Our Latest Technical Resources for MySQL, MariaDB, Postgres and MongoDB Clusters

Like every month, we have created new content and tools for you; here is a summary of what we’ve published. Please do check it out and let us know if you have any comments or feedback.

New Live Technical Webinar

Introducing the New ClusterControl with Live Demo on 24th of March

Johan Andersson, CTO at Severalnines, will walk us through the latest edition of ClusterControl in the form of a live demo, going over its new features including the recently announced support for Postgres.

Register here

 

Product Announcements & Resources

Management and Monitoring of Existing Postgres Servers

This week we announced support for PostgreSQL: ClusterControl can now be used to monitor PostgreSQL database metrics and queries, and to schedule backups. With this new addition, ClusterControl supports MySQL (with its main variations MariaDB and Percona XtraDB, single or clustered), MongoDB and PostgreSQL, so companies can manage all of these databases through one unified interface.

Find out more

ClusterControl Template for Zabbix

We also announced a ClusterControl Template for Zabbix, so Zabbix users can now get information about the status of their database clusters, backups and alarms. We have previously published integrations with other monitoring systems including Nagios and PagerDuty.

Find out more

 

Technical Webinar - Replay

A DevOps Guide to Database Infrastructure Automation for eCommerce

We were lucky enough to recently have Riaan Nolan of Foodpanda/Rocket Internet as our guest speaker for a session in which he shared his experience and advice on how to automate highly available eCommerce infrastructures.

If you’re interested in how to deal with the following topics (and more), then do check out the replay of this webinar:

  • eCommerce infrastructure challenges in 2014, including a sample workflow chart outlining: Puppet, GitHub, Capistrano, Nginx, PHP5-FPM / Ruby, Percona XtraDB, Couchbase, SOLR, GlusterFS
  • Provisioning of test/QA and highly available production environments across multi-datacenter and multi-cloud environments (AWS Cloud Formation)
  • Building and maintaining configuration management systems such as Puppet and Chef

Watch the replay and read the slides here

 

Technical Blogs

As many of you will know, GTIDs were introduced in MySQL 5.6 and brought some major changes to the way MySQL replication operates. Where one previously had to work with binary log file names and positions (and their limitations), that information is now all carried in the GTID. GTIDs are not flawless though; find out more in this blog post:

MySQL Replication and GTID-based failover - A Deep Dive into Errant Transactions
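
As a minimal sketch of the kind of check that deep dive covers (the server UUIDs and GTID ranges below are placeholders, not values from the post): an errant transaction is one executed directly on a slave but not on its master, so it carries the slave’s own server UUID, and MySQL 5.6’s GTID set functions make it easy to spot.

    -- Capture @@GLOBAL.gtid_executed on both master and slave, then subtract
    -- the master's set from the slave's; a non-empty result means the slave
    -- has errant transactions.
    SELECT GTID_SUBTRACT(
        '8a94f357-aab4-11df-86ab-c80aa9429562:1-12,bb5f8d78-cc12-11e4-a2a1-5c514fe0f4dd:1-3',  -- slave
        '8a94f357-aab4-11df-86ab-c80aa9429562:1-12'                                            -- master
    ) AS errant_gtids;

Here the result would be 'bb5f8d78-cc12-11e4-a2a1-5c514fe0f4dd:1-3': three transactions that exist only on the slave and would need to be dealt with before a failover.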

Severalnines Company News

The past twelve months have been quite an exciting time for us, and we decided to publish a status update on where we are today as a company and on what we’re looking forward to in the future. If you haven’t read it yet, do have a look at our Momentum Press Release, which we published recently.

Read the Momentum Press Release

And, last but not least, we also launched a brand new website last month! Most of you will have noticed the change already: we’re sporting a new look and have introduced a website with better ease of navigation and simpler paths to the information our users are looking for. Do let us know if you have any feedback on it!

We trust these resources are useful. If you have any questions on them or on related topics, please do contact us!

Your Severalnines Team

 


Percona XtraDB Cluster 5.6.22-25.8 is now available

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on March 5th, 2015. Binaries are available from the downloads area or from our software repositories.

Based on Percona Server 5.6.22-72.0, including all the bug fixes in it, on Galera Replicator 3.9, and on Codership wsrep API 25.8, Percona XtraDB Cluster 5.6.22-25.8 is now the current General Availability release. All of Percona’s software is open-source and free, and all the details of the release can be found in the 5.6.22-25.8 milestone at Launchpad.

Bugs Fixed:

  • XtraBackup SST wouldn’t stop when MySQL was SIGKILLed. This would prevent MySQL from initiating a new transfer, as port 4444 was already in use. Bug fixed #1380697.
  • wsrep_sst_xtrabackup-v2 script was causing innobackupex to print a false positive stack trace into the log. Bug fixed #1407599.
  • MyISAM DDL (CREATE/DROP) is no longer replicated when wsrep_replicate_myisam is OFF. Note that for older nodes in the cluster wsrep_replicate_myisam should still work, since the TOI decision (for MyISAM DDL) is made on the origin node. Mixing MyISAM and non-MyISAM tables in the same DDL statement is not recommended with wsrep_replicate_myisam OFF, since if any table in the list is MyISAM the whole DDL statement is not put under TOI (total order isolation) and hence is not replicated (a short illustrative sketch follows this list). Bug fixed #1402338.
  • The gcache.mem_size option has been deprecated. A warning will now be generated if the variable has a value different from 0. Bug fixed #1392408.
  • stderr of SST/innobackupex is logged to syslog with appropriate tags if sst-syslog is set in [sst] or if [mysqld_safe] has syslog set in my.cnf. This can be overridden by setting sst-syslog to -1 in [sst]. Bug fixed #1399134.
  • clustercheck can now check whether the node is PRIMARY or not, so that synced nodes which drop out of the PRIMARY component do not take any reads/writes. Bug fixed #1403566.
  • SST will now fail early if the xtrabackup_checkpoints file is missing on the joiner side. Bug fixed #1405985.
  • The socat utility was not properly terminated after a timeout. Bug fixed #1409710.
  • When started (without bootstrap), the node would hang if it couldn’t find a primary node. Bug fixed #1413258.
  • The 10-second timeout in the XtraBackup SST script was not enough for the joiner to delete existing files before it started the socat receiver on systems with a big datadir. Bug fixed #1413879.
  • A non-bootstrap node could crash while attempting to perform table cache operations, with the BF applier failing with an open_and_lock_tables warning. Bug fixed #1414635.
  • Percona XtraDB Cluster 5.6 would crash on ALTER TABLE / CREATE INDEX with Failing assertion: table->n_rec_locks == 0 error. Bug fixed #1282707.
  • Variable length arrays in WSREP code were causing debug builds (-DWITH_DEBUG=ON) to fail. Bug fixed #1409042.
  • A race condition between donor and joiner in the XtraBackup SST configuration has been fixed. It caused XtraBackup SST to fail when the joiner took longer to spawn the second listener for SST. Bug fixed #1405668.
  • Signal handling in mysqld has been fixed for SST processes. Bug fixed #1399175.
  • SST processes are now spawned with fork/exec instead of posix_spawn to allow for better cleanup of child processes in the event of non-graceful termination (SIGKILL, a crash, etc.). Bug fixed #1382797.
  • wsrep_local_cached_downto would underflow when the node on which it is queried had no writesets in gcache. Bug fixed #1262179.
  • A typo in wsrep_provider_options could cause an unhandled exception. Bug fixed #215.
  • Interrupted IST would result in HA_ERR_KEY_NOT_FOUND error in subsequent IST. Bug fixed #210.

Other bugs fixed: #1275814.
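
As a quick, hedged sketch of the wsrep_replicate_myisam behaviour noted above (generic MySQL statements; the table names are hypothetical):

    -- Check whether Galera replicates MyISAM DDL/DML on this node
    SHOW GLOBAL VARIABLES LIKE 'wsrep_replicate_myisam';
    -- With the variable OFF, avoid mixing engines in one DDL statement:
    -- if myisam_t is MyISAM and innodb_t is InnoDB, then
    --   DROP TABLE myisam_t, innodb_t;
    -- is not put under TOI and hence is not replicated to other nodes.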

Known Issue:

  • For those affected by crashes on the donor during SST due to backup locks (#1401133), please add the following to your my.cnf configuration file as a workaround to force FLUSH TABLES WITH READ LOCK:

    [sst]
    inno-backup-opts='--no-backup-locks'

    (NOTE: This workaround is available only if you’re using Percona XtraBackup 2.2.9 or newer.) Alternatively, you can set the environment variable FORCE_FTWRL to 1 (for passing environment variables, see the description of bug #1381492 in the previous release notes).

Release notes for Percona XtraDB Cluster 5.6.22-25.8 are available in our online documentation along with the installation and upgrade instructions.

Percona XtraDB Cluster code hosting has been moved to GitHub. The Bazaar branches will not be updated further.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

The post Percona XtraDB Cluster 5.6.22-25.8 is now available appeared first on MySQL Performance Blog.



MySQL Workbench 6.3.1 Beta has been released

The MySQL Workbench team is announcing availability of the first beta release of its upcoming major product update, MySQL Workbench 6.3.

The MySQL developer tools team announces the 6.3.1 beta as our first release for MySQL Workbench 6.3. This release contains a number of new and enhanced features which are feature complete but might still contain bugs, so use them with care.

For the full list of changes in this revision, visit http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-3.html

For detailed information about the new features, see What’s New in MySQL Workbench 6.3: http://dev.mysql.com/doc/workbench/en/wb-what-is-new-63.html

For discussion, join the MySQL Workbench Forums: http://forums.mysql.com/index.php?151

Download MySQL Workbench 6.3.1 beta now, for Windows, Mac OS X 10.7+, Oracle Linux 6 and 7, Fedora 20 and 21, Ubuntu 14.04 and 14.10, or sources, from: http://dev.mysql.com/downloads/tools/workbench/

Enjoy




Heartbeat Inclusion Protocol Handling in MySQL Cluster

The description below was added to the MySQL Cluster 7.4 source code and describes how new nodes are added into the heartbeat protocol at node startup.

The protocol to include our node in the heartbeat protocol starts when we call execCM_INFOCONF. We start by opening communication to all nodes in the cluster. When we start this protocol we don't know anything about which nodes are up and running, and we don't know which node is currently the president of the heartbeat protocol.

For us to be successfully included in the heartbeat protocol we need to be connected to all nodes currently in it. It is important to remember that QMGR (the source code module that controls the heartbeat handling) sees a node as alive if it is included in the heartbeat protocol. Higher-level notions of aliveness are handled primarily by the DBDIH block (DBDIH is responsible for the database level of distribution, such as which nodes have up-to-date replicas of a certain database fragment), but also to some extent by NDBCNTR (a source module that controls restart phases and is a layer on top of QMGR and below the database handling level).

The protocol starts with the new node sending CM_REGREQ to all nodes it is connected to. Only the president will respond to this message. We could have a situation where there currently isn't a president chosen; in this case an election is held whereby a new president is assigned. In the rest of this comment we assume that a president already exists.

If we are connected to the president, it will respond to our CM_REGREQ with CM_REGCONF. The CM_REGCONF contains the set of nodes currently included in the heartbeat protocol.

In parallel with sending CM_REGCONF, the president sends a CM_ADD(Prepare) message to all nodes included in the protocol.

When receiving CM_REGCONF, the new node sends CM_NODEINFOREQ with information about its binary version, its number of LDM workers, and the MySQL version of the binary.

Each node already included in the heartbeat protocol waits until it has received both the CM_ADD(Prepare) from the president and the CM_NODEINFOREQ from the starting node. When it has received those two messages it sends CM_ACKADD(Prepare) to the president and CM_NODEINFOCONF, carrying its own node information, to the starting node.

When the president has received CM_ACKADD(Prepare) from all nodes included in the heartbeat protocol, it sends CM_ADD(AddCommit) to all of those nodes.

When the nodes receive CM_ADD(AddCommit) from the president, they enable communication to the new node and immediately start sending heartbeats to it. They also include the new node in their view of the nodes included in the heartbeat protocol. Next they send CM_ACKADD(AddCommit) back to the president.

When the president has received CM_ACKADD(AddCommit) from all nodes included in the heartbeat protocol, it sends CM_ADD(CommitNew) to the starting node.

This is also the point where we report the node as included in the heartbeat protocol to DBDIH, since from here on the rest of the protocol is only about informing the new node of the outcome of the inclusion protocol. By the time we receive the response to this message, the new node may already have proceeded a bit into its restart.

After receiving CM_REGCONF, the starting node waits for all nodes included in the heartbeat protocol to send CM_NODEINFOCONF, and for the CM_ADD(CommitNew) from the president. When all of this has been received, the new node adds itself and all the nodes it has been informed about into its view of the nodes included in the heartbeat protocol, and enables communication to all other nodes included therein. Finally it sends CM_ACKADD(CommitNew) to the president.

When the president has received CM_ACKADD(CommitNew) from the starting node, the inclusion protocol is completed and the president is ready to receive a new node into the cluster.

It is the responsibility of the starting nodes to retry after a failed node inclusion; they will do so with a 3-second delay. This means that normally at most one node is added to the cluster every 3 seconds, so in a large cluster starting up (say, two dozen data nodes joining one at a time, i.e. over 70 seconds) this phase can add a little more than a minute of delay.

We try to depict the above in a diagram here as well:

New node           Nodes included in the heartbeat protocol     President
------------------------------------------------------------------------------------
----CM_REGREQ--------------------->>
----CM_REGREQ---------------------------------------------------------->

<----------------CM_REGCONF---------------------------------------------
                                  <<------CM_ADD(Prepare)---------------

-----CM_NODEINFOREQ--------------->>

Nodes included in the heartbeat protocol can receive CM_ADD(Prepare) and
CM_NODEINFOREQ in any order.

<<---CM_NODEINFOCONF--------------  --------CM_ACKADD(Prepare)--------->>

                                  <<-------CM_ADD(AddCommit)------------

Here the included nodes enable communication to the new node and start
sending heartbeats.

                                  ---------CM_ACKADD(AddCommit)------->>

Here we report the new node as included in the heartbeat protocol to
DBDIH on the president node.

<-----CM_ADD(CommitNew)--------------------------------------------------

Here the new node enables communication to the included nodes and starts
sending heartbeat messages.

-----CM_ACKADD(CommitNew)----------------------------------------------->

Here the president can complete the inclusion protocol and is ready to
receive new nodes into the heartbeat protocol.
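
As a hedged aside that is not part of the original source-code comment: on recent MySQL Cluster versions, each data node's view of the membership this protocol establishes, including which node it considers president, can be inspected from SQL via the ndbinfo schema (the exact column set may vary between versions):

    -- Which node does each data node consider president and arbitrator?
    SELECT node_id, president, arbitrator, arb_state
    FROM ndbinfo.membership;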

Analyzing Related Metrics With VividCortex

Last week we announced our new query and metric listing and detail pages, which provide deep drilldown into individual queries and metrics for exploration and analysis. Today I want to show you one of the new features included in the Metric detail page. We have used it in a variety of scenarios for customers and for our internal analysis. We’ll cover some of the use cases and success stories in future blog posts; here I’ll just give an overview of the capability.

To begin with, suppose you found a metric with an interesting shape and wanted to know something about it. This is very typical, by the way – you will see a bump or notch in a graph and wonder what it means, what else bumped or notched or behaved oddly around this time. Here’s our metric. In this case, CPU makes a good demo:

Look, ma! There’s a bump in that metric! Clicking anywhere on that metric leads to the details page, where we can see it up close and personal.

At the top, you see a bigger version of the sparkline that led you here, which is great for close inspection (reminder: we offer 1-second detail for all metrics).

Just below it is a “Find Related Metrics” section on the page. This shows all of the metrics that exist during the selected time period: query metrics, MySQL metrics, operating system metrics, hardware metrics, process metrics, and so on.

They’re clustered into groups that have similar shapes. Here we’re using the default, 40 clusters. You can change that to influence the quality of the results. We use K-Means clustering. It’s performed client-side in JavaScript and is fast and efficient, so you get an interactive, instant experience. (Many thanks to math, dataviz, and graphics genius Michael Holroyd for helping us select the algorithm.)

Each cluster of metrics is represented by a folder with the number of metrics it contains, and an average of the metrics inside it. The clusters are sorted by similarity to the metric at the top of the page, so similar-shaped clusters are first. Clicking on one of the clusters reveals the metrics inside it:

As you can probably guess, the metrics inside this cluster are mostly CPU metrics, but there’s also network and disk activity that is closely related to the CPU activity. Each of these metrics is also linked in turn to its own details page, so you can browse fluidly from metric to metric.

But similar-shaped metrics aren’t the whole story. Sometimes you don’t want to know what goes bump together in the night. You want to know what huddles down in fright when something else bumps. That’s easy to find, too. Scrolling down a bit, you’ll find clusters like this in the navbar:

Clicking on one of those reveals what has notches. There are various metrics with notches of various sizes, but interestingly, if you scroll down you’ll find that a number of them are query throughput metrics:

So clearly, even though there is still idle CPU on this box, there is some contention for resources that is negatively impacting some queries!

I could continue showing you lots of screenshots, and all the other interesting shapes that occur around a CPU bump like this (there’s always more than just 1 or 2 patterns), but you should really experience this yourself. It’s so much better when you see it in action. You can browse all the metrics on your system, organized logically in groups that impart structure to the system’s activity. This helps you discover things you’d never have found otherwise, because really, who’s got time to look through 50,000 metrics per server? If you’re like most people you’ll say “my metrics are doing WHAT???” when you see this interface.

This is good for more than just impressing your friends. It helps you analyze problems faster than ever. It helps you form hypotheses and drill down to just the data you need to confirm or reject them. In other words, I believe we are one of the few (only?) products on the market that helps you discover the meaning of the metrics.

Now it’s your turn to discover something you’d never have known about your systems.

We love hearing from customers, and we would really appreciate your suggestions and feedback. Use the in-app messaging system to send us your comments. Happy clustering!


