Planet MySQL

ClusterControl Tips & Tricks: Securing your MySQL Installation (Updated)

Requires ClusterControl 1.2.11 or later. Applies to MySQL based clusters.

During the life cycle of a database installation, it is common for new user accounts to be created. It is good practice to verify every once in a while that security is up to standard: at the very least, there should be no accounts with global access rights and no accounts without a password.

Using ClusterControl, you can at any time perform a security audit.

In the User Interface go to Manage > Developer Studio. Expand the folders so that you see s9s/mysql/programs. Click on security_audit.js and then press Compile and Run.

If there are problems, you will clearly see them in the Messages section:

Enlarged Messages output:

Here we have accounts that can connect from any host and accounts that do not have a password. Such accounts should not exist in a secure database installation; that is rule number one. To correct this problem, click on mysql_secure_installation.js in the s9s/mysql/programs folder.

Click on the dropdown arrow next to Compile and Run and press Change Settings. In the dialog that appears, enter the argument “STRICT”:

Then press Execute. The mysql_secure_installation.js script will then perform the following on each MySQL database instance that is part of the cluster:

  1. Delete anonymous users.
  2. Drop the 'test' database (if it exists).
  3. If STRICT is given as an argument to mysql_secure_installation.js, it will also:
    • Remove accounts without passwords (see the SQL sketch below).
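
For reference, the script's actions map roughly to statements like the following run by hand (a sketch only, not the script's actual implementation; the empty-password check depends on the server version, since 5.7+ stores the hash in authentication_string while older versions use the password column):

mysql> DROP DATABASE IF EXISTS test;
mysql> SELECT User, Host FROM mysql.user WHERE User = '';                   -- anonymous accounts; DROP USER each of them
mysql> SELECT User, Host FROM mysql.user WHERE authentication_string = '';  -- STRICT only: accounts without a password
mysql> DROP USER 'someuser'@'%';                                            -- hypothetical account flagged by the audit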

In the Message box you will see:

The MySQL database servers that are part of this cluster have now been secured, and you have reduced the risk of compromising your data.

You can re-run security_audit.js to verify that the actions have taken effect.

Happy Clustering!

PS.: To get started with ClusterControl, click here!

Tags:  clustercontrol Database galera innodb management MariaDB MySQL operations security

Presentation: Highly efficient backup with Xtrabackup

 

Xtrabackup is the most widely used open source hot backup tool for MySQL. It supports the InnoDB and MyISAM engines, and works with MySQL Server, Percona Server and MariaDB. This presentation covers some of the important features of Xtrabackup.

Pic Courtesy : https://www.flickr.com/photos/robbiewendt/8108744459/

Compare Current and Past Time Series Graphs in Percona Monitoring and Management (PMM)

In this short blog post, I will show you how you can compare current and past time series in Percona Monitoring and Management (PMM).

Recently, a support customer shared a graph with us that compared query throughput between today and yesterday as a confirmation that optimizing their server variables improved performance.

Do you want to compare workload between today and yesterday? This week and last week? Or this month and last month? You can do this by simply duplicating an existing data source of a graph and adding a time offset so that it renders past data. You can specify past offsets in (s)econds, (m)inutes, (h)ours, (d)ays, (w)eeks and even (y)ears.

Say you want to graph query throughput from a certain time period and one day before that. To do this, select the “MySQL Overview” dashboard, and then click the header of “MySQL Questions” panel. The panel menu should appear:

Click the Edit option to edit this graph:

On the “Metrics” tab, click the Add Query button and copy the options as listed in the first data source, but change the “Legend format” to “Questions (yesterday)”:


Now, you’ll need to add the offset modifier right after the selector. That is, between “[$interval]” and “)”, and also between “[5m]” and “)”, add “ offset 24h”. For more information on selectors, you can review this link:
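
As an illustration, assuming the panel is backed by the usual mysqld_exporter counter (the exact expression in your PMM version may differ), the duplicated query with a one-day offset would look roughly like this:

# original data source
rate(mysql_global_status_questions[$interval]) or irate(mysql_global_status_questions[5m])
# duplicated data source, legend format "Questions (yesterday)"
rate(mysql_global_status_questions[$interval] offset 24h) or irate(mysql_global_status_questions[5m] offset 24h)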


So, as you can see in the graph above, the query throughput is around 6.13k qps compared to yesterday’s performance of 4.42k qps (on average).

To save the changes, click the floppy disk icon located at the top of PMM, and the “Save Changes” dialog will appear. Click the Save button to save the changes.

There you have it. By duplicating the data source and adding an offset to it, you can easily compare a selected time series with its past.

Do note that in this example, we’ve modified the system dashboard and so all changes made to this are overwritten when the PMM server is upgraded. To keep all of your changes, you can create your own custom dashboard and just copy the panel from the system dashboard. You can have a look at this article on the proper way of importing graphs/panels from the system dashboard.

Another thing to note is that, as of this writing, the “Trends Dashboard” has this feature built in, comparing the time series against the past day and week for CPU usage, I/O Read/Write Activity, Queries, and InnoDB Rows Read/Changed. If you need to customize this even further, see this article.

Meltdown vs MySQL part 2: in-memory sysbench and a core i5 NUC

This is my second performance report for the Meltdown patch using in-memory sysbench and a small server. In this test I used a core i5 NUC with the 4.13 and 4.8 kernels. In the previous test I used a core i3 NUC with the 4.4 kernel.
  • results for 4.13 are mixed -- sometimes there is more QPS with the fix enabled, sometimes there is more with the fix disabled. The typical difference is small, about 2%.
  • QPS for 4.8, which doesn't have the Meltdown fix, is usually better than with 4.13; the largest difference is ~10%, and the differences tend to be larger at 1 client than at 2 or 8.

Configuration

My usage of sysbench is described here. The servers are described here. For this test I used the core i5 NUC (NUC7i5bnh) with Ubuntu 16.04. I have 3 such servers and ran tests with the fix enabled (kernel 4.13.0-26), the fix disabled via pti=off (kernel 4.13.0-26) and the old kernel (4.8.0-36) that doesn't have the fix. From cat /proc/cpuinfo I see pcid. This server uses the HWE kernels to make wireless work. I repeated tests after learning that 4.13 doesn't support the nobarrier mount option for XFS. My workaround was to switch to ext4 and the results here are from ext4.
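
For anyone repeating this, two quick checks are useful (a sketch; the exact dmesg wording varies by kernel build):

shell> grep -m1 -o pcid /proc/cpuinfo     # non-empty output means the CPU supports PCID, which reduces the cost of the fix
shell> dmesg | grep -i 'page table'       # kernels with the fix log whether isolation is enabled or was disabled via pti=off
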
The servers have 2 cores and 4 HW threads. I normally use them for low-concurrency benchmarks with 1 or 2 concurrent database clients. For this test I used 1, 2 and 8 concurrent clients to determine whether more concurrency and more mutex contention would cause more of a performance loss.
The sysbench test was configured to use 1 table with 4M rows and InnoDB. The InnoDB buffer pool was large enough to cache the table. The sysbench client runs on the same host as mysqld.
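
The author drives sysbench through his own helper scripts (linked above). As a rough stand-in, an equivalent setup with stock sysbench 1.0 would look something like this (socket path, user and runtime are assumptions for illustration):

shell> sysbench oltp_read_write --tables=1 --table-size=4000000 \
         --mysql-socket=/var/run/mysqld/mysqld.sock --mysql-user=root \
         --threads=1 prepare
shell> sysbench oltp_read_write --tables=1 --table-size=4000000 \
         --mysql-socket=/var/run/mysqld/mysqld.sock --mysql-user=root \
         --threads=1 --time=180 run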

I just noticed that all servers had the doublewrite buffer and binlog disabled. This was leftover from debugging the XFS nobarrier change.
Results
My usage of sysbench is described here which explains the tests that I list below. Each test has QPS for 1, 2 and 8 concurrent clients. Results are provided for
  • pti enabled - kernel 4.13.0-26 with the Meltdown fix enabled
  • pti disabled - kernel 4.13.0-26 with the Meltdown fix disabled via pti=off
  • old kernel, no pti - kernel 4.8.0-36 which doesn't have the Meltdown fix
After each of the QPS sections, there are two lines for QPS ratios. The first line compares the QPS for the kernel with the Meltdown fix enabled vs disabled. The second line compares the QPS for the kernel with the Meltdown fix vs the old kernel. A value less than one means that MySQL gets less QPS with the Meltdown fix.
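For example, for update-inlist at 1 client below, the first ratio is 5603 / 5618 = 0.997 and the second is 5603 / 5847 = 0.958 (rounded to three decimals).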

update-inlist
1       2       8       concurrency
5603    7546    8212    pti enabled
5618    7483    8076    pti disabled
5847    7613    8149    old kernel, no pti
-----   -----   -----
0.997   1.008   1.016   qps ratio: pti on/off
0.958   0.991   1.007   qps ratio: pti on / old kernel

update-one
1       2       8       concurrency
11764   18880   16699   pti enabled
12074   19475   17132   pti disabled
12931   19573   16559   old kernel, no pti
-----   -----   -----
0.974   0.969   0.974   qps ratio: pti on/off
0.909   0.964   1.008   qps ratio: pti on / old kernel

update-index
1       2       8       concurrency
7202    12688   16738   pti enabled
7197    12581   17466   pti disabled
7443    12926   17720   old kernel, no pti
-----   -----   -----
1.000   1.000   0.958   qps ratio: pti on/off
0.967   0.981   0.944   qps ratio: pti on / old kernel

update-nonindex
1       2       8       concurrency
11103   18062   22964   pti enabled
11414   18208   23076   pti disabled
12395   18529   22168   old kernel, no pti
-----   -----   -----
0.972   0.991   0.995   qps ratio: pti on/off
0.895   0.974   1.035   qps ratio: pti on / old kernel

delete
1       2       8       concurrency
19197   30830   43605   pti enabled
19720   31437   44935   pti disabled
21584   32109   43660   old kernel, no pti
-----   -----   -----
0.973   0.980   0.970   qps ratio: pti on/off
0.889   0.960   0.998   qps ratio: pti on / old kernel

read-write range=100
1       2       8       concurrency
11956   20047   29336   pti enabled
12475   20021   29726   pti disabled
13098   19627   30030   old kernel, no pti
-----   -----   -----
0.958   1.001   0.986   qps ratio: pti on/off
0.912   1.021   0.976   qps ratio: pti on / old kernel

read-write range=10000
1       2       8       concurrency
488     815     1080    pti enabled
480     768     1073    pti disabled
504     848     1083    old kernel, no pti
-----   -----   -----
1.016   1.061   1.006   qps ratio: pti on/off
0.968   0.961   0.997   qps ratio: pti on / old kernel

read-only range=100
1       2       8       concurrency
12089   21529   33487   pti enabled
12170   21595   33604   pti disabled
11948   22479   33876   old kernel, no pti
-----   -----   -----
0.993   0.996   0.996   qps ratio: pti on/off
1.011   0.957   0.988   qps ratio: pti on / old kernel

read-only.pre range=10000
1       2       8       concurrency
392     709     876     pti enabled
397     707     872     pti disabled
403     726     877     old kernel, no pti
-----   -----   -----
0.987   1.002   1.004   qps ratio: pti on/off
0.972   0.976   0.998   qps ratio: pti on / old kernel

read-only range=10000
1       2       8       concurrency
394     701     874     pti enabled
389     698     871     pti disabled
402     725     877     old kernel, no pti
-----   -----   -----
1.012   1.004   1.003   qps ratio: pti on/off
0.980   0.966   0.996   qps ratio: pti on / old kernel

point-query.pre
1       2       8       concurrency
18490   31914   56337   pti enabled
19107   32201   58331   pti disabled
18095   32978   55590   old kernel, no pti
-----   -----   -----
0.967   0.991   0.965   qps ratio: pti on/off
1.021   0.967   1.013   qps ratio: pti on / old kernel

point-query
1       2       8       concurrency
18212   31855   56116   pti enabled
18913   32123   58320   pti disabled
17907   32941   55430   old kernel, no pti
-----   -----   -----
0.962   0.991   0.962   qps ratio: pti on/off
1.017   0.967   1.012   qps ratio: pti on / old kernel

random-points.pre
1       2       8       concurrency
3043    5940    8131    pti enabled
2944    5681    7984    pti disabled
3030    6015    8098    old kernel, no pti
-----   -----   -----
1.033   1.045   1.018   qps ratio: pti on/off
1.004   0.987   1.004   qps ratio: pti on / old kernel

random-points
1       2       8       concurrency
3053    5930    8128    pti enabled
2949    5756    7981    pti disabled
3058    6011    8116    old kernel, no pti
-----   -----   -----
1.035   1.030   1.018   qps ratio: pti on/off
0.998   0.986   1.001   qps ratio: pti on / old kernel

hot-points
1       2       8       concurrency
3931    7522    9500    pti enabled
3894    7535    9214    pti disabled
3914    7692    9448    old kernel, no pti
-----   -----   -----
1.009   0.998   1.031   qps ratio: pti on/off
1.004   0.977   1.005   qps ratio: pti on / old kernel

insert
1       2       8       concurrency
12469   21418   25158   pti enabled
12561   21327   25094   pti disabled
13045   21768   21258   old kernel, no pti
-----   -----   -----
0.992   1.004   1.002   qps ratio: pti on/off
0.955   0.983   1.183   qps ratio: pti on / old kernel

XFS, nobarrier and the 4.13 Linux kernel

tl;dr

My day
  • nobarrier isn't supported as a mount option for XFS in kernel 4.13.0-26 with Ubuntu 16.04. I assume this isn't limited to Ubuntu. Read this for more detail on the change.
  • write throughput is much worse on my SSD without nobarrier
  • there is no error on the command line when mounting a device that uses the nobarrier option
  • there is an error message in dmesg output for this

There might be two workarounds (a quick check-and-apply sketch follows the list):
  • switch from XFS to ext4
  • echo "write through" > /sys/block/$device/queue/write_cache

The Story

I have a NUC cluster at home for performance tests with 3 NUC5i3ryh and 3 NUC7i5bnh. I recently replaced the SSD devices in all of them because previous testing wore them out. I use Ubuntu 16.04 LTS and recently upgraded the kernel on some of them to get the fix for Meltdown.

The NUC7i5bnh server has a Samsung 960 EVO SSD that uses NVMe. I use the HWE kernel to make wireless work. The old kernel without the Meltdown fix is 4.8.0-36 and the kernel with the Meltdown fix is 4.13.0-26. Note that with the old kernel I used XFS with the nobarrier option. With the new kernel I assumed I was still getting nobarrier, but I was not. I have since switched from XFS to ext4.

The NUC5i3ryh server has a Samsung 850 EVO SSD that uses SATA. The old kernel without the Meltdown fix is 4.4.0-38 and the kernel with the Meltdown fix is 4.4.0-109. I continue to use XFS on these.

Sysbench results for the NUC5i3ryh show not much regression from the Meltdown fix. Results for the NUC7i5bnh show a lot of regression for the write-heavy tests and not much for the read-heavy tests.
  • I started to debug the odd 7i5bnh results and noticed that write IO throughput was much lower for servers with the Meltdown fix using 4.13.0-26. 
  • Then I used sysbench fileio to run IO tests without MySQL and noticed that read IO was fine, but write IO throughput was much worse with the 4.13.0-26 kernel.
  • Then I consulted my local experts, Domas Mituzas and Jens Axboe.
  • Then I noticed the error message in dmesg output.

Meltdown vs MySQL part 1: in-memory sysbench and a core i3 NUC

This is my first performance report for the Meltdown patch using in-memory sysbench and a small server.
  • the worst case overhead was ~5.5%
  • a typical overhead was ~2%
  • QPS was similar between the kernel with the Meltdown fix disabled and the old kernel
  • the overhead with too much concurrency (8 clients) wasn't worse than the overhead without too much concurrency (1 or 2 clients)

Configuration
My usage of sysbench is described here. The servers are described here. For this test I used the core i3 NUC (NUC5i3ryh) with Ubuntu 16.04. I have 3 such servers and ran tests with the fix enabled (kernel 4.4.0-109), the fix disabled via pti=off (kernel 4.4.0-109) and the old kernel (4.4.0-38) that doesn't have the fix. From cat /proc/cpuinfo I see pcid.
The servers have 2 cores and 4 HW threads. I normally use them for low-concurrency benchmarks with 1 or 2 concurrent database clients. For this test I used 1, 2 and 8 concurrent clients to determine whether more concurrency and more mutex contention would cause more of a performance loss.
The sysbench test was configured to use 1 table with 4M rows and InnoDB. The InnoDB buffer pool was large enough to cache the table. The sysbench client runs on the same host as mysqld.
Results
My usage of sysbench is described here which explains the tests that I list below. Each test has QPS for 1, 2 and 8 concurrent clients. Results are provided for
  • pti enabled - kernel 4.4.0-109 with the Meltdown fix enabled
  • pti disabled - kernel 4.4.0-109 with the Meltdown fix disabled via pti=off
  • old kernel, no pti - kernel 4.4.0-38 which doesn't have the Meltdown fix
After each of the QPS sections, there are two lines for QPS ratios. The first line compares the QPS for the kernel with the Meltdown fix enabled vs disabled. The second line compares the QPS for the kernel with the Meltdown fix vs the old kernel. A value less than one means that MySQL gets less QPS with the Meltdown fix.
update-inlist
1       2       8       concurrency
2039    2238    2388    pti enabled
2049    2449    2369    pti disabled
2059    2199    2397    old kernel, no pti
-----   -----   -----
0.995   0.913   1.008   qps ratio: pti on/off
0.990   1.017   0.996   qps ratio: pti on / old kernel

update-one
1       2       8       concurrency
8086    11407   9498    pti enabled
8234    11683   9748    pti disabled
8215    11708   9755    old kernel, no pti
-----   -----   -----
0.982   0.976   0.974   qps ratio: pti on/off
0.984   0.974   0.973   qps ratio: pti on / old kernel

update-index
1       2       8       concurrency
2944    4528    7330    pti enabled
3022    4664    7504    pti disabled
3020    4784    7555    old kernel, no pti
-----   -----   -----
0.974   0.970   0.976   qps ratio: pti on/off
0.974   0.946   0.970   qps ratio: pti on / old kernel

update-nonindex
1       2       8       concurrency
6310    8688    12600   pti enabled
6103    8482    11900   pti disabled
6374    8723    12142   old kernel, no pti
-----   -----   -----
1.033   1.024   1.058   qps ratio: pti on/off
0.989   0.995   1.037   qps ratio: pti on / old kernel

delete
1       2       8       concurrency
12348   17087   23670   pti enabled
12568   17342   24448   pti disabled
12665   17749   24499   old kernel, no pti
-----   -----   -----
0.982   0.985   0.968   qps ratio: pti on/off
0.974   0.962   0.966   qps ratio: pti on / old kernel

read-write range=100
1       2       8       concurrency
9999    14973   21618   pti enabled
10177   15239   22088   pti disabled
10209   15249   22153   old kernel, no pti
-----   -----   -----
0.982   0.982   0.978   qps ratio: pti on/off
0.979   0.981   0.975   qps ratio: pti on / old kernel

read-write range=10000
1       2       8       concurrency
430     762     865     pti enabled
438     777     881     pti disabled
439     777     882     old kernel, no pti
-----   -----   -----
0.981   0.980   0.981   qps ratio: pti on/off
0.979   0.980   0.980   qps ratio: pti on / old kernel

read-only range=100
1       2       8       concurrency
10472   19016   26631   pti enabled
10588   20124   27587   pti disabled
11290   20153   27796   old kernel, no pti
-----   -----   -----
0.989   0.944   0.965   qps ratio: pti on/off
0.927   0.943   0.958   qps ratio: pti on / old kernel

read-only.pre range=10000
1       2       8       concurrency
346     622     704     pti enabled
359     640     714     pti disabled
356     631     715     old kernel, no pti
-----   -----   -----
0.963   0.971   0.985   qps ratio: pti on/off
0.971   0.985   0.984   qps ratio: pti on / old kernel

read-only range=10000
1       2       8       concurrency
347     621     703     pti enabled
354     633     716     pti disabled
354     638     716     old kernel, no pti
-----   -----   -----
0.980   0.981   0.988   qps ratio: pti on/off
0.980   0.973   0.981   qps ratio: pti on / old kernel

point-query.pre
1       2       8       concurrency
16104   29540   46863   pti enabled
16716   30052   49404   pti disabled
16605   30392   49872   old kernel, no pti
-----   -----   -----
0.963   0.982   0.948   qps ratio: pti on/off
0.969   0.971   0.939   qps ratio: pti on / old kernel

point-query
1       2       8       concurrency
16240   29359   47141   pti enabled
16640   29785   49015   pti disabled
16369   30226   49530   old kernel, no pti
-----   -----   -----
0.975   0.985   0.961   qps ratio: pti on/off
0.992   0.971   0.951   qps ratio: pti on / old kernel

random-points.pre
1       2       8       concurrency
2756    5202    6211    pti enabled
2764    5216    6245    pti disabled
2679    5130    6188    old kernel, no pti
-----   -----   -----
0.997   0.997   0.994   qps ratio: pti on/off
1.028   1.014   1.003   qps ratio: pti on / old kernel

random-points
1       2       8       concurrency
2763    5177    6191    pti enabled
2768    5188    6238    pti disabled
2701    5076    6182    old kernel, no pti
-----   -----   -----
0.998   0.997   0.992   qps ratio: pti on/off
1.022   1.019   1.001   qps ratio: pti on / old kernel

hot-points
1       2       8       concurrency
3414    6533    7285    pti enabled
3466    6623    7287    pti disabled
3288    6312    6998    old kernel, no pti
-----   -----   -----
0.984   0.986   0.999   qps ratio: pti on/off
1.038   1.035   1.041   qps ratio: pti on / old kernel

insert
1       2       8       concurrency
7612    10051   11943   pti enabled
7713    10150   12322   pti disabled
7834    10243   12514   old kernel, no pti
-----   -----   -----
0.986   0.990   0.969   qps ratio: pti on/off
0.971   0.981   0.954   qps ratio: pti on / old kernel

This Week in Data with Colin Charles 23: CPU security continues to draw attention

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Much is still being said about the Meltdown/Spectre CPU security vulnerabilities. There’s an interesting Twitter thread showing the performance impact on Kafka brokers. Jaime Crespo also writes about how it affects MariaDB Server 10.1.30 in Finding out the MySQL performance regression due to kernel mitigation for Meltdown CPU vulnerability. It’s worth checking out the thread for PostgreSQL too.

Something else worth thinking about: Stephen O’Grady writes Whither the DBA. What do you think? Don’t forget to look below the fold for Robert Treat’s comment too.

Releases
Link List
Upcoming appearances
  • FOSDEM 2018 – Brussels, Belgium – February 3-4 2018
  • SCALE16x – Pasadena, California, USA – March 8-11 2018
Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

The State of MySQL High Availability Going in to 2018

High availability for MySQL has become increasingly relevant given the ever-increasing rate of adoption and implementation. It’s no secret to anyone in the community that the popularity of MySQL has become noteworthy. I still remember my start with MySQL in the early 5.0 days, when people told me I might not want to waste my time training on a database that didn’t have large industry adoption, but look at where we are now! One of my favorite pages to cite when trying to exhibit this fact is the db-engines.com ranking trend page, where we can see that MySQL is right up there, contending with enterprise products such as Microsoft SQL Server and Oracle.

MySQL has gone from being part of the ever-famous LAMP stack for users looking to set up their first website, to seeing adoption from major technical players such as Twitter, Wikipedia, and Facebook. With these types of implementations we’re seeing MySQL used in places where failure can have a massive associated cost, so high availability for MySQL has become critical, and that’s what I’m here to talk to you about today.

Up until a few years ago we were looking at situations where there may have been at least some reliance on storage or general resource high availability tools that worked for the most part, but weren’t exactly designed with MySQL in mind. I’ve heard many horror stories of the infamous DRBD split brain which was common in the typical DRBD heartbeat / corosync solution. Even today we’re still seeing clients that are leveraging generalized HA solutions like RHEL Cluster where the people driving the solution may actually not even be your DBA, but your Linux sys admin.

These days the power is coming back to the MySQL DBA in a big way and I think this is clearly evident by looking at tools like MHA, Orchestrator, and most importantly SQL interpreting proxies like ProxySQL. These tools have been designed by those that exist within the MySQL community to serve MySQL specifically.

Replication Managers

This started off in 2011 with the initial public release of Yoshinori Matsunobu’s MHA, which became wildly popular as a way to programmatically approach monitoring and failover management within standard MySQL replication topologies. Despite never having reached a 1.0 release (still at 0.57 at the time of this writing), we still see wide adoption of this solution. Given that it included features like VIP management and admin-initiated failover and promotion, it greatly eased the process of creating a viable high availability solution for MySQL administrators.

That’s not to say that it didn’t have its faults. There is really no great way to set up a fully atomic VIP failover script, the command set for using it was pretty far from intuitive but has improved, and active development for the solution declined sharply at the end of 2012 with the last release being in May of 2015. But despite these shortcomings you won’t find many in the community that would be hesitant to praise this product as a great contribution to solving the MySQL HA challenge.

The next big solution to emerge was Shlomi Noach’s Orchestrator, a fairly recent addition to the menu of available options for your replication management needs. In order to make things a little more intuitive it included a graphical web interface. The installation and configuration are both fairly simple and it even comes with various ways to make Orchestrator highly available, so you don’t have to worry about single point of failure when it comes to this layer in your HA stack.

Just like any other technology, this one also does not come without its faults. The first and foremost is the reliance on GTID, pseudo-GTID, or binlog servers for anything more than re-arrangement of slaves in the topology (no master failover, either automatic or otherwise). GTID isn’t really new to MySQL anymore; it was released with MySQL 5.6, which means it’s been around since 2013. But not everyone is willing or ready to make the leap from standard binary log and position replication to GTID, so that particular group of MySQL users may find that they have no use for Orchestrator at the moment.

Furthermore, if you want to do a simple master failover you’ll find this is very simple with a tool like MHA where you can specify what server should be the new master and if the old master should become a slave. However with Orchestrator you need to set all of your slaves to replicate in a multi tiered fashion under the slave you want to promote, then you have to initiate the failover to the new master, and at that point the old master will become isolated from the rest of the group, forcing you to manually rejoin it to the topology.

Even with its shortcomings we are starting to see organizations adopt this as part of their HA solution, but typically only as part. This is an important distinction. While you can have a VIP failover process similar to what is done in MHA using hooks, we have yet to use this as all implementation projects have elected to have Orchestrator simply manipulate the read_only variable and let a MySQL monitoring proxy like ProxySQL, Maxscale, or ScaleArc handle traffic redirection.

Proxies

The most commonly discussed proxy of the three listed above has been ProxySQL, created by René Cannaò. There are a number of use cases for ProxySQL, such as read/write splitting of queries via the use of query rules, but in this case we want to remain focused on the high availability features.

Using a fairly simple installation and configuration process you can group MySQL servers together in replication hostgroups. Database servers can then be monitored by checking the read_only global variable to determine whether or not they are a server that can be written to, and traffic can be directed to the appropriate server leveraging replication hostgroups without modification of the application. Additionally, if you are concerned about network partitioning level failures you can implement a solution such as Consul for service discovery, or you can rely on STONITH logic that would be initiated by the replication manager, such as Orchestrator, to ensure proper node fencing. You can read more about this here.
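
As a rough sketch of how that grouping looks on the ProxySQL admin interface (hostgroup IDs, hostnames and ports below are made up for illustration):

Admin> INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup, comment) VALUES (10, 20, 'prod cluster');
Admin> INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, '10.0.0.11', 3306);   -- current master
Admin> INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, '10.0.0.12', 3306);   -- replica
Admin> LOAD MYSQL SERVERS TO RUNTIME;
Admin> SAVE MYSQL SERVERS TO DISK;

ProxySQL’s monitor module then moves backends between the writer and reader hostgroups based on the read_only value it observes on each server.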

It should also be noted that ProxySQL is attempting to implement a cluster solution called ProxySQL Cluster that, much like Orchestrator, will attempt to remove single point of failure from its layer in the stack. This is already available but is considered to be experimental, is not GA, and thus should not be implemented in production environments at this time.

Virtually Synchronous Replication

Another technology to note is virtually synchronous replication, such as Galera, as incorporated in Percona XtraDB Cluster or MariaDB Galera Cluster, or MySQL Group Replication. Each of these solutions is very similar in the sense that it ensures a transaction is repeatable on a majority of other nodes in the cluster prior to the commit of the transaction, then passes along the full transaction details to all nodes in the cluster for replay. This leaves the cluster in a “mostly fully synchronous state”. The big drawback here is that there can be network-related delays at transaction commit, or even full stops placed on incoming transactions if flow control detects that the other nodes in the cluster aren’t keeping up.

This has been a pretty commonly implemented solution for organizations with moderate queries per second and a strong need for replication-level consistency and high availability. While it’s still fairly common to see Galera-based products grouped with HAProxy, there are options to implement SQL-interpreting proxies like ProxySQL with Galera. It’s a little more complicated, though, given that a Galera node that’s “up” can be in several states where traffic may or may not be permitted, and the proxy needs to be aware of this.

Public Cloud DBaaS

All of the aforementioned technologies are important to consider when working with on-prem or Infrastructure as a Service (IaaS) solutions, but they don’t really cover what’s going on with HA in Database as a Service (DBaaS) solutions from popular public cloud providers.

When using Cloud SQL as part of the Google Cloud Platform, you’re covered so long as you’ve provisioned a failover node, but there are a few things to take into consideration. Firstly, we need to keep in mind that failover nodes in Cloud SQL rely on MySQL semi-synchronous replication, which requires the semi-sync node in question to confirm that replication events have been received prior to transaction commit; this can slow down processing of transactions on the master. Secondly, if a failover is initiated on a failover node where replication lag is excessive, the failover will not complete and traffic will not be served until replication has completely caught up, so it’s highly advised to monitor replication lag on your failover node in Cloud SQL.

The advantage of the Cloud SQL solution is that you can connect to your failover instance and read from it, unlike Amazon RDS where the instance isn’t available to you until the event of a failover.

Relational Database Service (RDS), as offered by Amazon Web Services (AWS), has a similar solution in place where you can set up high availability for your RDS instance by specifying it as a multi-availability-zone (Multi-AZ) instance. You can specify this when you create your RDS instance, or after the fact by modifying it. With this enabled, Amazon will use its internal failover technology to keep a standby replica available in a different availability zone in case a failover is required. The drawback here is that you can’t connect to your standby replica; however, replication lag is not a factor in how long it takes to fail over to the standby.

Aurora, also offered by AWS, automatically partitions your data into 10GB segments replicated six ways across three availability zones, making it very fault tolerant. By default it is also Multi-AZ and self-healing, as data blocks are scanned and any errors that are found are automatically repaired. It should also be noted that failovers will occur regardless of whether or not you have provisioned any Aurora replicas. There are a lot of advantages to high availability in Aurora that you can read about in the FAQ, but I think one of the biggest things to note is the recent announcement of Aurora Multi-Master and Aurora Serverless, which may well change not just how we think about high availability in MySQL, but how we approach implementation and scaling altogether.

Conclusion

There are a lot of great solutions out there to help you keep your MySQL environment up and running in the event of a failure. If you haven’t implemented GTID and aren’t quite willing to use it just yet, consider MHA. If you have a system running GTID, you can consider Orchestrator, but you should certainly be familiar with the gotchas and thoroughly proof of concept the solution in QA before implementing in production.

Virtually synchronous products that leverage Galera or MySQL Group Replication can be used to keep your data set mostly synchronous across all the servers in your topology so long as you can accept the requirements of network activity at transaction commit.

If SQL-interpreting proxies like ProxySQL aren’t already on your radar, now is as good a time as any to get familiar with them and see how they can assist you, not just with high availability, but with scale-out.

Finally, public cloud providers are making it easier than ever to help remove the anxiety and cost of management aspects of making your MySQL solution highly available. Make sure you’re familiar with their offerings and that HA is taken into consideration when provisioning your DBaaS instances.

Learn more about Pythian’s end-to-end MySQL support, from consulting and implementation to 24×7 management.

Insert Random Data into Tables Using mysql_random_data_load

In this short blogpost, we’ll show you how to use the mysql_random_data_load tool to insert random data into tables. This is a great aid in testing when you have empty tables and need them populated with data. We’ve all done it manually (the INSERT INTO … VALUES … way), but that is surely a waste of time. It can add up to a lot if you need to test tables with many columns, or even worse, with foreign keys.

Before saying anything else, we would like to mention that this tool is still under development, and that it will insert random data into tables. Be aware if running it in a production environment!

mysql_random_data_load is a tool created by PerconaLabs.

Note: PerconaLabs and Percona-QA are open source GitHub repositories for unofficial scripts and tools created by Percona staff. While not covered by Percona support or services agreements, these handy utilities can help you save time and effort.

Now, back to the disclaimer-free zone, the tool in question is named mysql_random_data_load, and can currently be found in the Percona-Lab GitHub repository:

https://github.com/Percona-Lab/mysql_random_data_load

The README.md file has a lot of information on it already, so feel free to pause this reading and come back after you’ve skimmed through it. If you are eager to start using it, though, we give you the crash course below.

We offer two binaries in the releases tab, one for Darwin and another one for Linux. We will show examples of the linux_amd64 build here, but they are the same for the darwin_amd64 one.

First, get the latest build (0.1.6 as of this writing):

shell> cd ~/bin/
shell> wget https://github.com/Percona-Lab/mysql_random_data_load/releases/download/0.1.6/mysql_random_data_loader_linux_amd64.tar.gz
shell> tar xzf mysql_random_data_loader_linux_amd64.tar.gz
shell> chmod +x mysql_random_data_loader

We will use the sakila database schema for the tests, so the second and final step is to download and install it.

shell> wget http://downloads.mysql.com/docs/sakila-db.tar.gz
shell> tar xzf sakila-db.tar.gz && rm -f sakila-db.tar.gz
shell> mysql < sakila-db/sakila-schema.sql

Ok, we are now ready to test inserting some rows with random data! Let’s try populating the actor table first. Since its primary key is a SMALLINT, we have a maximum of 65535 rows.

shell> mysql_random_data_load sakila actor 65535 --host=127.0.0.1 --port=3306 --user=my_user --password=my_password
2017/12/22 00:17:32 Starting
  4s [===============================================] 100%
Total rows inserted: 65535

Let’s see what the data looks like:

mysql> SELECT * FROM sakila.actor LIMIT 2\G
*************************** 1. row ***************************
   actor_id: 1
 first_name: dolores nulla sunt velit placeat minima adipi
  last_name: quo non similique enim inventore cupiditate.
last_update: 2017-01-08 13:06:44
*************************** 2. row ***************************
   actor_id: 2
 first_name: at commodi tenetur est maiores nobis accusamu
  last_name: quod suscipit provident est voluptatem quis t
last_update: 2017-03-07 00:05:01
2 rows in set (0.00 sec)

Adding data to tables with Foreign Keys

Now that we have the actor table filled with data, let’s try the foreign key functionality, which is one of the tool’s biggest strengths (since coming up with data that makes sense in this context is even more laborious).

If we look at the sakila structure, a good candidate for trying this is film_actor. Apart from the actor table, it also needs the film table to have data (due to FK constraints), which in turn needs the language table to have data. Easy enough!

shell> mysql_random_data_load sakila language 15 --host=127.0.0.1 --port=3306 --user=my_user --password=my_password
2017/12/22 00:48:30 Starting
  0s [===============================================] 100%
Total rows inserted: 15

shell> mysql_random_data_load sakila film 10000 --host=127.0.0.1 --port=3306 --user=my_user --password=my_password
2017/12/22 00:51:18 There are triggers on the film table that might affect this process:
2017/12/22 00:51:18 Trigger "ins_film", AFTER INSERT [...output trimmed...]
2017/12/22 00:51:18 Trigger "upd_film", AFTER UPDATE [...output trimmed...]
2017/12/22 00:51:18 Trigger "del_film", AFTER DELETE [...output trimmed...]
2017/12/22 00:51:18 Starting
  1s [===============================================] 100%
Total rows inserted: 10000

In this last example, we already used the foreign key functionality without even modifying the command we used for “regular” tables. You can disregard the notes about the triggers in this case.

We are now ready to fill the film_actor table. Let’s look at two new variables: --bulk-size and --max-fk-samples. The former lets us create INSERTs with as many rows as we want, which helps expedite the process if many are needed. The latter chooses the number of samples from the film and actor tables to use for the FK constraints. Note that these two variables have defaults, so we can choose not to include them and the tool will still work.

shell> mysql_random_data_load sakila film_actor 20000 --bulk-size=5000 --max-fk-samples=8000 --host=127.0.0.1 --port=3306 --user=my_user --password=my_password
2017/12/22 01:05:28 Starting
  1s [===============================================] 100%
Total rows inserted: 20000

A quick test shows that things went well:

mysql> SELECT a.last_name, f.title, f.special_features, f.rental_rate FROM film_actor fa JOIN actor a USING (actor_id) JOIN film f USING (film_id) LIMIT 1\G
*************************** 1. row ***************************
       last_name: enim sequi recusandae dolores voluptatem!
           title: quia provident nemo labore architecto totam.
special_features: Behind the Scenes
     rental_rate: 3.15
1 row in set (0.00 sec)

Specifying queries per second

Starting from version 0.1.6, there is a new experimental feature to insert rows at a specific rate: --qps <queries per second>. This option can be used when you need not just data, but data inserted at a specific rate (for example, to test a monitoring system like PMM). Since the total number of rows to be inserted is still in effect, this option makes more sense when --bulk-size=1 is used.
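
A hypothetical invocation (table name and rate chosen arbitrarily for illustration) would look like this, inserting 3600 rows at roughly 10 rows per second:

shell> mysql_random_data_load mydb mytable 3600 --bulk-size=1 --qps=10 --host=127.0.0.1 --port=3306 --user=my_user --password=my_password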

Summary

As we have seen, it is really easy to insert random data into tables so you can later use them in your tests. There is still more work to do on the tool, but we have found that it is already of great help. We would love to hear back from you on it, either here or, even better, directly via the GitHub issues page. If there are any use cases or functionality you can think of, let us know!

Measuring the potential overhead of PMM Client on MySQL workloads

Having good historical metrics monitoring in place is critical for properly operating, maintaining and troubleshooting database systems, and Percona Monitoring and Management is one of the options we recommend to our clients for this.

One common concern among potential users is how using this may impact their database’s performance. As I could not find any conclusive information about this, I set out to do some basic tests and this post shows my results.

To begin, let me describe my setup. I used the following Google Cloud instances:

  • One 4 vCPU instance for the MySQL server
  • One 2 vCPU instance for the sysbench client
  • One 1 vCPU instance for the PMM server

I used Percona Server 5.7 and PMM 1.5.3 installed via Docker. Slow query log was enabled with long_query_time set to 0 for all tests.

I ran sysbench with 200k, 1M and 10M rows using the legacy oltp script with the pareto and special distributions, and up to 64 client threads. After several runs and reviews of the results, I settled on 1M rows with pareto for extended tests, as other combinations showed minor variations on the results from this one.

I am well aware a synthetic workload is not representative, but I think the results are still useful, though I would love to measure this on a real-life workload (do let me know in the comments if you have done this already).

In a nutshell, I found some impact on performance (measured as throughput in transactions per second) when running sysbench with the PMM exporters, which in my case was eliminated when I configured them to serve their metrics over HTTP instead of HTTPS.

The following graph shows box plots of throughput with PMM enabled or disabled, for different numbers of threads, with and without SSL:

We can see that with SSL enabled there is a noticeable drop in throughput when the exporters are running, while this is not the case when SSL is disabled.

I arrived at the conclusion that it was worth repeating the tests with SSL disabled after creating Flame Graphs from perf captures during sample runs. On them, the only significant increases were due to the exporters (mysqld_exporter and node_exporter; the QAN exporter did not have any noticeable impact during my tests). The results from the tests show that this analysis pointed me in the right direction, so while Flame Graphs are worthy of separate blog posts, it is worth at least recommending that readers get familiar with this performance analysis tool.

Next is a scatter plot of throughput over time with ssl enabled:

On it we get a clearer picture of the impact of having the exporters running during the test.

Next are the same graphs, but with SSL disabled:

Now it is much more difficult to differentiate the runs.

This is confirmed if we look at the 99th percentile of throughput for each case (here for 32 threads):

PMM       SSL       tps (p99)
enabled   enabled   1167.518
enabled   disabled  1397.397
disabled  disabled  1429.097

Conclusion

PMM is a very good open source option for monitoring, but like every instrumentation and monitoring layer you add to your stack, it doesn’t come for free. My very simple tests show that its impact may be significant under some scenarios, yet if it is, it may be mitigated by using HTTP instead of HTTPS for the exporters. Given the events that are unfolding in IT security as I type this, it may seem reckless to recommend disabling SSL as an “optimization”, but I think good engineering is all about informed tradeoffs, and if you’re running this on a secure private network, how risky is it to expose monitoring metrics over HTTP instead of HTTPS? I would love to read answers to this question in the comments!

Finally, I think a similar cost is probably paid for the TLS layer on the pmm-server end. It would be very interesting to see an experiment like this repeated but on a different scenario: one pmm-server with several monitored clients.

More Write Set in MySQL: Group Replication Certification

This is the third post in the series on Write Set in MySQL. In the first post, we explored how Write Set allows us to get better parallel replication in MySQL 8.0. In the second post, we saw how the MySQL 8.0 improvement is an extension of the work done in MySQL 5.7 to avoid replication delay/lag in Group Replication. In this post, we will see how Write Set is used in Group Replication to detect

How to Install Snipe-IT Asset Management Software on Debian 9

Snipe-IT is a free and open source IT asset management web application that can be used for tracking licenses, accessories, consumables, and components. It is written in PHP and uses MySQL to store its data. In this tutorial, we will learn how to install Snipe-IT on a Debian 9 server.

Ten Tips on How to Achieve MySQL and MariaDB Security

Security of data is a top priority these days. Sometimes it’s enforced by external regulations like PCI-DSS or HIPAA, sometimes it’s because you care about your customers’ data and your reputation. There are numerous aspects of security that you need to keep in mind - network access, operating system security, grants, encryption and so on. In this blog post, we’ll give you 10 tips on what to look at when securing your MySQL or MariaDB setup.

1. Remove users without password

MySQL used to come with a set of pre-created users, some of which could connect to the database without a password or, even worse, as anonymous users. This has changed in MySQL 5.7 which, by default, comes only with a root account that uses the password you choose at installation time. Still, there are MySQL installations which were upgraded from previous versions, and these installations keep the legacy users. Also, MariaDB 10.2 on CentOS 7 comes with anonymous users:

MariaDB [(none)]> select user, host, password from mysql.user where user like '';
+------+-----------------------+----------+
| user | host                  | password |
+------+-----------------------+----------+
|      | localhost             |          |
|      | localhost.localdomain |          |
+------+-----------------------+----------+
2 rows in set (0.00 sec)

As you can see, those are limited only to access from localhost but regardless, you do not want to have users like that. While their privileges are limited, they still can run some commands which may show more information about the database - for example, the version may help identify further vectors of attack.

[root@localhost ~]# mysql -uanonymous_user
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 19
Server version: 10.2.11-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW GRANTS\G
*************************** 1. row ***************************
Grants for @localhost: GRANT USAGE ON *.* TO ''@'localhost'
1 row in set (0.00 sec)

MariaDB [(none)]> \s
--------------
mysql  Ver 15.1 Distrib 10.2.11-MariaDB, for Linux (x86_64) using readline 5.1

Connection id:          19
Current database:
Current user:           anonymous_user@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server:                 MariaDB
Server version:         10.2.11-MariaDB MariaDB Server
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /var/lib/mysql/mysql.sock
Uptime:                 12 min 14 sec

Threads: 7  Questions: 36  Slow queries: 0  Opens: 17  Flush tables: 1  Open tables: 11  Queries per second avg: 0.049
--------------

Please note that users with very simple passwords are almost as insecure as users without any password. Passwords like “password” or “qwerty” are not really helpful.
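
To find and remove such accounts, something along these lines works (a sketch; MySQL 5.7+ keeps the hash in authentication_string, while older versions and MariaDB use the password column, and the 'app'@'localhost' account is hypothetical):

mysql> SELECT user, host FROM mysql.user WHERE user = '';        -- list anonymous accounts
mysql> DROP USER ''@'localhost';
mysql> DROP USER ''@'localhost.localdomain';
mysql> SELECT user, host FROM mysql.user WHERE password = '';    -- list accounts with an empty password
mysql> ALTER USER 'app'@'localhost' IDENTIFIED BY 'a_strong_password';   -- or DROP USER if the account is not needed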

2. Tight remote access

First of all, remote access for superusers - this is taken care of by default when installing the latest MySQL (5.7) or MariaDB (10.2): only local access is available. Still, it’s pretty common to see superusers reachable from remote hosts. The most common reason is probably that the database is managed by humans who want to make their job easier, so they add remote access to their databases. This is not a good approach, as remote access makes it easier to exploit potential (or verified) security vulnerabilities in MySQL - you don’t need to get a connection to the host first.

Another step - make sure that every user can connect to MySQL only from specific hosts. You can always define several entries for the same user (myuser@host1, myuser@host2); this should help to reduce the need for wildcards (myuser@'%').
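
For example (hostnames, schema and account name are made up), instead of one wildcard entry you would create one account per application host:

mysql> CREATE USER 'myuser'@'10.0.0.101' IDENTIFIED BY 'a_strong_password';
mysql> CREATE USER 'myuser'@'10.0.0.102' IDENTIFIED BY 'a_strong_password';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'myuser'@'10.0.0.101';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'myuser'@'10.0.0.102';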

3. Remove test database

The test database, by default, is available to every user, especially to the anonymous users. Such users can create tables and write to them. This can potentially become a problem on its own - any writes would add some overhead and reduce database performance. Currently, after a default installation, only MariaDB 10.2 on CentOS 7 is affected by this - Oracle MySQL 5.7 and Percona Server 5.7 do not have the ‘test’ schema available.

[root@localhost ~]# mysql -uanonymous_user
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 13
Server version: 10.2.11-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW GRANTS\G
*************************** 1. row ***************************
Grants for @localhost: GRANT USAGE ON *.* TO ''@'localhost'
1 row in set (0.00 sec)

MariaDB [(none)]> USE test;
Database changed
MariaDB [test]> CREATE TABLE testtable (a INT);
Query OK, 0 rows affected (0.01 sec)
MariaDB [test]> INSERT INTO testtable VALUES (1), (2), (3);
Query OK, 3 rows affected (0.01 sec)
Records: 3  Duplicates: 0  Warnings: 0
MariaDB [test]> SELECT * FROM testtable;
+------+
| a    |
+------+
|    1 |
|    2 |
|    3 |
+------+
3 rows in set (0.00 sec)

Of course, it may still happen that your MySQL 5.7 was upgraded from a previous version in which the ‘test’ schema was not removed - you should take care of this and check whether you still have it.

4. Obfuscate access to MySQL

It is well known that MySQL runs on port 3306 and that its superuser is called ‘root’. To make things harder, it is quite simple to change both. To some extent, this is an example of security through obscurity, but it may at least stop automated attempts to gain access to the ‘root’ user. To change the port, edit my.cnf and set the ‘port’ variable to some other value. As for users - after MySQL is installed, you should create a new superuser (GRANT ALL … WITH GRANT OPTION) and then remove the existing ‘root@’ accounts.
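
A minimal sketch of both changes (the port number and account name are arbitrary; drop the old account only after verifying the new one works):

# my.cnf
[mysqld]
port = 3307

mysql> CREATE USER 'dbadmin'@'localhost' IDENTIFIED BY 'a_strong_password';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'dbadmin'@'localhost' WITH GRANT OPTION;
mysql> DROP USER 'root'@'localhost';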

5. Network security

Ideally, MySQL would not be available through the network and all connections would be handled locally, through the Unix socket. In some setups this is possible - in that case you can add the ‘skip-networking’ variable in my.cnf. This will prevent MySQL from using any TCP/IP communication; only the Unix socket would be available on Linux (named pipes and shared memory on Windows hosts).
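
A sketch of that configuration, plus a quick way to verify it after a restart:

# my.cnf
[mysqld]
skip-networking

shell> ss -lnt | grep 3306    # should return nothing once MySQL stops listening on TCP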

Most of the time, though, such tight security is not feasible. In that case you need to find another solution. First, you can use your firewall to allow traffic only from specific hosts to the MySQL server - for instance, application hosts (although they should be fine reaching MySQL through proxies), the proxy layer, and maybe a management server. Other hosts in your network probably do not need direct access to the MySQL server. This will limit the possibilities of an attack on your database should some hosts in your network be compromised.
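
With iptables, for example, allowing only an application host and a management host (addresses are made up) could look like this:

shell> iptables -A INPUT -p tcp --dport 3306 -s 10.0.0.101 -j ACCEPT   # application host
shell> iptables -A INPUT -p tcp --dport 3306 -s 10.0.0.200 -j ACCEPT   # management host
shell> iptables -A INPUT -p tcp --dport 3306 -j DROP                   # everyone else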

If you happen to use proxies which allow regular expression matching for queries, you can use them to analyze the SQL traffic and block suspicious queries. Most likely your application hosts shouldn’t run “DELETE FROM your_table;” (a full-table delete) on a regular basis. If some data needs to be removed, it can be executed by hand, locally, on the MySQL instance. You can create such rules using something like ProxySQL: block, rewrite, or redirect such queries. MaxScale also gives you an option to block queries based on regular expressions.
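
In ProxySQL, for instance, a blocking rule is just a row in mysql_query_rules with an error message instead of a destination (a sketch; the rule id and pattern are made up):

Admin> INSERT INTO mysql_query_rules (rule_id, active, match_pattern, error_msg, apply) VALUES (10, 1, '^DELETE\s+FROM\s+your_table\s*;?\s*$', 'Full-table DELETEs are blocked', 1);
Admin> LOAD MYSQL QUERY RULES TO RUNTIME;
Admin> SAVE MYSQL QUERY RULES TO DISK;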

6. Audit plugins

If you are interested in collecting data on who executed what and when, there are several audit plugins available for MySQL. If you use MySQL Enterprise, you can use MySQL Enterprise Audit, which is an extension to MySQL Enterprise. Percona and MariaDB also have their own versions of audit plugins. Lastly, the McAfee plugin for MySQL can also be used with different versions of MySQL. Generally speaking, those plugins collect more or less the same data - connect and disconnect events, queries executed, tables accessed. All of this contains information about which user participated in such an event, from which host it connected, when it happened, and so on. The output can be XML or JSON, so it’s much easier to parse than the general log contents (even though the data is rather similar). Such output can also be sent to syslog and, further, to some sort of log server for processing and analysis.

7. Disable LOAD DATA LOCAL INFILE

If both the server and the client have the ability to run LOAD DATA LOCAL INFILE, a client will be able to load data from a local file to a remote MySQL server. This can potentially help to read files the client has access to - for example, on an application server, one could access any file that the HTTP server has access to. To avoid this, you need to set local-infile=0 in my.cnf.
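
That is a one-line change plus a quick verification:

# my.cnf
[mysqld]
local-infile = 0

mysql> SHOW GLOBAL VARIABLES LIKE 'local_infile';   -- should report OFF after a restart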

8. File privileges

You have to keep in mind that MySQL security also depends on the operating system setup. MySQL stores data in the form of files. The MySQL server writes plenty of information to logs. Sometimes this information contains data - the slow query log, general log or binary log, for example. You need to make sure that this information is safe and accessible only to users who have to access it. Typically this means that only root and the user under whose rights MySQL is running should have access to all MySQL-related files. Most of the time it’s a dedicated user called ‘mysql’. You should check MySQL configuration files and all the logs generated by MySQL and verify that they are not readable by other users.

9. SSL and Encryption of Data in Transit

Preventing people from accessing configuration and log files is one thing. The other issue is to make sure data is securely transferred over the network. With the exception of setups where all the clients are local and use the Unix socket to access MySQL, in the majority of cases data which forms a result set for a query leaves the server and is transferred to the client over the network. Data can also be transferred between MySQL servers, for example via standard MySQL replication or within a Galera cluster. Network traffic can be sniffed, and through those means, your data would be exposed.

To prevent this from happening, it is possible to use SSL to encrypt traffic, both server and client-side. You can create an SSL connection between a client and a MySQL server. You can also create an SSL connection between your master and your slaves, or between the nodes of a Galera cluster. This will ensure that all data that is transferred is safe and cannot be sniffed by an attacker who gained access to your network.
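
Once the server has certificates configured, you can force individual accounts and replication channels onto TLS, roughly like this (the account name and CA path are placeholders):

mysql> ALTER USER 'myuser'@'10.0.0.101' REQUIRE SSL;
mysql> SHOW GLOBAL VARIABLES LIKE 'have_ssl';        -- YES means the server is built with and configured for SSL
mysql> -- on a slave, encrypt the replication stream as well (stop replication first):
mysql> CHANGE MASTER TO MASTER_SSL = 1, MASTER_SSL_CA = '/etc/mysql/ssl/ca.pem';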

The MySQL documentation covers in detail how to set up SSL encryption. If you find it too cumbersome, ClusterControl can help you deploy a secure environment for MySQL replication or a Galera cluster in a couple of clicks:

10. Encryption of Data at Rest

Securing data in transit using SSL encryption only partially solves the problem. You need to take care also of data at rest - all the data that is stored in the database. Data at rest encryption can also be a requirement for security regulations like HIPAA or PCI DSS. Such encryption can be implemented on multiple levels - you can encrypt the whole disk on which the files are stored. You can encrypt only the MySQL database through functionality available in the latest versions of MySQL or MariaDB. Encryption can also be implemented in the application, so that it encrypts the data before storing it in the database. Every option has its pros and cons: disk encryption can help only when disks are physically stolen, but the files would not be encrypted on a running database server. MySQL database encryption solves this issue, but it cannot prevent access to data when the root account is compromised. Application level encryption is the most flexible and secure, but then you lose the power of SQL - it’s pretty hard to use encrypted columns in WHERE or JOIN clauses.

All flavors of MySQL provide some sort of data at rest encryption. Oracle’s MySQL uses Transparent Data Encryption to encrypt InnoDB tablespaces. This is available in the commercial MySQL Enterprise offering. It provides an option to encrypt InnoDB tablespaces; other files which also store data in some form (for example, binary logs, general log, slow query log) are not encrypted. This allows the toolchain (MySQL Enterprise Backup, but also xtrabackup, mysqldump, mysqlbinlog) to work correctly with such a setup.

Starting from MySQL 5.7.11, the community version of MySQL also got support for InnoDB tablespace encryption. The main difference compared to the enterprise offering is the way the keys are stored - the keys are not located in a secure vault, which is required for regulatory compliance. This also means that, starting from Percona Server 5.7.11, it is possible to encrypt InnoDB tablespaces. In the recently published Percona Server 5.7.20, support for encrypting binary logs has been added. It is also possible to integrate with a Hashicorp Vault server via the keyring_vault plugin, matching (and even extending - binary log encryption) the features available in Oracle’s MySQL Enterprise edition.
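
As a rough illustration of how this looks in practice (assuming a keyring plugin is loaded via early-plugin-load and the table uses a file-per-table tablespace; the table name here is just an example):

ALTER TABLE customers ENCRYPTION='Y';        -- encrypt an existing InnoDB table
SELECT table_schema, table_name, create_options
FROM information_schema.tables
WHERE create_options LIKE '%ENCRYPTION%';    -- list the tables that have encryption enabled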

MariaDB added support for data encryption in 10.1.3 - it is a separate, enhanced implementation. It gives you the possibility to not only encrypt InnoDB tablespaces, but also InnoDB log files. As a result, data is more secure but some of the tools won’t work in such configuration. Xtrabackup will not work with encrypted redo logs - MariaDB created a fork, MariaDB Backup, which adds support for MariaDB encryption. There are also issues with mysqlbinlog.

No matter which MySQL flavor you use, as long as it is a recent version, you would have options to implement data at rest encryption via the database server, making sure that your data is additionally secured.

Securing MySQL or MariaDB is not trivial, but we hope these 10 tips will help you along the way.

Tags:  MySQL MariaDB security Database

Top 10 things to know about Alibaba Cloud RDS

Cloud-based Relational Database Services (RDS) offer businesses a variety of traditional Relational Database Management Systems (RDBMS) with several management and configuration options. In the Top 10 Things to Know about Amazon RDS blog, we covered Amazon RDS. In today’s article, we’ll be presenting Alibaba Cloud RDS. If you’ve been considering moving some or all of your database infrastructure to Alibaba Cloud RDS, or any Cloud RDS for that matter, there are many things to consider. Here are the top 10 things you need to know before taking the plunge.

About Alibaba Cloud RDS

Alibaba Cloud RDS, whose full name is Alibaba Cloud ApsaraDB for RDS, is an on-demand database service that handles some of the administrative tasks associated with managing a database, thus leaving you with more time to focus on your core business. ApsaraDB for RDS is offered with many popular RDBMS engines, including MySQL, SQL Server and PostgreSQL. RDS handles routine database tasks such as provisioning, patching, backup, recovery, failure detection and repair. ApsaraDB for RDS can also protect against network attacks and intercept SQL injections, brute force attacks and other types of database threats.

Alibaba Cloud RDS is a relative newcomer

Alibaba Cloud is the cloud computing arm and a business unit of Alibaba Group. Founded on September 10, 2009, Alibaba Cloud provides a comprehensive suite of global cloud computing services to power both its international customers’ online businesses as well as Alibaba Group’s own e-commerce ecosystem. Alibaba Cloud’s international operations are registered and headquartered in Singapore and operate in numerous other countries around the globe, including Dubai, Frankfurt, Hong Kong, London, New York, Paris, San Mateo, Seoul, Singapore, Sydney, and Tokyo.
 
Perhaps not quite as recognized as Google or Amazon, Alibaba Cloud has nonetheless been around since 2009 - long enough for a cloud service to be well established and to offer tremendous value for the money.

Alibaba Cloud RDS is easy to deploy and setup

Cloud RDSes can vary quite a bit in terms of setup and deployment difficulty. In those respects, Alibaba Cloud RDS is aligned to the far left of the scale (easy!).
The following diagram explains the steps you must follow from creating an instance to using it.

Here’s a quick breakdown of the steps to get up-and-running:

Purchasing

The first step is to set up a billing account. Service is charged in two ways: Subscription and Pay-As-You-Go. See the Cost section below for more details.
Once your credit card has been approved, you’re ready to buy an RDS instance. Just go to the RDS area on the Alibaba Cloud website and find a configuration set that suits you.
You can select three parameters – Memory, Storage Space and Connection Quantity – based on your requirements.

Configuring the RDS

After purchasing and activating a suitable plan, log in to the Alibaba Cloud console to view your RDS instances. For example, the New Type II plan supports up to 50 database accounts and 200 databases.

    1. Create Databases and Users
      • On the console, in “Database Management”, select Create Database. This will open the Add Database page, as shown below. You can select a database name and character set as needed. RDS supports UTF-8, GBK, and Latin-1 character sets.
      • Then, set the database account and password. The password should not be too simple. Select “Read/Write” for the account type.
    2. Switch the Connection Mode
      • If you connect to RDS over an Alibaba Cloud server, select Intranet mode; otherwise, select Internet mode. You can switch the connection mode in the top-right corner of the console.
    3. Set the IP White List
      • You can set the IP addresses that are allowed to access your RDS instance. This is very important for security purposes. Here, simply enter the IP address of the server to be allowed to access the RDS. In Internet mode, enter the Internet IP address; in intranet mode, enter the intranet IP address. Entering the “%” wildcard will allow access by all IP addresses and is therefore not recommended.

After configuring the above settings, you can start using RDS.

Data Migration

Alibaba Cloud provides a set of migration tools, which can be a bit difficult for novices. Fortunately, you do not have to use them if your database is small.

Alibaba Cloud RDS does not provide access to the mysql.user table

Users of Alibaba Cloud RDS do not have direct access to the mysql.user table. Instead, Alibaba Cloud RDS provides “RAM-Users”. These are identities used in RAM to represent a real identity, such as a user or an application. To give a new user or an application access to your cloud resources, you create and grant permissions to a RAM-User. The general procedure is as follows:

  1. Use the primary account (or a RAM-User with RAM operation permissions) to log on to the RAM console.
  2. Create a RAM user and add the user to one or more groups.
  3. Attach one or more authorization policies to the user (or the group to which the user belongs).
  4. Create a credential for the user. Note that:
    • If the user is to perform operations using the console, you must set a logon password for the user.
    • If the user is to call APIs, you must create an API AccessKey for the user.
    • If the user needs to use special permissions (for example, to stop ECS instances), you can set Multi-factor authentication (MFA) for the user and require that he/she uses an MFA password to log on to the Alibaba Cloud console.
  5. Provide the user with the logon URL, username, and logon password.
Create Master Account

Alibaba Cloud RDS offers another account type called high privilege accounts. These have the authority to execute ‘create’, ‘drop’, and ‘grant’ commands within instances.
 
After creating a master account, the console no longer supports database management for that instance - the master account becomes the only account you can manage through the console, and you need to log on to the database directly to perform maintenance and manage all the other accounts.
You can create an account when configuring a new database:

Creating an account

You can also create new accounts on the Accounts screen, which is available from the management console:
 

Create Generic user

Alibaba Cloud RDS provides limited configuration changes

Compared to competitors’ offerings – Amazon RDS comes to mind – Alibaba’s console offers fewer configuration options.
Take database parameters as an example.
RDS allows you to configure database instance parameters through the RDS console. RDS is compatible with the native database service and features similar parameter setting methods. Parameters may also be set by running the related commands through the API.
Some parameters require a restart after modification, while others do not.

Amazon RDS, on the other hand, manages DB engine configuration through the use of parameters in a DB parameter group. DB parameter groups act as a container for engine configuration values that are applied to one or more DB instances. You can use your own DB parameter group by simply creating a new DB parameter group, modifying the desired parameters, and setting your DB instance to use the new DB parameter group. You can also copy an existing parameter group if you have already created a DB parameter group and you want to include most of the custom parameters and values from that group in a new DB parameter group.
Moreover, parameters are accessible via the AWS CLI and API, in addition to the console.

The performance schema database is off by default

Alibaba Cloud RDS instances have the performance schema database turned off by default, which greatly limits our ability to gather performance metrics.
However, it can be turned on from the Alibaba Cloud Console by clicking on the pencil icon and entering a value of “ON” in the Edit dialog:


A reboot of the instance is required to implement the changes:

Restart Instance
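
Once the instance is back up, you can confirm the change from any MySQL client; a quick sanity check (standard MySQL syntax, nothing Alibaba-specific):

SHOW GLOBAL VARIABLES LIKE 'performance_schema';   -- should now report ON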

Network Latency

An inherent drawback of all cloud-based systems is network latency. When you’re located on a different continent than an Alibaba Cloud data center, you can expect delayed transmission of data. Add to this intra-cloud network latency and network-attached storage latency, and you have a significant slowing of services. This depends largely of course on the quality, stability, and speed of your Internet connection.
The most reliable way to minimize network latency is to choose the data center that is closest to your application server(s). If your servers are scattered across several areas, consider employing several database instances, one for each application server.
Alibaba provides several data centers; most are in China – their company headquarters – and around the South Pacific. However, there are locations in Germany, the Middle East, and the US:

cloud_centres

Alibaba Cloud provides some monitoring of the RDS instance.

Accessible via the “Monitoring and Alarms” link in the left-hand instance sidebar, the CloudMonitor displays the ApsaraDB for RDS operation status based on four metrics: Disk usage, IOPS usage, Connection usage, and CPU usage. CloudMonitor automatically starts monitoring the above four metrics without any additional configuration as soon as you purchase an ApsaraDB for RDS service.
CloudMonitor will also create alarm rules for each primary and read-only instance. These rules set up the thresholds of CPU usage, Connection usage, IOPS usage, and Disk usage all to 80%. When metric data exceeds any of the above thresholds, a text message and an email will be sent to the alarm contact account.

CloudWatch Resource Monitoring

 

CloudWatch Engine Monitoring

Below are the OS and MySQL metrics available through the Alibaba GUI:
  • OS Metrics :
  1. CPU and Memory usage.
  2. Disk Space
  3. IOPS
  4. Total Connections
  5. Network Traffic
  • MySQL metrics :
  1. TPS (Transactions per Second)/QPS (Queries per Second)
  2. InnoDB Buffer Pool Read Hit Ratio, Usage Ratio, and Dirty Block Ratio (%)
  3. InnoDB Read/Write Volume (KB)
  4. InnoDB Buffer Pool Read/Write Frequency
  5. InnoDB Log Read/Write/fsync
  6. Number of temporary tables created automatically on disk while MySQL is executing statements
  7. MySQL_COMDML
  8. MySQL_RowDML
  9. MyISAM Read/Write Frequency
  10. MyISAM Key Buffer Read/Write/Usage Ratio (%)

Other Cloud RDSes, like Amazon’s RDS database platform, provide statistics and advice about many types of metrics including:

  • High CPU or RAM consumption
  • Disk space consumption
  • Network traffic
  • Database connections
  • IOPS (input-output operations per second) metrics

By comparison, Amazon RDS provides a variety of automated and manual tools that you can use to monitor your RDS database(s). These tools are spread across several dashboards including Amazon RDS, CloudWatch, AWS Trusted Advisor and other AWS console dashboards. Each targets different types of metrics about resource utilization and performance as well as services like alarms and troubleshooting.

Alibaba Cloud provides backup and recovery tools

ApsaraDB supports data backup and log backup. You can configure an auto-backup policy to adjust the cycles of RDS data backup and log backup as well as manually back up RDS data at any time.
Note that, since backup files occupy backup space, charges are incurred if the used space exceeds the free quota!

Configuring a backup cycle

The Recovery screen may be utilized to recover a backup file to the target node. There are three types of recovery operations:
  • Master node roll-back: This can be used to restore a node to the state that it was in at a specific point in time.
  • Slave node repair: This can be used to automatically create a new slave node to reduce risks when an irreparable failure occurs to the slave node.
  • Read-only instance creation: This creates a read-only instance from a backup.

The Backup List tab

Cost

RDS service is charged in two ways: Subscription and Pay-As-You-Go.
Pay-As-You-Go bills you only for the volume actually used, letting you activate and stop resources as needed with no additional hardware or maintenance costs.
With a Subscription, the average price is lower than the equivalent Pay-As-You-Go fees, making this pricing plan better suited to those who intend to consume more resources.
See the pricing page for more information.
There are discounts of up to 50% on all regions in Mainland China. Right now, new users get $300 USD in free credit so that they may try out Alibaba Cloud RDS for free.

Monyog is a MySQL monitoring tool that improves the database performance of your MySQL powered systems. Download your free trial.


Possible ways to improve performance if you feel affected by Spectre and Meltdown

If you are concerned about performance degradation caused by the Spectre and Meltdown patches, it might be a good idea to run a few checks that can help win back some of your MySQL performance:

- Check if your instance’s CPUs are running at full power:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

if the output is

powersave

You may benefit from setting the CPU frequency governor to performance mode:

for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor ; do echo performance > $i ; done

Please refer to https://wiki.debian.org/HowTo/CpuFrequencyScaling and https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt for more details.

- Check if your MySQL instances are configured properly. The buffer pool should be big enough to keep all hot data, and IO-related variables (innodb_flush_log_at_trx_commit, sync_binlog, etc.) should be configured according to your needs and not cause unnecessary load.
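
A quick way to review the relevant settings from a MySQL client (a sketch only; the right values depend entirely on your workload and durability requirements):

SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb,
       @@innodb_flush_log_at_trx_commit AS flush_log_at_trx_commit,
       @@sync_binlog AS sync_binlog;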

- Check that your queries are well optimized. One of the signs of bad queries is a growing Select_scan counter in MySQL status. If it’s non-zero and still counting up, you probably have some room for performance improvements.
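
You can watch this counter like so (run it twice a few minutes apart and compare the values):

SHOW GLOBAL STATUS LIKE 'Select_scan';       -- full table scans caused by SELECTs
SHOW GLOBAL STATUS LIKE 'Select_full_join';  -- joins performed without usable indexes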

- If you are running your infrastructure in the cloud and hit performance issues because of Spectre and Meltdown, you may try to optimize costs by checking for unused volumes and unused or overprovisioned boxes, and so win some budget for business-critical instance upgrades. Take a look at the billing plan. Do you see places for optimization? Some cloud providers charge for cross availability zone traffic – could it be removed completely or optimized out?

Usually there is a lot of room for performance improvements, so don’t be afraid!

Why You Should Avoid Using “CREATE TABLE AS SELECT” Statement

In this blog post, I’ll provide an explanation why you should avoid using the CREATE TABLE AS SELECT statement.

The SQL statement “create table <table_name> as select …” is used to create a normal or temporary table and materialize the result of the select. Some applications use this construct to create a copy of the table. This is one statement that will do all the work, so you do not need to create a table structure or use another statement to copy the structure.

At the same time there are a number of problems with this statement:

  1. You don’t create indexes for the new table
  2. You are mixing transactional and non-transactional statements in one transaction. As with any DDL, it will commit current and unfinished transactions
  3. CREATE TABLE … SELECT is not supported when using GTID-based replication
  4. Metadata locks won’t release until the statement is finished

When CREATE TABLE AS SELECT statement can break things very badly

Let’s imagine we need to transfer money from one account to another (classic example). But in addition to just transferring funds, we need to calculate fees. The developers decide to create a table to perform a complex calculation.

Then the transaction looks like this:

begin;
update accounts set amount = amount - 100000 where account_id=123;
-- now we calculate fees
create table as select ... join ...
update accounts set amount = amount + 100000 where account_id=321;
commit;

The “create table as select … join … ” commits the transaction, which is not safe. In case of an error, the second account obviously will not be credited, while the first account’s debit has already been committed!

Well, instead of “create table … “, we can use “create temporary table …”, which fixes the issue, as creating a temporary table does not trigger an implicit commit.

GTID issue

If you try to use CREATE TABLE AS SELECT when GTID is enabled (and ENFORCE_GTID_CONSISTENCY = 1) you get this error:

General error: 1786 CREATE TABLE ... SELECT is forbidden when @@GLOBAL.ENFORCE_GTID_CONSISTENCY = 1.

The application code may break.

Metadata lock issue

Metadata lock issue for CREATE TABLE AS SELECT is less known. (More information about the metadata locking in general). Please note: MySQL metadata lock is different from InnoDB deadlock, row-level locking and table-level locking.

This quick simulation demonstrates metadata lock:

session1:

mysql> create table test2 as select * from test1;

session2:

mysql> select * from test2 limit 10;

— blocked statement

This statement is waiting for the metadata lock:

session3:

mysql> show processlist;
+----+------+-----------+------+---------+------+---------------------------------+--------------------------------------------+
| Id | User | Host      | db   | Command | Time | State                           | Info                                       |
+----+------+-----------+------+---------+------+---------------------------------+--------------------------------------------+
|  2 | root | localhost | test | Query   |   18 | Sending data                    | create table test2 as select * from test1  |
|  3 | root | localhost | test | Query   |    7 | Waiting for table metadata lock | select * from test2 limit 10               |
|  4 | root | localhost | NULL | Query   |    0 | NULL                            | show processlist                           |
+----+------+-----------+------+---------+------+---------------------------------+--------------------------------------------+

The same can happen another way: a slow select query can prevent some DDL operations (i.e., rename, drop, etc.):

mysql> show processlist\G
*************************** 1. row ***************************
           Id: 4
         User: root
         Host: localhost
           db: reporting_stage
      Command: Query
         Time: 0
        State: NULL
         Info: show processlist
    Rows_sent: 0
Rows_examined: 0
    Rows_read: 0
*************************** 2. row ***************************
           Id: 5
         User: root
         Host: localhost
           db: test
      Command: Query
         Time: 9
        State: Copying to tmp table
         Info: select count(*), name from test2 group by name order by cid
    Rows_sent: 0
Rows_examined: 0
    Rows_read: 0
*************************** 3. row ***************************
           Id: 6
         User: root
         Host: localhost
           db: test
      Command: Query
         Time: 5
        State: Waiting for table metadata lock
         Info: rename table test2 to test4
    Rows_sent: 0
Rows_examined: 0
    Rows_read: 0
3 rows in set (0.00 sec)

As we can see, CREATE TABLE AS SELECT can affect other queries. However, the problem here is not the metadata lock itself (the metadata lock is needed to preserve consistency). The problem is that the metadata lock will not be released until the statement is finished. 

The fix is simple: copy the table structure first by doing “create table new_table like old_table”, then do “insert into new_table select …”. The metadata lock is still held for the create table part (very short), but isn’t for the “insert … select” part (the total time to hold the lock is much shorter). To illustrate the difference, let’s look at two cases:

  1. With “create table table_new as select … from table1“, other application connections can’t read from the destination table (table_new) for the duration of the statement (even “show fields from table_new” will be blocked)
  2. With “create table new_table like old_table” + “insert into new_table select …”, other application connections can’t read from the destination table during the “insert into new_table select …” part.
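
Put together, the safer pattern from point 2 looks roughly like this (new_table and old_table are the placeholder names used in the text above):

create table new_table like old_table;          -- very short DDL; the metadata lock is released almost immediately
insert into new_table select * from old_table;  -- the long-running part; no blocking metadata lock on new_table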

In some cases, however, the table structure is not known beforehand. For example, we may need to materialize the result set of a complex select statement, involving joins and/or group by. In this case, we can use this trick:

create table new_table as select ... join ... group by ... limit 0;
insert into new_table select ... join ... group by ...;

The first statement creates the table structure and doesn’t insert any rows (LIMIT 0). It still places a metadata lock, but only very briefly. The second statement actually inserts rows into the table and doesn’t place a metadata lock on it.

More reading on metadata locks and how to troubleshoot them in MySQL 5.7: 

Quickly Troubleshoot Metadata Locks in MySQL 5.7

 

 

ProxySQL native HTTP server and metrics on a web UI

Introducing HTTP server in ProxySQL 1.4.4

ProxySQL exports a lot of metrics, all visible in the stats schema and queryable using any client that speaks the MySQL protocol.
This allows ingesting metrics into external monitoring tools, like Percona PMM, Grafana, Severalnines ClusterControl, Graphite, Datadog, and others that already exist or are under development.
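
For example, assuming the default admin interface port (6032) and the stats credentials, the counters can be pulled with plain SQL (table names as shipped in the 1.4.x stats schema):

-- connect with: mysql -u stats -p -h 127.0.0.1 -P 6032
SELECT * FROM stats.stats_mysql_global;            -- global counters (questions, connections, bytes, ...)
SELECT * FROM stats.stats_mysql_connection_pool;   -- per-backend connection pool status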

Although we believe these monitoring tools are the preferred and right way to monitor ProxySQL, we often faced the issue that ProxySQL isn’t monitored while it is still being evaluated. This makes troubleshooting performance and behavior in such scenarios very difficult. In fact, it is common that users report issues, but without metrics it is often very difficult to distinguish between a real performance bug, a misconfiguration, or issues completely unrelated to ProxySQL.
For this reason, ProxySQL version 1.4.4 introduces a new and experimental interface: a web UI that exports some metrics.
This feature doesn't replace all the statistics already available through the Admin interface, but it facilitates troubleshooting while ProxySQL is still being evaluated and metrics are not yet collected into an external tool.

This feature is not final yet, only a few metrics are available, and it is subject to change in the future.
The web UI also provides a simple dashboard that summarizes the overall configuration and status of ProxySQL itself.
Current metrics available are:

  • System Metrics: related to CPU and memory utilization
  • MySQL Metrics: related to number of MySQL connections in use (both frontends and backends), and connection pool efficiency
  • Query Cache Metrics: related to Query Cache efficiency
Configure HTTP server

If you are familiar with ProxySQL, you already know that most of its configuration can be changed at runtime.
HTTP server is no different, and you can configure it, enable or disable it at runtime.
Currently there are only 2 variables related to HTTP server:

Internally we had a long exchange of opinions on whether the HTTP server should be enabled or disabled by default. Being a new and very useful functionality, it made sense to enable it by default. However, security was a concern: even though it doesn't allow changing the configuration of ProxySQL (it is a read-only interface that exports metrics), it could potentially be used to exploit unknown vulnerabilities.
For this reason we made the HTTP server disabled by default (admin-web_enabled=false).
If you want to access these metrics through the web interface, you can simply enable it by configuring admin-web_enabled=true. For example:

SET admin-web_enabled='true'; LOAD ADMIN VARIABLES TO RUNTIME;

Similarly, to disable it:

SET admin-web_enabled='false'; LOAD ADMIN VARIABLES TO RUNTIME;

If you want to make the change persistent to disk, do not forget to run SAVE ADMIN VARIABLES TO DISK.

A note on the above: the HTTP server is considered part of the Admin module, therefore after you change its variables you need to use LOAD ADMIN VARIABLES TO RUNTIME to make the change effective, and issue SAVE ADMIN VARIABLES TO DISK to persist them on disk.

Once the HTTP server is enabled, you can point your browser to port 6080 (the default), for example http://127.0.0.1:6080 , and use admin-stats_credentials to log in. For further security, we recommend changing the default admin-stats_credentials.

Once you login, a dashboard with generic information is displayed:

From here, you can choose a category to get useful metrics.
The examples below show System statistics and MySQL Connections:

Statistics are collected even if HTTP server is not running

Historical statistics to be exported in the web interface are being collected even if HTTP server is disabled.
In other words, when you enable HTTP server you will have access to statistics collected even when HTTP server was not enabled.
Statistics are stored in a new database file, named proxysql_stats.db.
During the development of this new feature, concerns were raised about its performance impact; nonetheless, we managed to make ProxySQL 1.4.4 even faster than ProxySQL 1.4.3!

MariaDB ColumnStore Distributed User Defined Aggregate Functions

MariaDB ColumnStore Distributed User Defined Aggregate Functions david.hall@mar… Wed, 01/10/2018 - 11:31

MariaDB ColumnStore 1.1 introduces the Distributed User Defined Aggregate Functions (UDAF) C++ API. MariaDB Server has supported UDAF (a C API) for a while, but now we have extended it to the ColumnStore Engine. This new feature allows anyone to create aggregate functions of arbitrary complexity for distributed execution in the ColumnStore Engine. These functions can also be used as Analytic (Window) functions just like any built in aggregate. You should have a working understanding of C++ to use this API.

The UDAF API supports an arbitrary number of parameters to be defined for your Aggregate function. Version 1.1 of ColumnStore supports one parameter and will be enhanced to support any number of parameters in the next version.

For example, defining MEDIAN() is allowed, but PERCENTILE_CONT() with its additional percentile parameter will only be enabled in the next version. However, there are some workarounds for the short term, such as using a distributed MEDIAN, which provides functionality equivalent to PERCENTILE_CONT(0.5). In this example, MEDIAN can be extended to support other percentile values: if you want to support a 90th percentile function, a PERCENTILE90() function can be implemented today.

So, how does this work? MariaDB ColumnStore has had a distributed aggregate function capability since the start. With the 1.1 release, we’ve added new functionality that allows the engine to recognize when a UDAF has been called and perform callbacks at each level of execution. The code you write must define the data structures and the work to be performed on those structures.

It’s relatively straightforward. There are two classes you need to become familiar with -- mcsv1Context and mcsv1_UDAF. mcsv1Context is the class that holds the state of your function during execution. mcsv1_UDAF is the class you extend to write your code.

Writing a UDAF requires extending the class mcsv1_UDAF and implementing a few functions. In many cases, these implementations are straightforward, but it really depends on what you’re trying to do. For use as an aggregate, some functions are called on the UM and some on the PM. For example, the UDAF ssq.cpp is a simple implementation that shows the basic implementation of each function.

​For use as Analytic functions, all calls are on the UM.

​In addition, you must write the same exact function in the MariaDB UDAF C API. This is required and can be a simple stub or a complete implementation, depending on whether you want your function to work as an aggregate for other engines. But, it is needed to tell the parser that your function exists.

Since memory must be allocated at each node for the work being performed there, MariaDB ColumnStore handles when and where memory is allocated. You may choose to provide a method to do that allocation which the engine calls when it needs to.

You may have a need for some complex data structure, hash table, vectors or other subobjects. In this situation, you need to become familiar with the Complex Data Model as described in the udaf_sdk documentation. There is no limit to the complexity of the memory model you choose. It is important that you create a UserData derived class that can serialize and un-serialize itself. Its destructor must clean up all allocated memory.

If all you need is a simple data structure, you may forgo all the memory allocation steps and rely on the base UserData class. Its default functionality is to allocate a fixed block of memory and to stream that block as a single binary value. You need do nothing except set the amount of memory in the init() method and overlay your structure in each callback method.

MariaDB ColumnStore 1.1 doesn’t support dynamic loading of plugins, so your UDAF must be compiled and linked in the MariaDB ColumnStore code tree in the ./utils.udfsdk directory. You must compile and link it into libudfsdk.so.1.1.0 along with all the other user defined functions. This library must be placed into the mariadb/columnstore/lib directory of each node. The MariaDB C UDAF code must be compiled and linked into libudf_mysql.so.1.0.0 and placed into the same place. There’s a symlink already there for mysqld to find it.

Then to activate your UDAF, in a mysql client, issue a command similar to:
CREATE AGGREGATE FUNCTION median returns REAL soname 'libudf_mysql.so'
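
Once registered, the function can be called like any built-in aggregate. A rough usage sketch against a hypothetical orders table (the table and column names are not from the original post):

SELECT region, median(order_total) AS median_order
FROM orders
GROUP BY region;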

In future blogs, I’ll delve deep into each step needed to create and use a UDAF.

User Defined Aggregates open up a whole new avenue to extract value from analytic data. We hope you enjoy using this new tool! MariaDB ColumnStore 1.1 is available for download as part of MariaDB AX, an enterprise open source solution for modern data analytics and data warehousing.


How to Secure Your Open Source Databases with ClusterControl

Security is one of the most important aspects of running a database. Whether you are a developer or a DBA, if you are managing the database, it is your responsibility to safeguard your data and protect it from any kind of unauthorized access. The unfortunate fact is that many organizations do not protect their data, as we’ve seen from the new wave of MongoDB ransomware attacks in September 2017. We had earlier published a blog on how to secure MongoDB databases.

In this blog post, we’ll have a look into how to secure your databases using ClusterControl. All of the features described here are available in version 1.5.1 of ClusterControl (released on December 23, 2017). Please note that some features are only available for certain database types.

Backup Encryption

ClusterControl 1.5.1 introduced a new feature called backup encryption. All encrypted backups are marked with a lock icon next to them:

You can use this feature on all backup methods (mysqldump, xtrabackup, mongodump, pg_dump) supported by ClusterControl. To enable encryption, simply toggle on the "Enable Encryption" switch when scheduling or creating the backup. ClusterControl automatically generates a key to encrypt the backup. It uses AES-256 (CBC) encryption algorithm and performs the encryption on-the-fly on the target server. The following command shows an example of how ClusterControl performs a mysqldump backup:

$ mysqldump --defaults-file=/etc/my.cnf --flush-privileges --hex-blob --opt --no-create-info --no-data --triggers --routines --events --single-transaction --skip-comments --skip-lock-tables --skip-add-locks --databases db1 | gzip -6 -c | openssl enc -aes-256-cbc -pass file:/var/tmp/cmon-094508-e0bc6ad658e88d93.tmp | socat - TCP4:192.168.55.170:9999'

You would see the following error if you tried to decompress an encrypted backup without decrypting it first with the proper key:

$ gunzip mysqldump_2018-01-03_175727_data.sql.gz gzip: mysqldump_2018-01-03_175727_data.sql.gz: not in gzip format

The key is stored inside the ClusterControl database, and can be retrieved from the cmon_backup.metadata file for a particular backup set. It will be used by ClusterControl when performing restoration. Encrypting backups is highly recommended, especially when you want to secure your backups offsite like archiving them in the cloud.

MySQL/PostgreSQL Client-Server Encryption

Apart from following the recommended security steps during deployment, you can further secure your database service by using client-server SSL encryption. Using ClusterControl, you can perform this operation with a simple point and click:

You can then retrieve the generated keys and certificates directly from the ClusterControl host under /var/lib/cmon/ca path to establish secure connections with the database clients. All the keys and certificates can be managed directly under Key Management, as described further down.

Database Replication Encryption

Encryption of replication traffic within a Galera Cluster can be enabled with just one click. ClusterControl uses a 2048-bit default key and certificate generated on the ClusterControl node, which is transferred to all the Galera nodes:

A cluster restart is necessary. ClusterControl will perform a rolling restart operation, taking one node at a time. You will see a green lock icon next to the database server (Galera indicates Galera Replication encryption, while SSL indicates client-server encryption) in the Hosts grid of the Overview page once encryption is enabled:

All the keys and certificates can be managed directly under Key Management, as described further down.

Key Management

All the generated keys and certificates can be managed directly from the ClusterControl UI. Key Management allows you to manage SSL certificates and keys that can be provisioned on your clusters:

If the certificate has expired, you can simply use the UI to generate a new certificate with proper key and Certificate Authority (CA), or import an existing key and certificate into ClusterControl host.

Security Advisors

Advisors are mini-programs that run in ClusterControl. They perform specific tasks and provide advice on how to address issues in areas such as performance, security, log management, configuration, storage space and others. Each advisor can be scheduled like a cron job, and run as a standalone executable within the ClusterControl UI. It can also be run via the ClusterControl 's9s' command line client.

ClusterControl enables two security advisors for MySQL-based systems:

  • Access from any host ('%') - Identifies all users that use a wildcard host from the mysql system table, and lets you have more control over which hosts are able to connect to the servers.
  • Check number of accounts without a password - Identifies all users who do not have a password in the mysql system table.
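
The underlying checks boil down to queries like these against the mysql system tables (a sketch only; the password column is authentication_string on MySQL 5.7, while older versions and MariaDB use password):

SELECT user, host FROM mysql.user WHERE host = '%';                  -- accounts reachable from any host
SELECT user, host FROM mysql.user WHERE authentication_string = '';  -- accounts without a password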

For MongoDB, we have the following advisors:

  • MongoDB authentication enabled - Check whether the MongoDB instance is running with authentication mode enabled.
  • Authorization check - Check whether MongoDB users are authorized with too permissive role for access control.

For more details on how ClusterControl performs the security checks, you can look at the advisor JavaScript-like source code under Manage -> Developer Studio. You can see the execution results on the Advisors page:

Multiple Network Interfaces

Having multiple NICs on the database hosts allows you to separate database traffic from management traffic. One network is used by the database nodes in order to communicate to each other, and this network is not exposed to any public network. The other network is used by ClusterControl, for management purposes. ClusterControl is able to deploy such a multi-network setup. Consider the following architecture diagram:

To import the above database cluster into ClusterControl, one would specify the primary IP address of the database hosts. Then, it is possible to choose the management network as well as the data network:

ClusterControl can also work in an environment without Internet access, with the databases being totally isolated from the public network. The majority of the features will work just fine. If the ClusterControl host has Internet access, it is also capable of cloning the database vendor's repository for the Internet-less database servers. Just go to Settings (top menu) -> Repositories -> Create New Repository and set the options to fit the target database server environment:

The mirroring may take about 10 to 20 minutes depending on the Internet connection; you will see the new item in the list once it completes. You can then pick this repository when scaling or deploying a new cluster, without the need for the database hosts to have any Internet connection (note that the operating system’s offline repository should be in place as well).

MySQL Users Management

The MySQL privilege system ensures that all users can perform only the operations they are allowed to. Granting is critical as you don't want to give all users complete access to your database, but you need users to have the necessary permissions to run queries and perform daily tasks.

ClusterControl provides an interactive user interface to manage the database schemas and privileges. It unifies the accounts on all MySQL servers in the cluster and simplifies the granting process. You can easily visualize the database users, so you avoid making mistakes.

As you can see in the above screenshot, ClusterControl greys out unnecessary privileges if you only want to grant a user access to a database (shopdb). "Require SSL?" is only enabled if the client/server SSL encryption is enabled, while the administration privilege checkboxes are disabled entirely if a specific database is defined. You can also inspect the generated GRANT statement at the bottom of the wizard, to see the statement that ClusterControl will execute to create this user. This helper looks pretty simple, but creating users and granting privileges can be error-prone.
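
The generated statement typically looks something like the sketch below (the user name, host range and password are placeholders; shopdb is the database from the example above):

CREATE USER 'shopapp'@'192.168.1.%' IDENTIFIED BY 'S3cretPassw0rd' REQUIRE SSL;
GRANT SELECT, INSERT, UPDATE, DELETE ON shopdb.* TO 'shopapp'@'192.168.1.%';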

ClusterControl also provides a list of inactive users for all database nodes in the cluster, showing off the accounts that have not been used since the last server restart:

This alerts the administrator to unnecessary accounts that exist and could potentially harm the server. The next step is to verify whether the accounts are indeed no longer active, and you can simply use the "Drop Selected User" option to remove them. Make sure you have enough database activity to ensure the list generated by ClusterControl is accurate. The longer the server uptime, the better.

Always Keep Up-to-date

For production use, it’s highly recommended for you to install the database-related packages from the vendor’s repository. Don’t rely on the default operating system repository, where the packages are usually outdated. If you are running in a cluster environment like Galera Cluster, or even MySQL Replication, you always have the choice to patch the system with minimal downtime.

ClusterControl supports automatic minor version rolling upgrade for MySQL/MariaDB with a single click. Just go to Manage -> Upgrades -> Upgrade and choose the appropriate major version for your running cluster. ClusterControl will then perform the upgrade, on one node at a time. The node will be stopped, then software will be updated, and then the node will be started again. If a node fails to upgrade, the upgrade process is aborted and the admin is notified. Upgrades should only be performed when there is as little traffic as possible on the cluster.

Major version upgrades (e.g., from MySQL 5.6 to MySQL 5.7) are intentionally not automated. Major upgrades usually require uninstallation of the existing packages, which is a risky task to automate. Careful planning and testing is necessary for this kind of upgrade.

Database security is an important aspect of running your database in production. From all the incidents we frequently read about in the news (and there are probably many others that are not publicized), it is clear that there are groups busy out there with bad intentions. So, make sure your databases are well protected.

Tags:  MySQL PostgreSQL MariaDB MongoDB security database security
