Planet MySQL

What Should I Monitor, and How Should I Do It?

Monitoring tools offer two core types of functionality: alerts, based on aliveness checks and on comparing metrics to thresholds; and time-series charts of status counters. Nagios and Graphite are the prototypical tools for these two jobs: Nagios for checks and alerting, Graphite for time-series charts.

But these tools don’t answer the crucial questions about what we should monitor. What kinds of aliveness/health checks should we build into Nagios? Which metrics should we monitor with thresholds to raise alarms, and what should those thresholds be? Which graphs of status counters should we build, which should we examine, and what do they mean?

We need guiding principles to help answer these questions. This webinar briefly introduces the principles that motivate and inform what we do at VividCortex, then explains which types of health checks and charts are valuable and what conclusions should be drawn from them. The webinar is focused mostly on MySQL database monitoring, but will be relevant beyond MySQL as well. Some of the questions we answer are:

  • What status counters from MySQL are central and core, and which are peripheral?
  • What is the meaning of MySQL status metrics?
  • Which subsystems inside MySQL are the most common causes of problems in production?
  • What is the unit of work-getting-done in MySQL, and how can you measure it?
  • Which open-source tools do a good job at monitoring in the way we recommend at VividCortex?
  • Which new and/or popular open-source tools should you evaluate when choosing a solution?

You will leave this webinar with a solid understanding of the types of monitoring you should be doing, the low-hanging fruit, and tools for doing it. This is not just a sales pitch for VividCortex. Register below, and we will send you a link to the recording and a copy of the slide deck.



PlanetMySQL Voting: Vote UP / Vote DOWN

Using GDB, investigating segmentation fault in MySQL

In a previous article, we covered some errors and issues that arise when using MySQL in a “disk full” environment, where there is no space left on the device (see: Testing Disk Full Conditions).
Today’s scenario: starting MySQL with GTID/binary log enabled in a zero-space-left Linux (CentOS 6.5) environment.

If you hit a bug or problem, the general rule for helping the community fix it is to provide as much information as possible. Especially useful is gdb output from a core dump. To learn how to get a core dump, read this wonderful article: Hunting-The-Core.

Now let’s explore our situation. Because our segfault occurs while starting MySQL, it is not possible to attach GDB to a running PID, nor to use strace.

Our my.cnf file:

[mysqld]
log_bin = /opt/mysql/datadir/mysql-bin
log_bin_index = /opt/mysql/datadir/mysql-bin
sync_binlog = 1
binlog_format = row
gtid-mode = on
log_slave_updates = 1
enforce-gtid-consistency = true
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
core-file

[mysqld_safe]
core_file_size=unlimited

Starting MySQL:

[root@localhost ~]# /opt/mysql/bin/mysqld_safe --defaults-file=/opt/mysql/my.cnf --user=mysql --datadir=/opt/mysql/datadir --socket=/opt/mysql/datadir/mysqld-new.sock --pid-file=/home/error_log_dir/mysqld-new.pid --port=3307 --log-error=/home/error_log_dir/error.err &
[1] 2849
[root@localhost ~]# 150427 06:31:42 mysqld_safe Logging to '/home/error_log_dir/error.err'.
150427 06:31:42 mysqld_safe Starting mysqld daemon with databases from /opt/mysql/datadir
/opt/mysql/bin/mysqld_safe: line 166: 3110 Segmentation fault (core dumped) nohup /opt/mysql/bin/mysqld --defaults-file=/opt/mysql/my.cnf --basedir=/opt/mysql --datadir=/opt/mysql/datadir --plugin-dir=/opt/mysql/lib/plugin --user=mysql --log-error=/home/error_log_dir/error.err --pid-file=/home/error_log_dir/mysqld-new.pid --socket=/opt/mysql/datadir/mysqld-new.sock --port=3307 < /dev/null >> /home/error_log_dir/error.err 2>&1
150427 06:32:07 mysqld_safe mysqld from pid file /home/error_log_dir/mysqld-new.pid ended

“3110 Segmentation fault (core dumped)” -> we have a core dump.

Using gdb with coredump:

[root@localhost coredumps]# gdb /opt/mysql/bin/mysqld core.3110
BFD: Warning: /home/coredumps/core.3110 is truncated: expected core file size >= 803807232, found: 697573376.
[New Thread 3110]
[New Thread 3111]
Cannot access memory at address 0x7f5ec00ff168
Cannot access memory at address 0x7f5ec00ff168
Cannot access memory at address 0x7f5ec00ff168
Reading symbols from /lib64/ld-linux-x86-64.so.2...Reading symbols from /usr/lib/debug/lib64/ld-2.12.so.debug...done.
done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Failed to read a valid object file image from memory.
Core was generated by `/opt/mysql/bin/mysqld --defaults-file=/opt/mysql/my.cnf --basedir=/opt/mysql --'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007f5ebfccd8ac in ?? ()
(gdb) bt
#0 0x00007f5ebfccd8ac in ?? ()
Cannot access memory at address 0x7fffb0aa89b8
(gdb) bt full
#0 0x00007f5ebfccd8ac in ?? ()
No symbol table info available.
Cannot access memory at address 0x7fffb0aa89b8

Such information is not sufficient for further analysis, which is why you should try to run the MySQL start command from within GDB, as follows:

[root@localhost error_log_dir]# gdb /opt/mysql/bin/mysqld
(gdb) run --defaults-file=/home/error_log_dir/my.cnf --basedir=/opt/mysql --datadir=/opt/mysql/datadir --plugin-dir=/opt/mysql/lib/plugin --user=mysql --log-error=/home/error_log_dir/error.err --pid-file=/home/error_log_dir/mysqld-new.pid --socket=/opt/mysql/datadir/mysqld-new.sock --port=3307
.
.
Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
.
.
(gdb) bt
#0 0x0000000000000000 in ?? ()
#1 0x0000000000ae6242 in my_printf_warning (format=0x1026f08 "Disk is full writing '%s' (Errcode: %d - %s). Waiting for someone to free space...") at /root/mysql-5.6.24/mysys/my_error.c:260
#2 0x0000000000ac9052 in wait_for_free_space (filename=0x1fe7ee0 "/opt/mysql/datadir/mysql-bin.~rec~", errors=0) at /root/mysql-5.6.24/mysys/errors.c:115
#3 0x0000000000af1ac7 in my_write (Filedes=19, Buffer=0x1964d00 "/opt/mysql/datadir/mysql-bin.000003\n", Count=36, MyFlags=52) at /root/mysql-5.6.24/mysys/my_write.c:89
#4 0x0000000000acd5ae in inline_mysql_file_write (src_file=0x1027708 "/root/mysql-5.6.24/mysys/mf_iocache.c", src_line=1788, file=19, buffer=0x1964d00 "/opt/mysql/datadir/mysql-bin.000003\n", count=36, flags=52) at /root/mysql-5.6.24/include/mysql/psi/mysql_file.h:1141
#5 0x0000000000ad078c in my_b_flush_io_cache (info=0x183c1a8, need_append_buffer_lock=0) at /root/mysql-5.6.24/mysys/mf_iocache.c:1787
#6 0x0000000000a7132b in MYSQL_BIN_LOG::sync_purge_index_file (this=0x183b400) at /root/mysql-5.6.24/sql/binlog.cc:4420
#7 0x0000000000a6e206 in MYSQL_BIN_LOG::open_binlog (this=0x183b400, log_name=0x190d130 "/opt/mysql/datadir/mysql-bin", new_name=0x0, io_cache_type_arg=WRITE_CACHE, max_size_arg=1073741824, null_created_arg=false, need_lock_index=true, need_sid_lock=true, extra_description_event=0x0) at /root/mysql-5.6.24/sql/binlog.cc:3146
#8 0x000000000063ad8e in init_server_components () at /root/mysql-5.6.24/sql/mysqld.cc:5012
#9 0x000000000063b6e7 in mysqld_main (argc=19, argv=0x186de68) at /root/mysql-5.6.24/sql/mysqld.cc:5455
#10 0x000000000062fc74 in main (argc=10, argv=0x7fffffffe3d8) at /root/mysql-5.6.24/sql/main.cc:25
######################
(gdb) bt full
#0 0x0000000000000000 in ?? ()
No symbol table info available.
#1 0x0000000000ae6242 in my_printf_warning (format=0x1026f08 "Disk is full writing '%s' (Errcode: %d - %s). Waiting for someone to free space...") at /root/mysql-5.6.24/mysys/my_error.c:260
        args = {{gp_offset = 32, fp_offset = 48, overflow_arg_area = 0x7fffffffd8a0, reg_save_area = 0x7fffffffd7e0}}
        wbuff = "Disk is full writing '/opt/mysql/datadir/mysql-bin.~rec~' (Errcode: 28 - No space left on device). Waiting for someone to free space...\000\330\327\377\377\377\177\000\000\210\341\377\367\377\177\000\000\340\267\376\367\377\177\000\000\271\332\024u\000\000\000\000Zn\336\367\377\177\000\000\000\000\000\000\000\000\000\000/\035!\366\377\177\000\000C\000\000\000\000\000\000\000f\223!\366\377\177\000\000\001\000\000\000\000\000\000\000P\031!\366\377\177\000\000LC_"...
        _db_stack_frame_ = {func = 0x102c53c "my_write", file = 0x102c518 "/root/mysql-5.6.24/mysys/my_write.c", level = 2147483654, prev = 0x7fffffffd9f0}
#2 0x0000000000ac9052 in wait_for_free_space (filename=0x1fe7ee0 "/opt/mysql/datadir/mysql-bin.~rec~", errors=0) at /root/mysql-5.6.24/mysys/errors.c:115
        errbuf = "No space left on device\000\325\024\257\000\035\000\000\000\230\324\002\001\000\000\000\000\272\324\002\001\000\000\000\000\000\331\377\377\034\000\000\000 ܆\001\000\000\000\000\034\000\000\000\006\000\000\000 ܆\001\000\000\000\000@\331\377\377\377\177\000\000[\346\257\000\000\000\000\000 ܆\001\000\000\000\000\000܆\001\023\000\000\000<\305\002\001\000\000\000\000\030\305\002\001\000\000\000"
#3 0x0000000000af1ac7 in my_write (Filedes=19, Buffer=0x1964d00 "/opt/mysql/datadir/mysql-bin.000003\n", Count=36, MyFlags=52) at /root/mysql-5.6.24/mysys/my_write.c:89
        writtenbytes = 18446744073709551615
        sum_written = 0
        errors = 0
        initial_count = 36
        _db_stack_frame_ = {func = 0x1027c7b "my_b_flush_io_cache", file = 0x1027708 "/root/mysql-5.6.24/mysys/mf_iocache.c", level = 2147483653, prev = 0x7fffffffdb10}

From here, developers should be able to see the exact line of code for further examination.
For further reading (to see the full “bt full” output), refer to the related bug report: #76852
It has already been verified.

The post Using GDB, investigating segmentation fault in MySQL appeared first on Azerbaijan MySQL UG.



WebScaleSQL builds available for Debian 8 and Ubuntu 15.04

After many months of heavy development and testing, Debian 8 (Jessie) and Ubuntu 15.04 (Vivid Vervet) were released recently.

For those who have already upgraded their systems and/or plan to do so in the near future, we have prepared builds of WebScaleSQL :-)

You can also install it using the PSCE repository.

 



Log Buffer #420: A Carnival of the Vanities for DBAs

This Log Buffer Edition brings a few very insightful blog posts from the arenas of Oracle, SQL Server and MySQL.

Oracle:

  • How to Increase Performance With Business Events in Fusion Applications
  • Advanced Oracle Troubleshooting Guide – Part 12: control file parallel reads causing enq: SQ – contention waits?
  • Changing REVERSE Transformations in Oracle Data Miner
  • Refresh Multiple Materialized Views in One Go: No Data Found
  • Error deploying Oracle Composite with wrong encoding wsdl

SQL Server:

  • How to Recover a SQL Server Login Password
  • Understanding Cross-Database Transactions in SQL Server
  • Adding Slicers to a Reporting Services Report
  • Continue a Foreach loop after an error in a SQL Server Integration Services package
  • Automating Image-Based Deployment of SQL Server on Azure IaaS VMs – Preparing OS Image

MySQL:

  • What Should I Monitor, and How Should I Do It?
  • How will MySQL act if there is no space left on the hard disk? To answer this question, let’s test it.
  • Configuring PAM Authentication and User Mapping with MariaDB
  • MySQL Enterprise Database Firewall — Control and Monitor SQL Statement Executions
  • MariaDB : Bug when add index on Partition table


Indexing 101: Optimizing MySQL queries on a single table

I have recently seen several cases when performance for MySQL queries on a single table was terrible. The reason was simple: the wrong indexes were added and so the execution plan was poor. Here are guidelines to help you optimize various kinds of single-table queries.

Disclaimer: I will be presenting general guidelines and I do not intend to cover all scenarios. I am pretty confident that you can find examples where what I am writing does not work, but I am also confident that it will help you most of the time. Also, to keep things simple, I will not discuss features you can find in MySQL 5.6+, like Index Condition Pushdown. Be aware that such features can actually make a significant difference in query response time (for good or for bad).

What an index can do for you

An index can perform up to three actions: filter, sort/group and cover. While the first two actions are self-explanatory, not everyone may know what a ‘covering index’ is. It is actually quite simple. The general workflow for a basic query is:
1. Use an index to find matching records and get pointers to the data.
2. Use the pointers to fetch the corresponding records.
3. Return the records.

When a covering index can be used, the index already contains all fields requested in the query, so step #2 can be skipped and the workflow becomes:
1. Use an index to find matching records.
2. Return the records.

In many cases, indexes are small and can fit in memory while the data is large and does not fit in memory: by using a covering index, you can avoid lots of disk operations and performance can be orders of magnitude better.
Let’s now look at different common scenarios.

Single equality

This is the most basic scenario:

SELECT * FROM t WHERE c = 100

The idea is of course to add an index on (c). However, note that if the criterion is not selective enough, the optimizer may choose to perform a full table scan instead, which will likely be more efficient.
Also note that a frequent variation of this query is when you only select a small subset of fields instead of all fields:

SELECT c1, c2 FROM t WHERE c = 100

Here it could make sense to create an index on (c, c1, c2) because it will be a covering index. Do not create an index on (c1, c2, c)! It will still be covering but it will not be usable for filtering (remember that you can only use a left-most prefix of an index to filter).
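As a quick illustration of this covering-and-filtering point, here is a sketch using Python’s built-in SQLite as a stand-in for MySQL (only the concept carries over, not the exact optimizer behavior; the table and index names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (c INT, c1 INT, c2 INT)")
# Index on (c, c1, c2): c is the left-most column, so the index can
# filter, and it holds every column the query touches, so it covers.
con.execute("CREATE INDEX idx_c_c1_c2 ON t (c, c1, c2)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT c1, c2 FROM t WHERE c = 100"
).fetchall()
# The plan reports a covering-index search on c; an index built as
# (c1, c2, c) would still cover, but could only be scanned, not searched.
print(plan[0][-1])
```

In MySQL you would check the same thing with EXPLAIN: the Extra column shows “Using index” when the chosen index covers the query.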

Multiple equalities

SELECT * FROM t WHERE c = 100 and d = 'xyz'

It is also very easy to optimize: just add an index on (c, d) or (d, c).

The main mistake here is to add 2 indexes: one on (c) and one on (d). Granted, MySQL is able to use both indexes with the index_merge algorithm, but it is almost always a very bad option.

Equality and inequality

SELECT * FROM t WHERE c > 100 and d = 'xyz'

Here we must be careful: as soon as we use a column with an inequality, no further columns of the index can be used for filtering.

Therefore if we create an index on (d, c), we will be able to filter on both c and d, which is good.
But if we create an index on (c, d), we will only filter on c, which is less efficient.

So unlike the situation where you have only equalities, the order of columns matters when inequalities are used.
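The column-order effect is easy to see in a query plan. Another sketch using Python’s built-in SQLite as a stand-in for MySQL (column and index names are illustrative; in MySQL, key_len in EXPLAIN tells the same story):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (b INT, c INT, d TEXT)")
con.execute("CREATE INDEX idx_dc ON t (d, c)")  # equality column first
con.execute("CREATE INDEX idx_cd ON t (c, d)")  # inequality column first

# With (d, c) the plan uses both columns: equality on d, then range on c.
plan_dc = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t INDEXED BY idx_dc "
    "WHERE c > 100 AND d = 'xyz'"
).fetchall()
print(plan_dc[0][-1])

# With (c, d) only the leading range on c is usable for the index search.
plan_cd = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t INDEXED BY idx_cd "
    "WHERE c > 100 AND d = 'xyz'"
).fetchall()
print(plan_cd[0][-1])
```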

Multiple inequalities

SELECT * FROM t WHERE c > 100 and b < 10 and d = 'xyz'

As we have 2 inequalities, we already know that we will not be able to filter on both conditions (*). So we have to make a decision: will we filter on (d, b) or on (d, c)?

It is not possible to tell which option is better without looking at the data: simply choose the column where the inequality is the most selective. The main point is that you must put the column(s) with an equality first.

(*) Actually there is a way to ‘filter’ on both inequalites: partition on b and add an index on (d, c) or partition on c and add an index on (d, b). The details are out of the scope of this post but it might be an option for some situations.

Equalities and sort

SELECT * FROM t WHERE c = 100 and d = 'xyz' ORDER BY b

As mentioned in the first paragraph, an index can filter and sort so this query is easy to optimize. However like for inequalities, we must carefully choose the order of the columns in the index: the rule is that we will filter first, and then sort.

With that in mind, it is easy to know that (c, d, b) or (d, c, b) will be good indexes while (b, c, d) or (b, d, c) are not as good (they will sort but not filter).

And if we have:

SELECT c1, c2 FROM t WHERE c = 100 and d = 'xyz' ORDER BY b

We can create a super efficient index that will filter, sort and be covering: (c, d, b, c1, c2).
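Whether an index also handles the sort shows up in the plan as the absence of a separate sort step. A sketch with Python’s built-in SQLite standing in for MySQL (in MySQL you would instead check that “Using filesort” is absent from EXPLAIN’s Extra column):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (b INT, c INT, d TEXT, c1 INT, c2 INT)")
con.execute("CREATE INDEX idx ON t (c, d, b, c1, c2)")

# Filter on the equality columns (c, d), rows then come back already
# ordered by b, and every selected column is in the index.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT c1, c2 FROM t "
    "WHERE c = 100 AND d = 'xyz' ORDER BY b"
).fetchall()
print(plan)

# No "USE TEMP B-TREE FOR ORDER BY" row means no extra sort was needed.
assert not any("TEMP B-TREE" in row[-1] for row in plan)
```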

Inequality and sort

We have 2 main variations here. The first one is:

SELECT * FROM t WHERE c > 100 and d = 'xyz' ORDER BY b

Two options look reasonable in this case:
1. Filter on d and sort by b, with an index on (d, b).
2. Filter on both d and c, with an index on (d, c), sorting the result afterwards.

Which strategy is more efficient? It will depend on your data, so you will have to experiment.

The second variation is:

SELECT * FROM t WHERE c > 100 ORDER BY b

This time we have no equality so we have to choose between filtering and sorting. Most likely you will choose filtering.

Conclusion

Not all cases have been covered in this post but you can already see that in some cases you will create poor MySQL indexes if you are not careful. In a future post, I will present a case that can look confusing at first sight but which is easy to understand if you already know everything mentioned here.

The post Indexing 101: Optimizing MySQL queries on a single table appeared first on MySQL Performance Blog.



Deep Dive Into How to Monitor Galera Cluster for MySQL, MariaDB & Percona XtraDB - Webinar Replay

Thanks to everyone who attended and participated in this week’s webinar on 'How to Monitor Galera Cluster'. If you missed the session or would like to watch the webinar again and browse through the slides, they are now available online.

Our speaker this time was Krzysztof Książek, Senior Support Engineer, Severalnines.

Watch the replay

Deep Dive Into How To Monitor MySQL or MariaDB Galera Cluster / Percona XtraDB Cluster - Replay from Severalnines AB

 

Read the slides

Deep Dive Into How To Monitor MySQL or MariaDB Galera Cluster / Percona XtraDB Cluster from Severalnines AB

 

Krzysztof provided a deep-dive session on what to monitor in Galera Cluster for MySQL & MariaDB. Krzysztof is a MySQL DBA with experience in managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.

Amongst other things, Krzysztof discussed why having a good monitoring system is a must, covering the following topics:

Galera monitoring

  • Cluster status
  • Flow control

Host metrics and their impact on MySQL

  • CPU
  • Memory
  • I/O

InnoDB metrics

  • CPU-related
  • I/O-related 

If you’re in Operations and your job is to monitor the health of MySQL/MariaDB Galera Cluster or Percona XtraDB Cluster, then this webinar replay is for you!


MySQL Workbench 6.3.3 GA has been released

The MySQL developer tools team is pleased to announce 6.3.3 as the GA release of MySQL Workbench 6.3.

For the full list of changes in this revision, visit

http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-3.html

For detailed information about the new features, see What’s
New in MySQL Workbench 6.3
http://dev.mysql.com/doc/workbench/en/wb-what-is-new-63.html

For discussion, join the MySQL Workbench Forums:
http://forums.mysql.com/index.php?151

Download MySQL Workbench 6.3.3 GA now, for Windows, Mac OS X 10.7+,
Oracle Linux 6 and 7, Fedora 20 and Fedora 21, Ubuntu 14.04 and Ubuntu
14.10 or sources, from:

http://dev.mysql.com/downloads/tools/workbench/

Enjoy !




Testing “disk full” conditions in MySQL

How will MySQL act if there is no space left on the hard disk?
To answer this question, let’s test it:

Our first test is with MySQL 5.6.24-debug with the binary log disabled.
Trying to import a huge dump, after a while it says the table is full:

Query OK, 12725 rows affected (2.46 sec)
Records: 12725 Duplicates: 0 Warnings: 0
Query OK, 12724 rows affected (2.40 sec)
Records: 12724 Duplicates: 0 Warnings: 0
Query OK, 12726 rows affected (2.53 sec)
Records: 12726 Duplicates: 0 Warnings: 0
ERROR 1114 (HY000): The table 'sales' is full
ERROR 1114 (HY000): The table 'sales' is full
ERROR 1114 (HY000): The table 'sales' is full

In the error log you will see something like:

[root@localhost mysql]# tail -f /opt/mysql/datadir/error.err
Version: '5.6.24-debug' socket: '/opt/mysql/datadir/mysqld-new.sock' port: 3307 Shahriyar Rzayev's MySQL
2015-04-24 03:56:09 7fabeffff700 InnoDB: Error: Write to file ./sales2/sales.ibd failed at offset 34603008.
InnoDB: 1048576 bytes should have been written, only 1011712 were written.
InnoDB: Operating system error number 11.
InnoDB: Check that your OS and file system support files of this size.
InnoDB: Check also that the disk is not full or a disk quota exceeded.
InnoDB: Error number 11 means 'Resource temporarily unavailable'.
InnoDB: Some operating system error numbers are described at
2015-04-24 03:56:09 21838 [ERROR] /opt/mysql/bin/mysqld: The table 'sales' is full
2015-04-24 03:56:12 21838 [ERROR] /opt/mysql/bin/mysqld: The table 'sales' is full
2015-04-24 03:56:15 21838 [ERROR] /opt/mysql/bin/mysqld: The table 'sales' is full
2015-04-24 03:56:19 21838 [ERROR] /opt/mysql/bin/mysqld: The table 'sales' is full
2015-04-24 03:56:20 21838 [ERROR] /opt/mysql/bin/mysqld: The table 'sales' is full

At this point we see “Operating system error number 11.” and OS disk usage is:

[root@localhost ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root  6.7G  6.3G   24M 100% /
tmpfs                         246M     0  246M   0% /dev/shm
/dev/sda1                     485M   55M  405M  12% /boot

If we continue our import process with another database, the error changes:

mysql> create database sales3;
Query OK, 1 row affected, 1 warning (0.08 sec)
mysql> use sales3;
Database changed
ERROR 3 (HY000): Error writing file './sales3/Product_Codes.frm' (Errcode: 28 - No space left on device)
ERROR 1146 (42S02): Table 'sales3.Product_Codes' doesn't exist
ERROR 1146 (42S02): Table 'sales3.Product_Codes' doesn't exist
Query OK, 0 rows affected (0.00 sec)
ERROR 1146 (42S02): Table 'sales3.Product_Codes' doesn't exist
Query OK, 0 rows affected (0.00 sec)
ERROR 3 (HY000): Error writing file './sales3/account_balance.frm' (Errcode: 28 - No space left on device)
ERROR 1146 (42S02): Table 'sales3.account_balance' doesn't exist
ERROR 1146 (42S02): Table 'sales3.account_balance' doesn't exist
ERROR 1146 (42S02): Table 'sales3.account_balance' doesn't exist

Disk usage is:

[root@localhost ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root  6.7G  6.3G   16M 100% /
tmpfs                         246M     0  246M   0% /dev/shm
/dev/sda1                     485M   55M  405M  12% /boot

Of course there will be NO new entries in the error log, because the disk is full.
So let’s change the error log path to another directory and start from there:

[root@localhost error_log_dir]# setenforce 0 (use only in test environment)
[root@localhost error_log_dir]# chown mysql:mysql /home/error_log_dir/
[root@localhost mysql]# /opt/mysql/bin/mysqld_safe --defaults-file=/opt/mysql/my.cnf --user=mysql --datadir=/opt/mysql/datadir --socket=/opt/mysql/datadir/mysqld-new.sock --pid-file=/opt/mysql/datadir/mysqld-new.pid --port=3307 --log-error=/home/error_log_dir/error.err &
[root@localhost mysql]# 150424 05:03:06 mysqld_safe Logging to '/home/error_log_dir/error.err'.
150424 05:03:06 mysqld_safe Starting mysqld daemon with databases from /opt/mysql/datadir
150424 05:03:13 mysqld_safe Number of processes running now: 0
150424 05:03:13 mysqld_safe mysqld restarted

It will restart continuously, because of the related “Errcode: 28 - No space left on device” error:

2015-04-24 05:03:36 22371 [ERROR] /opt/mysql/bin/mysqld: Error writing file '/opt/mysql/datadir/mysqld-new.pid' (Errcode: 28 - No space left on device)
2015-04-24 05:03:36 22371 [ERROR] Can't start server: can't create PID file: No space left on device

If we change the PID file path, it will start as usual:

[root@localhost mysql]# /opt/mysql/bin/mysqld_safe --defaults-file=/opt/mysql/my.cnf --user=mysql --datadir=/opt/mysql/datadir --socket=/opt/mysql/datadir/mysqld-new.sock --pid-file=/home/error_log_dir/mysqld-new.pid --port=3307 --log-error=/home/error_log_dir/error.err &

Another thing we should note: after getting “Errcode: 28 - No space left on device”, if you try to create a new database:

mysql> create database sales3;
Query OK, 1 row affected, 1 warning (0.12 sec)

mysql> show warnings;
+-------+------+------------------------------------------------------------------------------+
| Level | Code | Message                                                                      |
+-------+------+------------------------------------------------------------------------------+
| Error |    3 | Error writing file './sales3/db.opt' (Errcode: 28 - No space left on device) |
+-------+------+------------------------------------------------------------------------------+
1 row in set (0.00 sec)

The db.opt file will be created as an empty file.

If you want to create new table:

mysql> create table t1(id int);
ERROR 3 (HY000): Error writing file './sales3/t1.frm' (Errcode: 28 - No space left on device)

Now let’s enable the binary log and then try to import the dump.
Related to binary log errors due to a full disk, there was a bug fixed in MySQL 5.6.23: #73365

And another one is still waiting as OPEN: #72437
Where I got an error:
[ERROR] Error in Log_event::read_log_event(): ‘read error’, data_len: 2070, event_type: 29
And an interesting warning:
[Warning] Error reading GTIDs from binary log: -1

To activate the binary log and GTID, add the following to the my.cnf file:

log_bin = /opt/mysql/datadir/mysql-bin
log_bin_index = /opt/mysql/datadir/mysql-bin
sync_binlog = 1
binlog_format = row
gtid-mode = on
log_slave_updates = 1
enforce-gtid-consistency = true

Then drop all previously imported databases and try to import the dump again.
You will get a new problem, which I reported while writing this article: #76825
To reproduce this report, read the “How to repeat” section.

Another interesting thing to note: after hitting the full-disk error, if we try to create a view, we hit another reported issue: #76827 (I disabled GTID/binary log while testing views).

That’s all for the moment.
I will update this article whenever I find related issues.
Thank you for reading.

The post Testing “disk full” conditions in MySQL appeared first on Azerbaijan MySQL UG.



Happy to see this MySQL JSON momentum !

Last week at Percona Live, Facebook presented Docstore for the first time, a native JSON implementation in MySQL. Oracle also presented their MySQL 5.7 lab release that includes the implementation of a native JSON type. This is an important move, as MySQL was behind other RDBMSs regarding JSON (PostgreSQL already [...]

linux tools part 10– Monitoring process iostat

We can use the iopp tool to monitor per-process I/O performance.

Download it from GitHub: https://github.com/markwkm/iopp

pid: The process id.
rchar: The number of bytes which this task has caused to be read from storage.
wchar: The number of bytes which this task has caused, or shall cause, to be written to disk.
syscr: Count of the number of read I/O operations.
syscw: Count of the number of write I/O operations.
rbytes rkb rmb reads: Count of the number of bytes which this process really did cause to be fetched from the storage layer.
wbytes wkb wmb writes: Count of the number of bytes which this process really did cause to be sent to the storage layer.
cwbytes cwkb cwmb cwrites: The number of bytes which this process caused to not happen, by truncating pagecache.
command: Filename of the executable.
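These counters are the per-process I/O statistics the Linux kernel exposes in /proc/&lt;pid&gt;/io, which is where iopp reads them from. A quick way to sanity-check them yourself (a Python sketch, assuming Linux with task I/O accounting enabled):

```python
# Minimal sketch: read the same per-process I/O counters iopp uses,
# straight from /proc/<pid>/io (assumes Linux).
def proc_io(pid="self"):
    """Parse /proc/<pid>/io into a dict of counter name -> value."""
    counters = {}
    with open(f"/proc/{pid}/io") as f:
        for line in f:
            key, _, value = line.partition(":")
            counters[key.strip()] = int(value)
    return counters

before = proc_io()
# Cause some read I/O, then re-read the counters.
with open("/proc/self/stat") as f:
    f.read()
after = proc_io()

print(sorted(after))  # the available counter names (rchar, wchar, syscr, ...)
print(after["rchar"] - before["rchar"])
```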

There is one small bug: you’ll get different output from -i -k than you will from -i or from -i -m.

#iopp -i -k 2

#grep ^ /sys/class/block/*/dev |grep 224

/sys/class/block/sdo/dev:8:224

#lsof | grep "8,224" | more

bash 5265 root cwd DIR 8,224 57151488 427819009 /data/yp900
redis-ser 19843 root cwd DIR 8,224 4096 469368833 /data/redis6379
sh 27579 root cwd DIR 8,224 4096 579731457 /data/mysql3306
mysqld 28324 mysql cwd DIR 8,224 4096 579731459 /data/mysql3306/data
mysqld 28324 mysql 1w REG 8,224 1327901 579743077 /data/mysql3306/mysql-error.log
mysqld 28324 mysql 2w REG 8,224 1327901 579743077 /data/mysql3306/mysql-error.log
mysqld 28324 mysql 3u REG 8,224 23520 579743078 /data/mysql3306/binlog/mysql-bin.index
mysqld 28324 mysql 4uW REG 8,224 1073741824 579743083 /data/mysql3306/data/ibdata0
mysqld 28324 mysql 5u REG 8,224 0 579731461 /data/mysql3306/mysql-tmpdir/ib1io6G9 (deleted)
mysqld 28324 mysql 6u REG 8,224 0 579731462 /data/mysql3306/mysql-tmpdir/ibnkifvg (deleted)
mysqld 28324 mysql 7u REG 8,224 0 579731463 /data/mysql3306/mysql-tmpdir/ibHWjojn (deleted)
mysqld 28324 mysql 8u REG 8,224 0 579731464 /data/mysql3306/mysql-tmpdir/ib1o8yHC (deleted)
mysqld 28324 mysql 9uW REG 8,224 1073741824 579743084 /data/mysql3306/data/ibdata1
mysqld 28324 mysql 10uW REG 8,224 1073741824 579743085 /data/mysql3306/data/ibdata2
mysqld 28324 mysql 11uW REG 8,224 27850178560 579743086 /data/mysql3306/data/ibdata3
mysqld 28324 mysql 12uW REG 8,224 536870912 579743087 /data/mysql3306/data/ib_logfile0
mysqld 28324 mysql 13uW REG 8,224 536870912 579743088 /data/mysql3306/data/ib_logfile1
mysqld 28324 mysql 14uW REG 8,224 2281701376 579733990 /data/mysql3306/data/mit/agent_info#P#p20150404.ibd
mysqld 28324 mysql 15u REG 8,224 0 579731465 /data/mysql3306/mysql-tmpdir/ibBZSSPL (deleted)

A nice, smart tool with a few little bugs.



MySQL OCP Exams

Planning out my year, I decided to take the Oracle OCP and MySQL OCP exams. I checked for review books and was pleasantly surprised to find the soon-to-be-released OCP MySQL Database Administrator Exam Guide (Exam 1Z0-883). However, I noticed that the book was actually prepared for the obsolete and discontinued exams 1Z0-870, 1Z0-873, and 1Z0-874.

There isn’t an alternative review book for the OCP MySQL 5.6 Developer or Database Administrator exams. The question I have is simple: “How relevant is this book, given that it was prepared for the older exams?” There isn’t a table of contents published on the Amazon.com site; if there were, it could help me determine how close the book’s content is to the new exams.

To figure out the value of the book as a study guide, I’ve reviewed the current Oracle MySQL training objectives (listed below). The new MySQL OCP Developer and Administrator exams have the following descriptions and objectives:

  • MySQL 5.6 Developer 1Z0-882. Oracle provides the following outline for their MySQL for Developer (Ed 3) training course:

    Course Objectives

    • Describe the MySQL client/server architecture
    • Use MySQL client programs and common options
    • Program MySQL applications with Java and PHP connectors
    • Use a “NoSQL” approach to store and retrieve data
    • Design efficient tables
    • Create and delete database objects
    • Use expressions in SQL statements
    • Examine database metadata
    • Use SQL statements to modify table data
    • Maintain database integrity with transactions
    • Write multiple table queries
    • Create “virtual tables” containing specific data
    • Create user-defined variables, prepared statements, and stored routines
    • Create and manage triggers
    • Identify and deal with errors and exceptions in client programs
    • Write queries that are fast and effective, even under heavy loads
  • MySQL 5.6 Database Administrator 1Z0-883. Oracle provides the following outline for their MySQL for Database Administrators (Ed 3.1) training course:

    Course Objectives

    • Describe the MySQL Architecture
    • Install and Upgrade MySQL
    • Use the INFORMATION_SCHEMA database to access metadata
    • Perform the MySQL start and shutdown operations
    • Configure MySQL server options at runtime
    • Use available tools including MySQL Workbench
    • Evaluate data types and character sets for performance issues
    • Understand data locking in MySQL
    • Understand the use of the InnoDB storage engine with MySQL
    • Maintain integrity of a MySQL installation
    • Use triggers for administration tasks
    • Use Enterprise Audit and Pluggable Authentication
    • Configure advanced replication topologies for high availability
    • Describe introductory performance tuning techniques
    • Perform backup and restore operations
    • Automate administrative tasks with scheduled events

    As always, I hope this helps those who read it; in this case, I hope it helps you choose effective preparation resources for the MySQL 5.6 OCP exams.



Configuring PAM Authentication and User Mapping with MariaDB

Thu, 2015-04-23 20:17, by geoff_montee_g

User accounts in MariaDB have traditionally been completely separate from operating system accounts. However, MariaDB has included a PAM authentication plugin since version 5.2.10. With this plugin, DBAs can configure MariaDB user accounts to authenticate via PAM, allowing users to use their Linux username and password to log into the MariaDB server.

However, even when using the PAM authentication plugin, the user account still needs to exist in MariaDB, and the account needs to have privileges. Creating these MariaDB accounts and making sure the privileges are correct can be a lot of work. To decrease the amount of work involved, some users would like to be able to map a Linux user to a different MariaDB user. For example, let's say that "alice" and "bob" are both DBAs. It would be nice if each of them could log into MariaDB with their own Linux username and password, while MariaDB sees both of them as the same "dba" user. That way, there is only one MariaDB account to keep track of.
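The desired mapping is easy to picture as a simple lookup table. Here is a throwaway shell sketch of the idea (hypothetical and purely illustrative; the real mapping is performed by the pam_user_map PAM module set up below):

```shell
# A user_map.conf-style file: "linux_login: mariadb_user" per line
cat > /tmp/user_map.conf <<'EOF'
alice: dba
bob: dba
EOF

# Look up the MariaDB user a given Linux login should be mapped to
map_user() {
  awk -F': *' -v u="$1" '$1 == u { print $2; exit }' /tmp/user_map.conf
}

map_user alice   # prints: dba
map_user bob     # prints: dba
```

Both "alice" and "bob" resolve to the single "dba" account, which is exactly the behavior we want from the PAM plugin.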

Luckily, both PAM and MariaDB support exactly that kind of use case. In this blog post, I will walk you through how to set up this kind of authentication.

Set up the user mapper PAM plugin

MariaDB's git repository has a simple user mapper PAM plugin. Downloading, compiling, and installing it is simple:

wget https://raw.githubusercontent.com/MariaDB/server/10.1/plugin/auth_pam/mapper/pam_user_map.c
gcc pam_user_map.c -shared -lpam -fPIC -o pam_user_map.so
sudo install --mode=0755 pam_user_map.so /lib64/security/

Set up the PAM policy

We want to configure the PAM policy so that:

  • Users authenticate with their Linux user names and passwords (i.e. use the pam_unix.so PAM module);
  • Login attempts go into the system's audit logs;
  • "Real" user names will be mapped to MariaDB user names (i.e. use the pam_user_map.so PAM module).

We can create a PAM policy to do all of the above with:

sudo tee /etc/pam.d/mysql <<EOF
auth required pam_unix.so audit
account required pam_unix.so audit
auth required pam_user_map.so
EOF

Create some test accounts

Let's create some Linux accounts to test things out:

# generic "dba" account to map other users to
sudo useradd dba
# a "real" account for Alice
sudo useradd alice
sudo passwd alice
# a "real" account for Bob
sudo useradd bob
sudo passwd bob

Configuring the user account mapping

By default, the pam_user_map.so module looks at /etc/security/user_map.conf for the mappings. Let's map both "alice" and "bob" to the "dba" user:

sudo tee /etc/security/user_map.conf <<EOF
alice: dba
bob: dba
EOF

Turn off SELinux

Even with SELinux set to permissive mode, you can still run into issues while trying to use MariaDB and PAM together. You may want to disable SELinux entirely. Otherwise, you could have messages like this show up in your system logs:

Apr 14 12:37:45 localhost setroubleshoot: SELinux is preventing /usr/sbin/mysqld from execute access on the file . For complete SELinux messages. run sealert -l 807c6372-91d9-4445-b944-79113756d6c2
Apr 14 12:37:45 localhost python: SELinux is preventing /usr/sbin/mysqld from execute access on the file .
***** Plugin catchall_labels (83.8 confidence) suggests *******************
If you want to allow mysqld to have execute access on the file
Then you need to change the label on $FIX_TARGET_PATH
Do
# semanage fcontext -a -t FILE_TYPE '$FIX_TARGET_PATH'
where FILE_TYPE is one of the following: abrt_helper_exec_t, bin_t, boot_t, etc_runtime_t, etc_t, ld_so_t, lib_t, mysqld_exec_t, prelink_exec_t, shell_exec_t, src_t, system_conf_t, system_db_t, textrel_shlib_t, usr_t.
Then execute:
restorecon -v '$FIX_TARGET_PATH'
***** Plugin catchall (17.1 confidence) suggests **************************
If you believe that mysqld should be allowed execute access on the file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# grep mysqld /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp
Apr 14 12:37:59 localhost setroubleshoot: Plugin Exception restorecon_source
Apr 14 12:37:59 localhost setroubleshoot: SELinux is preventing /usr/sbin/unix_chkpwd from execute access on the file . For complete SELinux messages. run sealert -l c56fe6e0-c78c-4bdb-a80f-27ef86a1ea85
Apr 14 12:37:59 localhost python: SELinux is preventing /usr/sbin/unix_chkpwd from execute access on the file .
***** Plugin catchall (100. confidence) suggests **************************
If you believe that unix_chkpwd should be allowed execute access on the file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# grep unix_chkpwd /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp

Open up access to /etc/shadow

The pam_unix.so PAM module usually uses the unix_chkpwd utility to handle the authentication. This utility requires read access to /etc/shadow, which is usually unreadable for security reasons. To get PAM authentication to work with MariaDB, you will probably have to allow the mysql user to read this file. This is very easy to do:

sudo groupadd shadow
sudo usermod -a -G shadow mysql
sudo chown root:shadow /etc/shadow
sudo chmod g+r /etc/shadow
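It may be worth confirming that these changes took effect before moving on. A quick sketch of such a check (pamtester is an optional third-party utility, not part of this setup, and is only useful here if it is installed):

```shell
# confirm the mysql user is now a member of the shadow group
id mysql

# confirm /etc/shadow is owned root:shadow and group-readable
ls -l /etc/shadow    # expect something like: -rw-r----- 1 root shadow ...

# if pamtester is available, exercise the "mysql" PAM service directly,
# without involving MariaDB at all
pamtester mysql alice authenticate
```

If pamtester reports success here but MariaDB logins still fail, the problem is on the MariaDB side rather than in the PAM stack.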

Of course, opening up access to /etc/shadow to some users is a security risk. However, if you try to use PAM together with MariaDB without opening up this access, you are likely to see messages like this in the system logs:

Apr 14 12:56:23 localhost unix_chkpwd[3332]: check pass; user unknown
Apr 14 12:56:23 localhost unix_chkpwd[3332]: password check failed for user (alice)
Apr 14 12:56:23 localhost mysqld: pam_unix(mysql:auth): authentication failure; logname= uid=991 euid=991 tty= ruser= rhost= user=alice

Set up everything in MariaDB

Finally, let's set up everything in MariaDB:

-- Install the plugin
INSTALL SONAME 'auth_pam';
-- Create the "dba" user
CREATE USER 'dba'@'%' IDENTIFIED BY 'strongpassword';
GRANT ALL PRIVILEGES ON *.* TO 'dba'@'%';
-- Create an anonymous catch-all user that will use the PAM plugin and the mysql policy
CREATE USER ''@'%' IDENTIFIED VIA pam USING 'mysql';
-- Allow the anonymous user to proxy as the dba user
GRANT PROXY ON 'dba'@'%' TO ''@'%';

Since we changed the mysql user's group membership, we also have to restart the MariaDB service:

sudo service mysql restart

Try it out

Now, let's try it out. Even though we log in as "alice", our MariaDB privileges are actually those of the "dba" user:

[gmontee@localhost ~]$ mysql -u alice -h 127.0.0.1
[mariadb] Password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.0.17-MariaDB-log MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SELECT USER(), CURRENT_USER();
+-----------------+----------------+
| USER()          | CURRENT_USER() |
+-----------------+----------------+
| alice@localhost | dba@%          |
+-----------------+----------------+
1 row in set (0.00 sec)

Thoughts?

Is anyone using a setup like this? If so, how does it work for you? Can you think of any ways to improve this functionality?

Tags: DBA, Howto, Linux, Plugins

About the Author

Geoff Montee is a Support Engineer with MariaDB. He has previous experience as a Database Administrator/Software Engineer with the U.S. Government, and as a System Administrator and Software Developer at Florida State University.



High Performance SQL and NoSQL Presentation


Download PDF Presentation

In this presentation on High Performance SQL and NoSQL, given to the New York Web Performance Meetup, I use the simple analogy of applying the same techniques used in Web Performance Optimization (WPO) to your data path (SQL or NoSQL) performance.

This presentation provides examples for 5 of the most common WPO optimizations and how to apply them to your application's access to data:

  1. Gzip assets
  2. Make fewer HTTP requests
  3. Add expires headers
  4. Use a CDN
  5. Optimize images
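The first item on that list translates directly to the data path: the fewer bytes that travel between client and server, the faster the response. A tiny shell demonstration of the gzip effect on a repetitive "asset" (file names here are arbitrary placeholders):

```shell
# Build a deliberately repetitive file, the way many assets and
# result sets are repetitive in practice
printf 'SELECT name FROM users;%.0s' $(seq 1 100) > /tmp/asset.txt

# Compress it and compare the byte counts
gzip -cf /tmp/asset.txt > /tmp/asset.txt.gz
orig=$(wc -c < /tmp/asset.txt)
comp=$(wc -c < /tmp/asset.txt.gz)
echo "original: $orig bytes, gzipped: $comp bytes"
```

The same principle motivates compressing database traffic or payloads: redundancy that never crosses the wire costs nothing.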

This presentation was recorded. A video link will be available at a later time.

Thanks to Grovo for hosting the event.



Self-Criticism and Slides of my PLMCE Talks

The links to the slides of my talks can be found at the end of this post, but first, let me share some thoughts about PLMCE.

Talking with people, I was surprised to be criticized for presenting only the good sides of my solution without giving credit to the good sides of the alternative solutions.  More than surprised, I was also a little shocked, as I want to be perceived as being as objective as possible.  Let me try to fix that:
  • I am not a GTID and log-slaves detractor, I am a simplicity lover.
  • I actually like GTIDs and I have some use-cases for them; one of them is described in the MariaDB Knowledge Base (search for last_gtid on that page).
  • But I think that using GTIDs in the slave protocol is complex and that a simpler solution exists (Binlog Servers).  As a simplicity embracer, I chose the latter (I realize that you might not agree with my opinion of "simpler", but this is a different discussion).
  • Moreover, the GTID negotiation consumes more resources on the master than the file/position negotiation (this is a fact, not an opinion), and with tens (and sometimes hundreds) of slaves replicating from the same master at my current job, a slave reconnection storm with automatic GTID slave positioning is a scenario that scares me.
About log-slave-updates, I see the following use-cases (good or bad, we will see later):
  1. High Availability (when combined with GTIDs),
  2. Intermediate Master,
  3. External Trigger with Sequencing,
  4. Storing timing information when merging write streams,
  5. Converting SBR to RBR,
  6. Testing new MySQL versions.
With reference to use-cases #1 and #2 above (and IMHO), they should be replaced by Binlog Servers to avoid complexity and the consequences of binary log morphing (I talked about binary log morphing in my Lightning Talk; I will come back to that notion in the future, but not in this post).  Moreover, in addition to the risk of morphing, intermediate masters introduce lag and operational complexity (dealing with their failures and with rogue transactions) and thus should be avoided (IMHO).
With reference to use-case #3 (external trigger with sequencing), when needing external triggers and if you can, you should use the binary logs from the master (directly connected to the master or via Binlog Servers).  Actually, a MySQL slave is a specific use-case of external triggers based on the binary logs from the master.  However, if you want to trigger an event after a commit on a slave (which I call sequencing above), log-slave-updates is your best choice so far (very useful for cache invalidation as an example).
With reference to use-case #4, it deserves a post on its own (I might write it when I have time) and could be a valid use-case for log-slave-updates (in very very specific situations IMHO).
With reference to use-case #5 (converting SBR to RBR), log-slave-updates is the only way to achieve that with the current implementation but I see another way coming in the future, I might blog about it.

Finally, with reference to use-case #6 (testing new MySQL versions), it will always be useful but it is not exactly a production use-case.

I need to thank a few people before closing; they will recognize themselves:
  • Thanks to the people who came to me after my talk and shared their perception of me as a GTID and log-slave-updates detractor: I should have presented things differently, and I will in the future.
  • Thanks to the person who came up with the term binary log morphing while he and I were brainstorming about this subject: I am looking forward to discussing it with you again.
  • Thanks to the person who reminded me of the importance of the trigger-with-sequencing use-case for log-slave-updates: I completely forgot this one, and it is a very legitimate use-case.
  • Thanks to the person who opened my eyes to the importance of timing in write stream merging; we have an expression in French for that which would translate into English as "I understood quickly, but you had to spend a long time explaining", and I am grateful for it.
Finally, feel free to add a comment below about Binlog Server, my talks and/or this post.  You can also contact me directly via LinkedIn, Facebook or Twitter (my mail is not here for obvious reasons but you can find it in my slides).
You will find the slides of my talks at those links:
And the HOWTO about the MaxScale Binlog Server at those links:

Connector/J moves to Git

I’m pleased to announce that Connector/J has a new home.

Just as with several other MySQL products, Connector/J source code management has moved to Git and, pretty much as expected, to GitHub. Our reasoning is nothing more than listening to our users' demands and trying to follow the best trends and practices. There was nothing significantly wrong with Bazaar and Launchpad, as they served us well for the last seven years. It was just time to move on.

Rest assured, all will work as before, with no complications and no hassles. Our public GitHub repository will expose Connector/J source code as it is in the latest generally available (GA) release, as it has been for the last few years.

From now on you will find Connector/J source code on the well-known Git repository hosting service GitHub, at mysql-connector-j, under the umbrella of the MySQL organization. Other MySQL products will follow in time. This repository contains all history since Connector/J's early days, so nothing was lost. Those who are used to our old Launchpad repository will only have to change tools.

Hands-on

GitHub allows you to explore source code, project history, get to know about contributions and a lot more. But you probably already know all of this anyway.

So, to get the Connector/J source code you can just execute the following command in your preferred shell or command prompt:

$ git clone https://github.com/mysql/mysql-connector-j.git

You can also fetch the code through the SSH protocol, using Subversion, or simply download it as a zip file. Obviously, in order to execute git commands, you'll have to have Git installed on your computer. Just follow the official getting started guide if you don't have it yet.

Building MySQL Connector/J 5.1 from source

Some time ago I blogged about building Connector/J from source code. Nothing changes in that post, except for the replacement of Bazaar and Launchpad by Git and GitHub respectively, and the bzr command, which is replaced by the git command mentioned above.

We love to hear from you

As always, don’t forget to use our official channels to report your wishes and findings, or just to ask for help when you need it. We can be found at the MySQL Bugs database or the MySQL Connector/JDBC and Java forum. Thank you!

Welcome to Connector/J on Git!
Farewell Bazaar. Farewell Launchpad. Thank you for the 7 years of good service.



MySQL Enterprise Database Firewall — Control and Monitor SQL Statement Executions

As of MySQL 5.6.24, MySQL Enterprise Edition includes MySQL Enterprise Firewall, an application-level firewall (it runs within the mysql database process) that enables database administrators to permit or deny SQL statement execution based on matching against whitelists of accepted statement patterns. This helps harden MySQL Server against attacks such as SQL injection or attempts to exploit applications by using them outside of their legitimate query workload characteristics.

Each MySQL account registered with the firewall has its own whitelist of statement patterns (a tokenized representation of a SQL statement), enabling protection to be tailored per account. For a given account, the firewall can operate in recording or protecting mode, for training in the accepted statement patterns or protection against unacceptable statements. The diagram illustrates how the firewall processes incoming statements in each mode.

MySQL Enterprise Firewall Operation

(from https://dev.mysql.com/doc/refman/5.6/en/firewall.html)
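To see why a whitelist of patterns (rather than raw statements) works, note that literal values are replaced with placeholders, so two statements that differ only in their values match the same rule. The following shell function is only a crude approximation of that idea for illustration, not the firewall's actual tokenizer (the real normalization is done inside the server):

```shell
# Very rough approximation of statement normalization:
# replace quoted strings and bare numbers with '?'
normalize() {
  printf '%s' "$1" | sed -E "s/'[^']*'/?/g; s/\b[0-9]+\b/?/g"
}

normalize "select * from actor limit 2"
# differently-parameterized statements reduce to the same pattern:
normalize "select * from actor limit 10"
```

Both calls print `select * from actor limit ?`, which is the shape of the RULE entries we will see in the whitelist table later in this post.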

If you do not have a MySQL Enterprise Edition license, you may download a trial version of the software via Oracle eDelivery. The MySQL Firewall is included in the MySQL Product Pack, specifically for MySQL Database 5.6.24 or higher.

MySQL Enterprise Firewall has these components:

  • A server-side plugin named MYSQL_FIREWALL that examines SQL statements before they execute and, based on its in-memory cache, renders a decision whether to execute or reject each statement.
  • Server-side plugins named MYSQL_FIREWALL_USERS and MYSQL_FIREWALL_WHITELIST implement INFORMATION_SCHEMA tables that provide views into the firewall data cache.
  • System tables named firewall_users and firewall_whitelist in the mysql database provide persistent storage of firewall data.
  • A stored procedure named sp_set_firewall_mode() registers MySQL accounts with the firewall, establishes their operational mode, and manages transfer of firewall data between the cache and the underlying system tables.
  • A set of user-defined functions provides an SQL-level API for synchronizing the cache with the underlying system tables.
  • System variables enable firewall configuration and status variables provide runtime operational information.

(from https://dev.mysql.com/doc/refman/5.6/en/firewall-components.html)

Installing the Firewall

Installing the firewall is fairly easy. After you install MySQL version 5.6.24 or greater, you simply execute an SQL script that is located in the $MYSQL_HOME/share directory. There are two versions of the script, one for Linux and one for Windows (the firewall isn’t supported on the Mac yet).

The scripts are named win_install_firewall.sql for Windows and linux_install_firewall.sql for Linux. You may execute this script from the command line or via MySQL Workbench. For the command line, be sure you are in the directory where the script is located.

shell> mysql -u root -p mysql < win_install_firewall.sql
Enter password: (enter root password here)

The script creates the firewall tables, functions, and stored procedures, and installs the necessary plugins. The script contains the following:

# Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved.
# Install firewall tables
USE mysql;
CREATE TABLE IF NOT EXISTS mysql.firewall_whitelist(
    USERHOST VARCHAR(80) NOT NULL,
    RULE text NOT NULL) engine= MyISAM;
CREATE TABLE IF NOT EXISTS mysql.firewall_users(
    USERHOST VARCHAR(80) PRIMARY KEY,
    MODE ENUM ('OFF', 'RECORDING', 'PROTECTING', 'RESET') DEFAULT 'OFF') engine= MyISAM;
INSTALL PLUGIN mysql_firewall SONAME 'firewall.dll';
INSTALL PLUGIN mysql_firewall_whitelist SONAME 'firewall.dll';
INSTALL PLUGIN mysql_firewall_users SONAME 'firewall.dll';
CREATE FUNCTION set_firewall_mode RETURNS STRING SONAME 'firewall.dll';
CREATE FUNCTION normalize_statement RETURNS STRING SONAME 'firewall.dll';
CREATE AGGREGATE FUNCTION read_firewall_whitelist RETURNS STRING SONAME 'firewall.dll';
CREATE AGGREGATE FUNCTION read_firewall_users RETURNS STRING SONAME 'firewall.dll';
delimiter //
CREATE PROCEDURE sp_set_firewall_mode (IN arg_userhost VARCHAR(80), IN arg_mode varchar(12))
BEGIN
    IF arg_mode = "RECORDING" THEN
        SELECT read_firewall_whitelist(arg_userhost,FW.rule)
        FROM mysql.firewall_whitelist FW WHERE FW.userhost=arg_userhost;
    END IF;
    SELECT set_firewall_mode(arg_userhost, arg_mode);
    IF arg_mode = "RESET" THEN
        SET arg_mode = "OFF";
    END IF;
    INSERT IGNORE INTO mysql.firewall_users VALUES (arg_userhost, arg_mode);
    UPDATE mysql.firewall_users SET mode=arg_mode WHERE userhost = arg_userhost;
    IF arg_mode = "PROTECTING" OR arg_mode = "OFF" THEN
        DELETE FROM mysql.firewall_whitelist WHERE USERHOST = arg_userhost;
        INSERT INTO mysql.firewall_whitelist
        SELECT USERHOST,RULE FROM INFORMATION_SCHEMA.mysql_firewall_whitelist
        WHERE USERHOST=arg_userhost;
    END IF;
END //
delimiter ;

After you run the script, the firewall should be enabled. You may verify it by running this statement:

mysql> SHOW GLOBAL VARIABLES LIKE 'mysql_firewall%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| mysql_firewall_max_query_size | 4096  |
| mysql_firewall_mode           | ON    |
| mysql_firewall_trace          | OFF   |
+-------------------------------+-------+

Testing the Firewall

To test the firewall, you may use a current mysql user, but we are going to create a test user for this example – webuser@localhost. (The user probably doesn’t need all privileges, but for this example we will grant everything to this user)

CREATE USER 'webuser'@'localhost' IDENTIFIED BY 'Yobuddy!';
GRANT ALL PRIVILEGES ON *.* TO 'webuser'@'localhost' WITH GRANT OPTION;

OPTIONAL: For our test, we will be using the sakila schema provided by MySQL. You may download the sakila database schema (requires MySQL 5.0 or later) at http://dev.mysql.com/doc/index-other.html. If you don’t want to use the sakila database, you may use your own existing database or create a new database.

After downloading the sakila schema, you will have two files, named sakila-schema.sql and sakila-data.sql. Execute the sakila-schema.sql first, and then sakila-data.sql to populate the database with data. If you are using the command line, simply do the following: (substitute UserName for a mysql user name)

# mysql -uUserName -p < sakila-schema.sql
# mysql -uUserName -p < sakila-data.sql

After creating the sakila schema and importing the data, we now set the firewall to record those queries which we want to allow:

mysql> CALL `mysql`.`sp_set_firewall_mode`("webuser@localhost","RECORDING");
+-----------------------------------------------+
| read_firewall_whitelist(arg_userhost,FW.rule) |
+-----------------------------------------------+
| Imported users: 0  Imported rules: 0          |
+-----------------------------------------------+
1 row in set (0.14 sec)

+-------------------------------------------+
| set_firewall_mode(arg_userhost, arg_mode) |
+-------------------------------------------+
| OK                                        |
+-------------------------------------------+
1 row in set (0.22 sec)

Query OK, 5 rows affected (0.28 sec)

We can check to see the firewall mode via this statement, to be sure we are in the recording mode:

mysql> SELECT * FROM MYSQL.FIREWALL_USERS;
+-------------------+------------+
| USERHOST          | MODE       |
+-------------------+------------+
| webuser@localhost | RECORDING  |
+-------------------+------------+
1 row in set (0.02 sec)

Now that we have recording turned on, let’s run a few queries:

mysql> use sakila
Database changed
mysql> show tables;
+----------------------------+
| Tables_in_sakila           |
+----------------------------+
| actor                      |
| actor_info                 |
| address                    |
| category                   |
| city                       |
| country                    |
| customer                   |
| customer_list              |
| film                       |
| film_actor                 |
| film_category              |
| film_list                  |
| film_text                  |
| inventory                  |
| language                   |
| nicer_but_slower_film_list |
| payment                    |
| rental                     |
| sales_by_film_category     |
| sales_by_store             |
| staff                      |
| staff_list                 |
| store                      |
+----------------------------+
23 rows in set (0.00 sec)

mysql> select * from actor limit 2;
+----------+------------+-----------+---------------------+
| actor_id | first_name | last_name | last_update         |
+----------+------------+-----------+---------------------+
|        1 | PENELOPE   | GUINESS   | 2006-02-15 04:34:33 |
|        2 | NICK       | WAHLBERG  | 2006-02-15 04:34:33 |
+----------+------------+-----------+---------------------+
2 rows in set (0.13 sec)

mysql> select first_name, last_name from actor where first_name like 'T%';
+------------+-----------+
| first_name | last_name |
+------------+-----------+
| TIM        | HACKMAN   |
| TOM        | MCKELLEN  |
| TOM        | MIRANDA   |
| THORA      | TEMPLE    |
+------------+-----------+
4 rows in set (0.00 sec)

We turn off the recording by turning on the protection mode:

mysql> CALL `mysql`.`sp_set_firewall_mode`("webuser@localhost","PROTECTING");
+-------------------------------------------+
| set_firewall_mode(arg_userhost, arg_mode) |
+-------------------------------------------+
| OK                                        |
+-------------------------------------------+
1 row in set (0.00 sec)

We can check to see the firewall mode via this statement:

mysql> SELECT * FROM MYSQL.FIREWALL_USERS;
+-------------------+------------+
| USERHOST          | MODE       |
+-------------------+------------+
| webuser@localhost | PROTECTING |
+-------------------+------------+
1 row in set (0.02 sec)

And we can look at our whitelist of statements:

mysql> SELECT * FROM MYSQL.FIREWALL_WHITELIST;
+-------------------+-------------------------------------------------------------------+
| USERHOST          | RULE                                                              |
+-------------------+-------------------------------------------------------------------+
| webuser@localhost | SELECT * FROM actor LIMIT ?                                       |
| webuser@localhost | SELECT SCHEMA ( )                                                 |
| webuser@localhost | SELECT first_name , last_name FROM actor WHERE first_name LIKE ?  |
| webuser@localhost | SHOW TABLES                                                       |
+-------------------+-------------------------------------------------------------------+
4 rows in set (0.00 sec)

The firewall is now protecting against non-whitelisted queries. We can execute a couple of the queries we previously ran, which should be allowed by the firewall.

mysql> show tables;
+----------------------------+
| Tables_in_sakila           |
+----------------------------+
| actor                      |
| actor_info                 |
| address                    |
| category                   |
| city                       |
| country                    |
| customer                   |
| customer_list              |
| film                       |
| film_actor                 |
| film_category              |
| film_list                  |
| film_text                  |
| inventory                  |
| language                   |
| nicer_but_slower_film_list |
| payment                    |
| rental                     |
| sales_by_film_category     |
| sales_by_store             |
| staff                      |
| staff_list                 |
| store                      |
+----------------------------+
23 rows in set (0.01 sec)

Now we run two new queries, which should be blocked by the firewall.

mysql> select * from rental;
ERROR 1045 (42000): Firewall prevents statement
mysql> select * from staff;
ERROR 1045 (42000): Firewall prevents statement

The server will write an error message to the log for each statement that is rejected. Example:

2015-03-21T22:59:05.371772Z 14 [Note] Plugin MYSQL_FIREWALL reported: 'ACCESS DENIED for webuser@localhost. Reason: No match in whitelist. Statement: select * from rental '

You can use these log messages in your efforts to identify the source of attacks.

To see how much firewall activity you have, you may look at the status variables:

mysql> SHOW GLOBAL STATUS LIKE 'Firewall%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Firewall_access_denied  | 42    |
| Firewall_access_granted | 55    |
| Firewall_cached_entries | 78    |
+-------------------------+-------+

The variables indicate the number of statements rejected, accepted, and added to the cache, respectively.

The MySQL Enterprise Firewall Reference is found at https://dev.mysql.com/doc/refman/5.6/en/firewall-reference.html.


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn. Tony is the author of Twenty Forty-Four: The League of Patriots 

Visit http://2044thebook.com for more information.

MariaDB: Bug when adding an index on a partitioned table

Version: 5.5.5-10.0.10-MariaDB-log MariaDB Server

When using MariaDB as a multi-source slave DB, I met a strange problem. When creating an index on a very large partitioned table, MariaDB does a "copy to tmp table" on the slave side. It takes a long time, and it still had not finished after over 9 hours.

Primary: MySQL 5.6.16

Add an index on a partitioned table:

CREATE TABLE `track_raw_wap211_log` ( `table_id` varchar(100) DEFAULT NULL, `page_id` varchar(100) DEFAULT NULL, `banner_id` varchar(100) DEFAULT NULL, `button_id` varchar(100) DEFAULT NULL, `test_id` varchar(100) DEFAULT NULL, `classfication` varchar(100) DEFAULT NULL, `request_refer` varchar(100) DEFAULT NULL, `request_url` text, `title` varchar(100) DEFAULT NULL, `user_id` varchar(100) DEFAULT NULL, `language` varchar(100) DEFAULT NULL, `event` varchar(100) DEFAULT NULL, `event_desc` varchar(100) DEFAULT NULL, `event_type` varchar(100) DEFAULT NULL, `log_version` varchar(100) DEFAULT NULL, `project` varchar(100) DEFAULT NULL, `log_time` varchar(100) DEFAULT NULL, `unicookie` varchar(100) DEFAULT NULL, `session_id` varchar(100) DEFAULT NULL, `session_duration` varchar(100) DEFAULT NULL, `resolution` varchar(100) DEFAULT NULL, `channel` varchar(100) DEFAULT NULL, `refer` text, `user_agent` text, `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `httplogtime` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00', `httpforwardip` varchar(100) DEFAULT NULL, `country` varchar(100) DEFAULT NULL, `province` varchar(100) DEFAULT NULL, `city` varchar(100) DEFAULT NULL, `area` varchar(100) DEFAULT NULL, `mapweidu` varchar(100) DEFAULT NULL, `mapjingdu` varchar(100) DEFAULT NULL, `sync_mysqlid` bigint(20) NOT NULL AUTO_INCREMENT, PRIMARY KEY (`sync_mysqlid`,`create_time`), KEY `ix_create_time` (`create_time`), KEY `ix_httplogtime` (`httplogtime`), KEY `ix_httplogtime_userid` (`httplogtime`,`user_id`) ) ENGINE=InnoDB AUTO_INCREMENT=23217118 DEFAULT CHARSET=utf8 /*!50100 PARTITION BY RANGE ( UNIX_TIMESTAMP(`create_time`)) (PARTITION p20150318 VALUES LESS THAN (1426694400) ENGINE = InnoDB, PARTITION p20150319 VALUES LESS THAN (1426780800) ENGINE = InnoDB, PARTITION p20150320 VALUES LESS THAN (1426867200) ENGINE = InnoDB, PARTITION p20150321 VALUES LESS THAN (1426953600) ENGINE = InnoDB, PARTITION p20150322 VALUES LESS THAN (1427040000) ENGINE = InnoDB, 
PARTITION p20150323 VALUES LESS THAN (1427126400) ENGINE = InnoDB,
PARTITION p20150324 VALUES LESS THAN (1427212800) ENGINE = InnoDB,
PARTITION p20150325 VALUES LESS THAN (1427299200) ENGINE = InnoDB,
PARTITION p20150326 VALUES LESS THAN (1427385600) ENGINE = InnoDB,
PARTITION p20150327 VALUES LESS THAN (1427472000) ENGINE = InnoDB,
PARTITION p20150328 VALUES LESS THAN (1427558400) ENGINE = InnoDB,
PARTITION p20150329 VALUES LESS THAN (1427644800) ENGINE = InnoDB,
PARTITION p20150330 VALUES LESS THAN (1427731200) ENGINE = InnoDB,
PARTITION p20150331 VALUES LESS THAN (1427817600) ENGINE = InnoDB,
PARTITION p20150401 VALUES LESS THAN (1427904000) ENGINE = InnoDB,
PARTITION p20150402 VALUES LESS THAN (1427990400) ENGINE = InnoDB,
PARTITION p20150403 VALUES LESS THAN (1428076800) ENGINE = InnoDB,
PARTITION p20150404 VALUES LESS THAN (1428163200) ENGINE = InnoDB,
PARTITION p20150405 VALUES LESS THAN (1428249600) ENGINE = InnoDB,
PARTITION p20150406 VALUES LESS THAN (1428336000) ENGINE = InnoDB,
PARTITION p20150407 VALUES LESS THAN (1428422400) ENGINE = InnoDB,
PARTITION p20150408 VALUES LESS THAN (1428508800) ENGINE = InnoDB,
PARTITION p20150409 VALUES LESS THAN (1428595200) ENGINE = InnoDB,
PARTITION p20150410 VALUES LESS THAN (1428681600) ENGINE = InnoDB,
PARTITION p20150411 VALUES LESS THAN (1428768000) ENGINE = InnoDB,
PARTITION p20150412 VALUES LESS THAN (1428854400) ENGINE = InnoDB,
PARTITION p20150413 VALUES LESS THAN (1428940800) ENGINE = InnoDB,
PARTITION p20150414 VALUES LESS THAN (1429027200) ENGINE = InnoDB,
PARTITION p20150415 VALUES LESS THAN (1429113600) ENGINE = InnoDB,
PARTITION p20150416 VALUES LESS THAN (1429200000) ENGINE = InnoDB,
PARTITION p20150417 VALUES LESS THAN (1429286400) ENGINE = InnoDB,
PARTITION p20150418 VALUES LESS THAN (1429372800) ENGINE = InnoDB,
PARTITION p20150419 VALUES LESS THAN (1429459200) ENGINE = InnoDB,
PARTITION p20150420 VALUES LESS THAN (1429545600) ENGINE = InnoDB,
PARTITION p20150421 VALUES LESS THAN (1429632000) ENGINE = InnoDB,
PARTITION p20150422 VALUES LESS THAN (1429718400) ENGINE = InnoDB,
PARTITION p20150423 VALUES LESS THAN (1429804800) ENGINE = InnoDB,
PARTITION p20150424 VALUES LESS THAN (1429891200) ENGINE = InnoDB,
PARTITION p20150425 VALUES LESS THAN (1429977600) ENGINE = InnoDB,
PARTITION p20150426 VALUES LESS THAN (1430064000) ENGINE = InnoDB,
PARTITION p20150427 VALUES LESS THAN (1430150400) ENGINE = InnoDB,
PARTITION p20150428 VALUES LESS THAN (1430236800) ENGINE = InnoDB,
PARTITION p20150429 VALUES LESS THAN (1430323200) ENGINE = InnoDB,
PARTITION p20150430 VALUES LESS THAN (1430409600) ENGINE = InnoDB,
PARTITION p20150501 VALUES LESS THAN (1430496000) ENGINE = InnoDB,
PARTITION p20150502 VALUES LESS THAN (1430582400) ENGINE = InnoDB,
PARTITION p20150503 VALUES LESS THAN (1430668800) ENGINE = InnoDB,
PARTITION p20150504 VALUES LESS THAN (1430755200) ENGINE = InnoDB,
PARTITION p20150505 VALUES LESS THAN (1430841600) ENGINE = InnoDB,
PARTITION p20150506 VALUES LESS THAN (1430928000) ENGINE = InnoDB,
PARTITION p20150507 VALUES LESS THAN (1431014400) ENGINE = InnoDB,
PARTITION p20150508 VALUES LESS THAN (1431100800) ENGINE = InnoDB,
PARTITION p20150509 VALUES LESS THAN (1431187200) ENGINE = InnoDB,
PARTITION p20150510 VALUES LESS THAN (1431273600) ENGINE = InnoDB,
PARTITION p20150511 VALUES LESS THAN (1431360000) ENGINE = InnoDB) */

create index idx_tmp on track_raw_wap211_log (log_time);

Slave: MariaDB – 5.5.5-10.0.15-MariaDB-log MariaDB Server

21407 | root | localhost | jkgj_log | Query | 34 | copy to tmp table | create index idx_tmp on track_raw_wap211_log (log_time) | 0.000 |

The DROP INDEX command still has this problem.
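If you need to confirm that a slave is stuck in this state, one approach is to grep the SHOW PROCESSLIST output for the state column. A minimal sketch, using the processlist row above as canned input; in practice you would pipe the output of `mysql -e "SHOW PROCESSLIST"` instead, and the host/credentials would depend on your setup:

```shell
# Canned SHOW PROCESSLIST row copied from the report above; in a real check
# you would pipe:  mysql -e "SHOW PROCESSLIST"  (connection details vary).
row='21407 | root | localhost | jkgj_log | Query | 34 | copy to tmp table | create index idx_tmp on track_raw_wap211_log (log_time) | 0.000 |'

# Count rows stuck in the "copy to tmp table" state.
printf '%s\n' "$row" | grep -c 'copy to tmp table'
```

A non-zero count tells you the slave is doing a full table copy rather than an in-place index build.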

——————————————————————

When I repeat these steps on MariaDB 10.0.16, everything is OK.

This bug has been reported to MariaDB:

https://mariadb.atlassian.net/browse/MDEV-8038


PlanetMySQL Voting: Vote UP / Vote DOWN

Add zsh to Fedora

One of my students requested an alternative to the bash shell. It was interesting to hear that he wanted me to install zsh in my Fedora image.

This post shows how to add zsh to my Fedora image because I already released a new one for the term without the zsh shell. You use the yum utility as the root user to install the zsh package:

yum install -y zsh

It should produce an output stream like the following; in this case it required falling back to an alternate mirror site:

Loaded plugins: langpacks, refresh-packagekit
mysql-connectors-community                               | 2.5 kB  00:00
mysql-tools-community                                    | 2.5 kB  00:00
mysql56-community                                        | 2.5 kB  00:00
pgdg93                                                   | 3.6 kB  00:00
updates/20/x86_64/metalink                               |  14 kB  00:00
updates                                                  | 4.9 kB  00:00
(1/3): mysql-connectors-community/20/x86_64/primary_db   | 8.8 kB  00:00
(2/3): pgdg93/20/x86_64/primary_db                       |  83 kB  00:01
(3/3): updates/20/x86_64/primary_db                      |  13 MB  00:13
updates/20/x86_64/pkgtags FAILED
http://mirror.utexas.edu/fedora/linux/updates/20/x86_64/repodata/1ea83dc402a2bcba53f9b0011ecfa0d579b5a316e4c7f01ec5f1166dcdca138f-pkgtags.sqlite.gz: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
(1/2): updates/20/x86_64/updateinfo                      | 1.9 MB  00:07
(2/2): updates/20/x86_64/pkgtags                         | 1.4 MB  00:01
Resolving Dependencies
--> Running transaction check
---> Package zsh.x86_64 0:5.0.7-6.fc20 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package       Arch           Version               Repository         Size
================================================================================
Installing:
 zsh           x86_64         5.0.7-6.fc20          updates           2.5 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 2.5 M
Installed size: 5.9 M
Downloading packages:
zsh-5.0.7-6.fc20.x86_64.rpm                              | 2.5 MB  00:03
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction (shutdown inhibited)
  Installing : zsh-5.0.7-6.fc20.x86_64                                      1/1
  Verifying  : zsh-5.0.7-6.fc20.x86_64                                      1/1

Installed:
  zsh.x86_64 0:5.0.7-6.fc20

Complete!

Once you’ve installed zsh, you can configure it like you would the bash shell. You make edits to individual .zshrc files and system-wide changes to the /etc/zshrc file. You can find the documentation for configuring zsh in the User’s Guide to the Z-Shell.

The following is a modified .zshrc file. The changes enable the up-arrow in Oracle’s sqlplus and provide a color prompt:

# Source global definitions
if [ -f /etc/zshrc ]; then
  . /etc/zshrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# Set the JAVA_HOME path.
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75-2.5.4.2.fc20.x86_64

# Set the CLASSPATH path.
export CLASSPATH=/usr/share/java/mysql-connector-java.jar:.

# User specific aliases and functions
. /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh

# Wrap sqlplus with rlwrap to edit prior lines with the
# up, down, left and right keys. Use `command` to reach the
# real sqlplus binary, otherwise the function calls itself.
sqlplus() {
  if [ "$RLWRAP" = "0" ]; then
    command sqlplus "$@"
  else
    rlwrap sqlplus "$@"
  fi
}

# Set the bindkey.
bindkey -v
bindkey "^R" history-incremental-search-backward
export EDITOR="vim"

# history stuff
HISTFILE=~/.zsh-histfile
HISTSIZE=2000

# Set vi as a command line editor.
set -o vi

autoload -U colors && colors
PS1="[%{$fg[red]%}%n%{$reset_color%}@%{$fg[blue]%}%m %{$fg[yellow]%}%~%{$reset_color%}% ]$ "
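One subtlety in the sqlplus wrapper above: a shell function that shares its name with the command it wraps must reach the real binary explicitly (via `command` or a full path), or the function ends up calling itself recursively. A minimal sketch of the pattern with a made-up command name (`greet` and its output text are illustrations, not part of the original file):

```shell
# A function that shadows a command name; `command` bypasses function
# lookup so we reach the real builtin/binary instead of recursing.
greet() {
  command echo "wrapped:" "$@"
}

greet hello   # prints: wrapped: hello
```

The same rule applies inside the rlwrap branch only if the wrapped name appears there too; `rlwrap sqlplus` is safe because rlwrap spawns the binary directly rather than going through shell function lookup.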

The zsh prompt looks like the following:

[student@localhost ~]$

As always, I hope this helps those looking for this type of information.

