Planet MySQL

MySQL InnoDB Cluster Setup and Server Failover / Restart

MySQL InnoDB Cluster Setup and Configuration

Reference :
https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-production-deployment.html

Pre-requisite Assumptions:
    MySQL Server 8.0.12+ installation : /usr/local/mysql
    MySQL Shell installation : /usr/local/shell
    MySQL Router installation : /usr/local/router
    Hostname and IP must be resolvable.
        Make sure /etc/hosts has correct IP and hostname entries for ALL MySQL Server machines.

The Video [ https://youtu.be/_jR_bJGTf-o ] provides the full steps showing the Demonstration of
1. Setting up of 3 x MySQL Servers on 1 VM
    a.  Configuration files my1.cnf, my2.cnf and my3.cnf (a sample file is shown after this step)

    b.  MySQL  Server (mysqld) initialization
         # mysqld --defaults-file=<config file> --initialize-insecure

    c.  Start up MySQL Server
        # mysqld_safe --defaults-file=<config file> &
        
    Note : It is possible to set up the MySQL Servers on different machines.  The steps should be the same.
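    A minimal sketch of one of the configuration files (my1.cnf); the port, paths and server_id shown here are illustrative only and must be unique per instance:

    [mysqld]
    port=3310
    datadir=/home/mysql/data/data1
    socket=/home/mysql/data/data1/my1.sock
    server_id=1
    log_error=/home/mysql/data/data1/my1.err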
   
2. Using MySQL Shell (mysqlsh) to configure the MySQL Instance (LOCALLY)
    a. mysqlsh > dba.configureInstance( "root@localhost:<port>", {clusterAdmin:"gradmin", clusterAdminPassword:"grpass"})
   
    The configuration should be done on each server (LOCALLY), because by default the MySQL installation creates 'root@localhost', which can only access the database LOCALLY.

3.  Execute the SQL "reset master; reset slave;" on all the servers so that ALL of them start from the same (empty) GTID state.   (If data has been imported to all nodes, it is good to reset the state.)
    # mysql -uroot -h127.0.0.1 -P<port> -e "reset master;reset slave;"

4. Creating the Cluster
     a.  Connect to any one of the MySQL Server with "mysqlsh"
     # mysqlsh --uri gradmin:grpass@<hostname1>:<port1>
     mysqlsh> var cl = dba.createCluster('mycluster')
     b. Add the 2 remaining instances to the Cluster
     mysqlsh> cl.addInstance('gradmin:grpass@<hostname2>:<port2>')
     mysqlsh> cl.addInstance('gradmin:grpass@<hostname3>:<port3>')
     c. Show the status of the Cluster
      mysqlsh> cl.status()

The MySQL InnoDB Cluster should now be configured and running.

5. MySQL Router Setup and Startup
   a. Bootstrapping the configuration (Assuming /home/mysql/config/ folder exists)
       # mysqlrouter --bootstrap=gradmin:grpass@<hostname1>:<port1> --directory /home/mysql/config/mysqlrouter01 --force
    b. Start up MySQL Router
       # cd /home/mysql/config/mysqlrouter01;./start.sh
    The 'start.sh' script is created by the bootstrapping process.   Once the Router is started, the default port 6446 is used for RW routing and port 6447 for RO routing.

6. Testing MySQL Routing for RW
    # while [ 1 ]
    # do
    #    sleep 1
    #    mysql -ugradmin -pgrpass -h127.0.0.1 -P6446 -e "select @@hostname, @@port;"
    # done
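    To exercise the RO routing port mentioned in Step 5 as well, a hedged variation of the same loop can point at port 6447:
    # while [ 1 ]
    # do
    #    sleep 1
    #    mysql -ugradmin -pgrpass -h127.0.0.1 -P6447 -e "select @@hostname, @@port;"
    # done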

7. Kill (or shutdown) the RW server
    # mysql -uroot -h127.0.0.1 -P<port> -e "shutdown;"      Note : root can only access locally
    Check the routing access from Step 6; the hostname/port should have switched/failed over.

Enjoy, and check out the Video [ https://youtu.be/_jR_bJGTf-o ].

Where you can find MySQL in October - December 2018 - part 1.

We would like to announce the shows & conferences where you can find the MySQL Community Team or MySQL representatives. Please be aware that the list below may be subject to change.

October 2018

  • GITEX, Dubai, United Arab Emirates, October 14-18, 2018
    • Same as last year, Oracle & MySQL are part of the GITEX conference. There is an Oracle booth (Stand A5-01, Hall 5) as well as a MySQL one (POD2, Hall 5) where you can find us. 
  • ZendCon, Las Vegas, US, October 15-17, 2018
    • We are an Exhibitor sponsor, same as last year; however, this year we are going to have a new, very cool booth design! Come check it out and talk to us there!
    • Also, if you are around, do not miss the sessions by David Stokes, the MySQL Community Manager, as follows: 
      • "MySQL without the SQL -- Oh my!" scheduled for today @11:30-12:30pm
      • "MySQL 8 performance tuning" scheduled for tomorrow, Oct 17 @4:00-5:00pm
    • We are looking forward to seeing and talking to you at ZendCon!
  • JCConf, Taipei, Taiwan, October 19, 2018
    • Same as last year, we are again supporting the Java Community Conference (JCConf) in Taiwan. This year you can find our staff at the Oracle/MySQL booth in the expo area and listen to the following MySQL & Oracle talks:
      • "How MySQL support NoSQL with Java applications" by Ivan Tu, the Principal Sales consultant, MySQL APAC.
      • "GraalVM (polyglot environment supports Oracle, MySQL and Java)" given by Yudi Zheng, Senior researcher at Oracle Lab organization.
    • We are looking forward to talking to you there!
  • Oracle Open World & MySQL Community Reception, San Francisco, October 22-25, 2018
    • MySQL is again part of the OOW show; please check the website for the MySQL talks in the agenda. And same as last year, the MySQL Community team organizes a MySQL Reception. It is an evening event organized by the MySQL Community on October 23rd at Samovar Tea Lounge at 730 Howard St., San Francisco, CA from 7-9 pm. 
    • Join the MySQL team at the MySQL Reception to celebrate the dynamics and growth of the MySQL community!
  • Forum PHP, Paris, France, October 25-26, 2018
    • We are a Bronze sponsor of Forum PHP this year with an approved talk: "MySQL 8.0: What's new". Please check the organizers' website for further details.
  • JSFoo, Bangalore, India October 26-27, 2018
    • This is the first time we are attending this JavaScript conference; this year we are giving a talk there. The talk is scheduled for Sat, Oct 27, 2018 @13:40-14:10 on the topic "MySQL 8 loves JavaScript". The speaker is Sanjay Manwani, Developer Evangelism at Oracle.

November 2018

  • Madison PHP, Madison, US, November 2-3, 2018
    • The MySQL Community Team is a Community Sponsor of the Madison PHP conference again this year.
  • MOPCON 2018, Taipei, Taiwan, November 3-4, 2018
    • This year, for the first time, we are going to have a talk at MOPCON 2018. Please do not miss the opportunity to listen to the talk given by Ivan Tu, the MySQL Principal Consultant Manager, as follows:
      • "The mobile application supported by new generation MySQL 8.0" scheduled for Nov 4, @11:05-11:45am in BigData track.
    • We are looking forward to meeting & talking to you there!

More shows to be announced soon...

Cluster Performance Validation via Load Testing

Your database cluster contains your most business-critical data and therefore proper performance under load is critical to business health. If response time is slow, customers (and staff) get frustrated and the business suffers a slow-down.

If the database layer is unable to keep up with demand, all applications can and will suffer slow performance as a result.

To prevent this situation, use load tests to determine the throughput as objectively as possible.

In the sample load.pl script below, increase load by increasing the thread quantity.

You could also run this on a database with data in it without polluting the existing data since new test databases are created to match each node’s hostname for uniqueness.

Note: The examples in this blog post assume that a Connector is running on each database node and listening on port 3306, and that the database itself is listening on non-standard port 13306.

Install Sysbench

As the root user:

wget https://www.percona.com/redir/downloads/percona-release/redhat/latest/percona-release-0.1-4.noarch.rpm
rpm -Uvh percona-release-0.1-4.noarch.rpm
yum install sysbench
sysbench

https://github.com/akopytov/sysbench#usage

Create and Prepare the Test Databases

First, prepare the per-host test databases by writing via the Connector so that the create database commands are replicated from the master to all nodes.

Repeat on all nodes as OS user tungsten:

export HOST=`/bin/hostname -s`; mysql -u app_user -P3306 -psecret -h127.0.0.1 -e "create database test_$HOST;"

sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_`/bin/hostname -s` --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 --threads=1 --events=0 --time=60 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua prepare
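Once testing is finished, the per-host test data can be removed. A hedged cleanup sketch that mirrors the prepare step above (sysbench's cleanup action drops the sbtest tables it created):

sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_`/bin/hostname -s` --db-ps-mode=disable --tables=2 /usr/share/sysbench/oltp_read_write.lua cleanup

mysql -u app_user -P3306 -psecret -h127.0.0.1 -e "drop database test_`/bin/hostname -s`;"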

Create and run the load test script

touch load.pl
chmod 755 load.pl
vim load.pl
./load.pl

Below is the load.pl script:

#!/usr/bin/perl
# load.pl
use strict;

our $time    = 58;
our $threads = 2;
our $name    = `/bin/hostname -s`;
chomp($name);

our $cmd = <<EOT;
sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_$name --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 --threads=$threads --events=0 --time=$time --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua run
EOT

print "Executing: $cmd\n";
system($cmd);
exit 0;

Here is an example run with threads set to 1 for a small test:

tungsten@db5:/home/tungsten # ./load.pl
Executing: sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_db5 --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 --threads=1 --events=0 --time=58 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua run

sysbench 1.0.15 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Initializing worker threads...

Threads started!

SQL statistics:
    queries performed:
        read:       71204
        write:      8220
        other:      22296
        total:      101720
    transactions:   5086   (87.68 per sec.)
    queries:        101720 (1753.69 per sec.)
    ignored errors: 0      (0.00 per sec.)
    reconnects:     0      (0.00 per sec.)

General statistics:
    total time:             58.0014s
    total number of events: 5086

Latency (ms):
         min:             9.09
         avg:            11.40
         max:           218.45
         95th percentile: 13.70
         sum:         57978.28

Threads fairness:
    events (avg/stddev):         5086.0000/0.00
    execution time (avg/stddev): 57.9783/0.00

Install into cron if desired

We use this script on our lab clusters to generate ongoing load. To do so, we simply add the 58-second sysbench load job (--time=58) into cron to be executed once per minute:

tungsten@db3:/home/tungsten # crontab -e

* * * * * /home/tungsten/load.pl > /dev/null 2>&1

The Wrap-Up

Increase load gradually by increasing the number of threads per host. Use the script on more than one host to increase load further. Use commands like top and iostat to monitor system resource utilization.

By testing through the Connector, you are testing multiple things at once:

  • Overall throughput via the Connector (also helps prove out the overall network bandwidth)
  • MySQL read and write capabilities on the Master in terms of throughput and system utilization
  • Replicator speed to the slaves – check the slave latency – are they up to date? This tests the replicator, the network and the write speed of the slave databases
  • Behavior and performance of the application (do a switch under load. Does it reconnect properly? Are there any issues or errors?)

Perform as much testing as possible BEFORE you go live and save yourself tons of headaches!

To learn about Continuent solutions in general, check out https://www.continuent.com/solutions

For more information about monitoring Continuent clusters, please visit https://docs.continuent.com/tungsten-clustering-6.0/ecosystem-nagios.html.

Continuent Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us.

[Solved] MySQL User Operation - ERROR 1396 (HY000): Operation CREATE / DROP USER failed for 'user'@'host'

Error Message: ERROR 1396 (HY000): Operation CREATE USER failed for 'admin'@'%'
Generic Error Message: Operation %s failed for %s
Error Scenario: Operation CREATE USER failed
Operation DROP USER failed

Reason: The reason for this error is that you are trying to perform a user operation but the user does not exist on the MySQL system. In the DROP USER case, details of the user are still cached somewhere in the system even though you have already dropped the user from the MySQL server.
Resolution: Revoke all access granted to the user, drop the user, and run the FLUSH PRIVILEGES command to clear the caches. Then create/drop/alter the user; it will work.
REVOKE ALL ON *.* FROM 'user'@'host';
DROP USER 'user'@'host';
FLUSH PRIVILEGES;
Grant Tables: The following tables will help you identify the user-related information (as of MySQL 5.7):

mysql.user: User accounts, global privileges, and other non-privilege columns
mysql.db: Database-level privileges
mysql.tables_priv: Table-level privileges
mysql.columns_priv: Column-level privileges
mysql.procs_priv: Stored procedure and function privileges
mysql.proxies_priv: Proxy-user privilege
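Based on the grant tables above, a quick hedged check for a leftover row for the account in the error message (adjust the user and host to your case) is:

SELECT user, host FROM mysql.user WHERE user = 'admin' AND host = '%';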
Related MySQL Bug Reports:
https://bugs.mysql.com/bug.php?id=28331
https://bugs.mysql.com/bug.php?id=86523
I hope this post helps you. If you have faced this error in other scenarios, or if you know another workaround or solution for it, please add it in the comment section. It will be helpful for other readers.



Percona Live Europe Tutorial: Query Optimization and TLS at Large Scale

For Percona Live Europe this year, I had a workshop on query optimization accepted, as well as a 50-minute talk covering TLS for MySQL at Large Scale, about our experiences at the Wikimedia Foundation.

Workshop

The 3-hour workshop on Monday, titled Query Optimization with MySQL 8.0 and MariaDB 10.3: The Basics is a beginners’ tutorial–though dense in content. It’s for people who are more familiar with database storage systems other than InnoDB for MySQL, MariaDB or Percona Server. Or who, already familiar with them, are suffering performance and scaling issues with their SQL queries. If you get confused with the output of basic commands like EXPLAIN and SHOW STATUS and want to learn some SQL-level optimizations, such as creating the right indexes or altering the schema to get the most out of the performance of your database server, then you want to attend this tutorial before going into more advanced topics. Even veteran DBAs and developers may learn one or two new tricks, only available on the latest server versions!

Something that people may enjoy is that, during the tutorial, every attendee will be able to throw queries at a real-time copy of the Wikipedia database servers—or set up their own offline Wikipedia copy on their laptop. They’ll get to practice for themselves what is being explained—so it will be fully hands-on. I like my sessions to be interactive, so all attendees should get ready to answer questions and think through the proposed problems by themselves!

Fifty-minute talk

My 50-minute talk, TLS for MySQL at Large Scale, will be a bit more advanced, although maybe more attractive to users of other database technologies. On Tuesday, I will tell the tale of the mistakes and lessons learned while deploying encryption (TLS/SSL) for the replication, administration, and client connections of our databases. At the Wikimedia Foundation we take the privacy of our users very seriously—Wikipedia readers, project contributors, data reusers and every member of our community—and while none of our databases are publicly reachable, our aim is to encrypt every single connection between servers, even within our datacenters.

However, when people talk about security topics, most of the time they are trying to show off the good parts of their setup while hiding the ugly parts, or they are too theoretical to actually teach you something. My focus will not be on the security principles everybody should follow, but on the purely operational problems and the solutions we needed to deploy, as well as what we would have done differently had we known better, while deploying TLS on our 200+ MariaDB server pool.

Looking forward…

For me, as an attendee, I always look forward to the ProxySQL sessions, as it is something we are currently deploying in our production. Also, I want to know more about the maturity and roadmap of the newest MySQL and MariaDB releases, as they keep adding new interesting features we need, as well as cluster technologies such as Galera and InnoDB Cluster. I like, too, to talk with people developing and using other technologies outside of my stack, and you never know when they will fill in a need we have (analytics, compression, NoSQL, etc.).

But above all, the thing I enjoy the most is the networking—being able to talk with professionals that suffer the same problems that I do is something I normally cannot do, and that I enjoy doing a lot during Percona Live.

Jaime Crespo in a Percona Live T-Shirt – why not come to this year’s event and start YOUR collection.

The post Percona Live Europe Tutorial: Query Optimization and TLS at Large Scale appeared first on Percona Community Blog.

Identifying High Load Spots in MySQL Using Slow Query Log and pt-query-digest

pt-query-digest is one of the most commonly used tools when it comes to query auditing in MySQL®. By default, pt-query-digest reports the top ten queries consuming the most time inside MySQL. A query that takes longer than the set threshold to complete is considered slow, but it’s not always true that tuning such queries makes them faster. Sometimes, when resources on the server are busy, it will impact every other operation on the server, and so will impact queries too. In such cases, you will see the proportion of slow queries go up. That can also include queries that work fine in general.

This article explains a small trick to identify such spots using pt-query-digest and the slow query log. pt-query-digest is a component of Percona Toolkit, open source software that is free to download and use.

Some sample data

Let’s have a look at sample data in Percona Server 5.7. The slow query log is configured to capture queries longer than ten seconds, with no limit on the rate of logging (rate limiting is generally used to throttle the I/O incurred when writing slow queries to the log file).

mysql> show variables like 'log_slow_rate%' ;
+---------------------+---------+
| Variable_name       | Value   |
+---------------------+---------+
| log_slow_rate_limit | 1       |  --> Log all queries
| log_slow_rate_type  | session |
+---------------------+---------+
2 rows in set (0.00 sec)

mysql> show variables like 'long_query_time' ;
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |  --> 10 seconds
+-----------------+-----------+
1 row in set (0.01 sec)

When I run pt-query-digest, I see in the summary report that 80% of the queries have come from just three query patterns.

# Profile
# Rank Query ID                      Response time    Calls R/Call   V/M
# ==== ============================= ================ ===== ======== =====
#    1 0x7B92A64478A4499516F46891... 13446.3083 56.1%   102 131.8266  3.83 SELECT performance_schema.events_statements_history
#    2 0x752E6264A9E73B741D3DC04F...  4185.0857 17.5%    30 139.5029  0.00 SELECT table1
#    3 0xAFB5110D2C576F3700EE3F7B...  1688.7549  7.0%    13 129.9042  8.20 SELECT table2
#    4 0x6CE1C4E763245AF56911E983...  1401.7309  5.8%    12 116.8109 13.45 SELECT table4
#    5 0x85325FDF75CD6F1C91DFBB85...   989.5446  4.1%    15  65.9696 55.42 SELECT tbl1 tbl2 tbl3 tbl4
#    6 0xB30E9CB844F2F14648B182D0...   420.2127  1.8%     4 105.0532 12.91 SELECT tbl5
#    7 0x7F7C6EE1D23493B5D6234382...   382.1407  1.6%    12  31.8451 70.36 INSERT UPDATE tbl6
#    8 0xBC1EE70ABAE1D17CD8F177D7...   320.5010  1.3%     6  53.4168 67.01 REPLACE tbl7
#   10 0xA2A385D3A76D492144DD219B...   183.9891  0.8%    18  10.2216  0.00 UPDATE tbl8
# MISC 0xMISC                          948.6902  4.0%    14  67.7636  0.0  <10 ITEMS>

Query #1 is generated by the qan-agent from PMM and runs approximately once a minute. Its results are handed over to the PMM Server. Similarly, queries #2 & #3 are pretty simple. I mean, they scan just one row and return either zero or one rows. They also use indexes, which makes me think this is not caused by something purely within MySQL. I wanted to know if I could find any common aspect of all these occurrences.

Let’s take a closer look at the queries recorded in slow query log.

# grep -B3 DIGEST mysql-slow_Oct2nd_4th.log
....
....
# User@Host: ztrend[ztrend] @ localhost []  Id: 6431601021
# Query_time: 139.279651  Lock_time: 64.502959  Rows_sent: 0  Rows_examined: 0
SET timestamp=1538524947;
SELECT DIGEST, CURRENT_SCHEMA, SQL_TEXT FROM performance_schema.events_statements_history;
# User@Host: ztrend[ztrend] @ localhost []  Id: 6431601029
# Query_time: 139.282594  Lock_time: 83.140413  Rows_sent: 0  Rows_examined: 0
SET timestamp=1538524947;
SELECT DIGEST, CURRENT_SCHEMA, SQL_TEXT FROM performance_schema.events_statements_history;
# User@Host: ztrend[ztrend] @ localhost []  Id: 6431601031
# Query_time: 139.314228  Lock_time: 96.679563  Rows_sent: 0  Rows_examined: 0
SET timestamp=1538524947;
SELECT DIGEST, CURRENT_SCHEMA, SQL_TEXT FROM performance_schema.events_statements_history;
....
....

Now you can see two things.

  • All of them have the same Unix timestamp
  • All of them were spending more than 70% of their execution time waiting for some lock.
Analyzing the data from pt-query-digest

Now I want to check if I can group the count of queries based on their time of execution. If multiple queries are captured into the slow query log at a given time, the time will be printed for the first query but not for all of them. Fortunately, in this case I can rely on the Unix timestamp to compute the counts, since the timestamp gets captured for every query. Luckily, without a long struggle, a combination of the grep and awk utilities displayed what I wanted.

# grep -A1 Query_time mysql-slow_Oct2nd_4th.log | grep SET | awk -F "=" '{ print $2 }' | uniq -c
      2 1538450797;
      1 1538524822;
      3 1538524846;
      7 1538524857;
    167 1538524947;    ---> 72% of queries have happened at this timestamp.
      1 1538551813;
      3 1538551815;
      6 1538602215;
      1 1538617599;
     33 1538631015;
      1 1538631016;
      1 1538631017;

You can use the command below to check the regular date time format of a given timestamp. So, Oct 3, 05:32 is when there was something wrong on the server:

# date -d @1538524947
Wed Oct 3 05:32:27 IST 2018
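If you want the counts and a human-readable time in a single pass, a hedged extension of the pipeline above (assuming GNU date) could be:

# grep -A1 Query_time mysql-slow_Oct2nd_4th.log | grep SET | awk -F "=" '{ print $2 }' | uniq -c | \
    while read count ts; do echo "$count  $(date -d @${ts%;})"; done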

Query tuning can be carried out alongside this, but identifying such spots helps avoid spending time on query tuning where badly written queries are not the problem. Having said that, from this point further troubleshooting may take different sub-paths, such as checking log files at that particular time, looking at CPU reports, reviewing past pt-stalk reports if they are set up to run in the background, checking dmesg, etc. This approach is useful for identifying at what time (or time range) MySQL was more stressed, using just the slow query log, when no robust monitoring tools like Percona Monitoring and Management (PMM) are deployed.

Using PMM to monitor queries

If you have PMM, you can review Query Analytics to see the topmost slow queries, along with details like execution counts, load etc. Below is a sample screen copy for your reference:

NOTE: If you use Percona Server for MySQL, the slow query log can report time in microseconds. It also supports extended logging of other statistics about query execution. These provide extra power to gain insights into query processing. You can see more information about these options here.

Invitation to meet Galera Cluster developers at Oracle OpenWorld San Francisco

We will have a kiosk at Moscone Center, south exhibition hall, Oracle’s Data Management area number 123, close to high availability area and exits 16 and 18. Our kiosk number is DBA-P1.

Our CEO and Co-Founder Seppo Jaakola will host a presentation highlighting the main features of our upcoming Galera Cluster 4.0. The presentation will take place Monday, Oct 22, 1:00 p.m. – 1:20 p.m at the Exchange @ Moscone South – Theater 2. Seats are limited!

Come and meet us! Let’s discuss your MySQL high availability plan or your Galera Cluster deployment. If you want to set up a meeting with us, please email info@galeracluster.com with a meeting request.

MySQL Adventures: GTID Replication In AWS RDS

You have probably all heard that AWS announced today that RDS has started to support GTID transactions. I’m a great fan of RDS, though not because of GTID; RDS already has good settings and configurations to perform well. Many of you have read the AWS What’s New page regarding GTID, but here we are going to talk about the actual benefits and drawbacks.

RDS supports GTID on MySQL 5.7.23 or later, and AWS released this version only on Oct 10 (two days ago). So, for now, this is the only version which supports GTID.

NOTE: GTID is supported only on RDS; it is not available for Aurora (it may be supported in the future).

Before configuring GTID, let's have a look at what GTID is.

  • GTID stands for Global Transaction Identifier.
  • It’ll generate a unique ID for each committed transaction.
  • A GTID is referred to as server_UUID:transaction_id.
  • GTID replication is a better solution in a multi-master environment.
  • To learn more about GTID, hit here.
GTID in RDS:
  1. You can use GTID only on RDS, not in Aurora.

2. There are 4 types of GTID modes in RDS.

From AWS Docs,

  • OFF — No GTID. Anonymous transactions are replicated.
  • OFF_PERMISSIVE — New transactions are anonymous transactions, but all transactions can be replicated.
  • ON_PERMISSIVE — New transactions are GTID transactions, but all transactions can be replicated.
  • ON — New transactions are GTID transactions, and a transaction must be a GTID transaction to be replicated.

3. The default GTID mode in RDS is OFF_PERMISSIVE.

4. RDS supports 3 consistency levels for GTID (enforce_gtid_consistency).

  • OFF allows transactions to violate GTID consistency.
  • ON prevents transactions from violating GTID consistency.
  • WARN allows transactions to violate GTID consistency but generates a warning when a violation occurs.
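If you manage parameter groups from the AWS CLI rather than the console, a minimal hedged sketch of setting both parameters in a custom parameter group could look like the following (the group name custom-mysql57 is an assumption; since these are static parameters, the instance must be rebooted afterwards):

aws rds modify-db-parameter-group \
    --db-parameter-group-name custom-mysql57 \
    --parameters "ParameterName=gtid-mode,ParameterValue=ON,ApplyMethod=pending-reboot" \
                 "ParameterName=enforce_gtid_consistency,ParameterValue=ON,ApplyMethod=pending-reboot"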
Replication from RDS to EC2 with GTID:

I have launched an RDS and enabled the below parameters in the Parameter group.

gtid-mode = ON
enforce_gtid_consistency = ON

#From RDS Console
Backup Retention Period = 2 Days (you can set this as you need)

Create a database with some data:

CREATE DATABASE searcedb;

USE searcedb;

CREATE TABLE dba_profile
(
id INT auto_increment PRIMARY KEY,
name VARCHAR(10),
fav_db VARCHAR(10)
);

INSERT INTO dba_profile (name, fav_db) VALUES ('sqladmin', 'MSSQL');
INSERT INTO dba_profile (name, fav_db) VALUES ('mac', 'MySQL');

Create the user for replication:

CREATE USER 'rep_user'@'%' IDENTIFIED BY 'rep_user';

GRANT REPLICATION SLAVE ON *.* TO 'rep_user'@'%' IDENTIFIED BY 'rep_user';
FLUSH PRIVILEGES;

Take a dump on RDS:

mysqldump \
-h sqladmin-mysql-rds.xxxxx.rds.amazonaws.com \
-u sqladmin -p \
--routines \
--events \
--triggers \
--databases searcedb > dbdump.sql

The above command will dump searcedb with stored procedures, triggers, and events. If you have multiple databases, then use --databases db1 db2 db3. Generally, for replicating a database we use --master-data=2 to get the binlog file and position. But this is GTID replication, so the dump file instead contains the last executed GTID information.

$ grep PURGED dbdump.sql
SET @@GLOBAL.GTID_PURGED='eac87cf0-cdfe-11e8-9275-0aecd3b2835c:1-13';

You may get this warning message during the dump. It is just saying that the dump file contains the GTID_PURGED command.

Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.

We can’t restore the dump on a MySQL server where GTID is not enabled.

Restore the dump on EC2:

Enable GTID before restoring the DB. RDS will replicate all the databases, and in RDS MySQL there are some RDS-related tables which are not available in EC2 MySQL. There is a table called heartbeat into which RDS keeps inserting data about the RDS health, and these statements will also be replicated to the slave. replicate-ignore-db=mysql will not work here, because the statements mention the DB name explicitly. So we need to ignore these tables on the slave with replicate-ignore-table.

Enable GTID on EC2:

vi /etc/mysql/mysql.conf.d/mysqld.cnf

server-id = 1234
gtid_mode = ON
enforce_gtid_consistency = ON
log-bin
log-slave-updates

#Ignore tables
replicate_ignore_table = mysql.rds_configuration,mysql.rds_global_status_history_old,mysql.rds_heartbeat2,mysql.rds_history,mysql.rds_replication_status,mysql.rds_sysinfo
#Restart MySQL
service mysql restart

Restore the DB:

mysql -u root -p < dbdump.sql

Establish the replication:

CHANGE MASTER TO
  MASTER_HOST="sqladmin-mysql-rds.xxxxx.rds.amazonaws.com",
  MASTER_USER="rep_user",
  MASTER_PASSWORD="rep_user",
  MASTER_PORT=3306,
  MASTER_AUTO_POSITION = 1;
START SLAVE;

Check the Replication:

show slave status\G
             Slave_IO_State: Waiting for master to send event
                Master_Host: sqladmin-mysql-rds.xxxx.rds.amazonaws.com
            Master_Log_File: mysql-bin-changelog.000030
        Read_Master_Log_Pos: 551
             Relay_Log_File: ip-172-31-29-127-relay-bin.000002
              Relay_Log_Pos: 444
      Relay_Master_Log_File: mysql-bin-changelog.000030
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
           ...........
          Executed_Gtid_Set: eac87cf0-cdfe-11e8-9275-0aecd3b2835c:1-22
              Auto_Position: 1

Let's insert some rows:

On Master:

INSERT INTO dba_profile (name, fav_db) VALUES ('surface', 'PostgreSQL');

On Slave:

mysql -u root -p
Enter Password:
mysql> select * from searcedb.dba_profile;
+----+----------+------------+
| id | name     | fav_db     |
+----+----------+------------+
|  1 | sqladmin | MSSQL      |
|  2 | mac      | MySQL      |
|  3 | surface  | PostgreSQL |
+----+----------+------------+
3 rows in set (0.00 sec)

Enable GTID Replication on an existing RDS master and slave / RDS Read Replica:

On Master RDS:
  1. Make sure we are running MySQL 5.7.23 or later.

2. Use the custom parameter group.

3. In the Parameter group,

gtid-mode = ON
enforce_gtid_consistency = ON

#From RDS Console
Backup Retention Period = 2 Days (you can set this as you need)

4. Reboot the RDS instance to apply these changes.

On Slave:
  1. Use a custom parameter group (it's good practice to have separate parameter groups for the Master and the Slave).
  2. If you are using RDS Read Replica, then in the Parameter group,
gtid-mode = ON
enforce_gtid_consistency = ON

3. If you are using EC2 as a Replica then in my.cnf

gtid_mode = ON
enforce_gtid_consistency = ON

4. Reboot the Read Replica.

5. At this point, your read replica will still use binlog-position-based replication. Run the command below to start replication with GTID.

CALL mysql.rds_set_master_auto_position(1);

How to Disable GTID in RDS:

Caution: You need to follow these steps exactly as given. Otherwise it’ll break your replication and you may lose some transactions.
  1. Disable the GTID auto position for replication.
CALL mysql.rds_set_master_auto_position(0);

2. In the parameter group, set gtid-mode = ON_PERMISSIVE

3. Reboot the Replica.

4. Again in the parameter group, set gtid-mode = OFF_PERMISSIVE

5. Make sure all GTID transactions are applied on the Replica. To check this, follow the steps below.

  • On Master, get the current binlog file name and its position.
show master status\G
*************************** 1. row ***************************
             File: mysql-bin-changelog.000039
         Position: 827
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 2f5e8f57-ce2a-11e8-8874-0a1b0ee9b48e:1-27
  • Make a note of the output.
  • On each Read Replica, Run the below command. (replace the binlog filename and position with your output)
SELECT MASTER_POS_WAIT('mysql-bin-changelog.000039', 827);
  • If the result is 0, then we are fine. Otherwise, we need to wait for some time and run the same command again.
+----------------------------------------------------+
| MASTER_POS_WAIT('mysql-bin-changelog.000039', 827)  |
+----------------------------------------------------+
|                                                   0 |
+----------------------------------------------------+
  • Once we have confirmed that all GTID transactions are applied, we can disable GTID permanently in the Read Replica parameter group.
gtid_mode = OFF
enforce_gtid_consistency = OFF
  • Then do the same on the Master RDS parameter group.
Disable GTID on EC2 Slave:
  1. Switch replication from auto position to binlog position.
STOP SLAVE;
change master to master_auto_position = 0;

2. Verify all the GTID transactions are applied. Run the below command. (replace the binlog filename and position with your output)

SELECT MASTER_POS_WAIT('mysql-bin-changelog.000040', 194);
  • If the result is 0, then we are fine. Otherwise, we need to wait for some time and run the same command again.
+----------------------------------------------------+
| MASTER_POS_WAIT('mysql-bin-changelog.000040', 194)  |
+----------------------------------------------------+
|                                                   0 |
+----------------------------------------------------+
  • Now, remove the GTID parameters from my.cnf:
vi /etc/mysql/mysql.conf.d/mysqld.cnf

# Remove the below lines:
gtid_mode = ON
enforce_gtid_consistency = ON

Best Practices (from my personal thoughts):
  • On Master: use the ON_PERMISSIVE GTID mode, since this will replicate both GTID and anonymous transactions.
  • On Slave: use GTID mode ON, because we need strong consistency.
  • Finally, use GTID only if it is necessary. When I tried to change the GTID mode frequently on the master node, it broke the replication.
  • Don’t try to replicate from MariaDB to MySQL. MariaDB has a different GTID implementation.
  • A few months back I read a blog post written by Jean-François Gagné in which he did anonymous transaction replication using a patched version of MySQL.

MySQL Adventures: GTID Replication In AWS RDS was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.

MySQL 2018 Community Reception

The 2018 MySQL Community Reception is October 23rd in a new venue at Samovar Tea, 730 Howard Street in San Francisco, at 7:00 PM, right in the heart of the Moscone Center activities for Oracle OpenWorld and Oracle Code One.

The MySQL Community Reception is not part of Oracle OpenWorld or Oracle Code One (you do not need a badge for either event) but you do need to RSVP.  Food, drinks, and a really amazing group of attendees!   And there will be more than tea to drink.

Plus we have a few new surprises this year! 

Generating Identifiers – from AUTO_INCREMENT to Sequence

There are a number of options for generating ID values for your tables. In this post, Alexey Mikotkin of Devart explores your choices for generating identifiers with a look at auto_increment, triggers, UUID and sequences.

AUTO_INCREMENT

Frequently, we happen to need to fill tables with unique identifiers. Naturally, the first example of such identifiers is PRIMARY KEY data. These are usually integer values hidden from the user since their specific values are unimportant.

When adding a row to a table, you need to take this new key value from somewhere. You can set up your own process of generating a new identifier, but MySQL comes to the aid of the user with the AUTO_INCREMENT column setting. It is set as a column attribute and allows you to generate unique integer identifiers. As an example, consider the users table, the primary key includes an id column of type INT:

CREATE TABLE users (
  id int NOT NULL AUTO_INCREMENT,
  first_name varchar(100) NOT NULL,
  last_name varchar(100) NOT NULL,
  email varchar(254) NOT NULL,
  PRIMARY KEY (id)
);

Inserting a NULL value into the id field leads to the generation of a unique value; inserting the value 0 is also possible unless the NO_AUTO_VALUE_ON_ZERO Server SQL Mode is enabled:

INSERT INTO users(id, first_name, last_name, email) VALUES (NULL, 'Simon', 'Wood', 'simon@testhost.com');
INSERT INTO users(id, first_name, last_name, email) VALUES (0, 'Peter', 'Hopper', 'peter@testhost.com');

It is possible to omit the id column. The same result is obtained with:

INSERT INTO users(first_name, last_name, email) VALUES ('Simon', 'Wood', 'simon@testhost.com');
INSERT INTO users(first_name, last_name, email) VALUES ('Peter', 'Hopper', 'peter@testhost.com');

The selection will provide the following result:

Select from users table shown in dbForge Studio

You can get the automatically generated value using the LAST_INSERT_ID() session function. This value can be used to insert a new row into a related table.
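As a brief illustration of that last point, here is a hedged sketch; the user_logins table is hypothetical and not part of the article's schema:

INSERT INTO users(first_name, last_name, email) VALUES ('Kate', 'Smith', 'kate@testhost.com');
-- LAST_INSERT_ID() returns the id generated by the INSERT above within the same session
INSERT INTO user_logins(user_id, login_name) VALUES (LAST_INSERT_ID(), 'ksmith');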

There are aspects to consider when using AUTO_INCREMENT, here are some:

  • In the case of rollback of a data insertion transaction, no data will be added to a table. However, the AUTO_INCREMENT counter will increase, and the next time you insert a row in the table, holes will appear in the table.
  • In the case of multiple data inserts with a single INSERT command, the LAST_INSERT_ID() function will return an automatically generated value for the first row.
  • The problem with the AUTO_INCREMENT counter value is described in Bug #199 – Innodb autoincrement stats los on restart.

For example, let’s consider several cases of using AUTO_INCREMENT for table1:

CREATE TABLE table1 (
  id int NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
) ENGINE = INNODB; -- transactional table

-- Insert operations.
INSERT INTO table1 VALUES (NULL); -- 1
INSERT INTO table1 VALUES (NULL); -- 2
INSERT INTO table1 VALUES (NULL); -- 3
SELECT LAST_INSERT_ID() INTO @p1; -- 3

-- Insert operations within committed transaction.
START TRANSACTION;
INSERT INTO table1 VALUES (NULL); -- 4
INSERT INTO table1 VALUES (NULL); -- 5
INSERT INTO table1 VALUES (NULL); -- 6
COMMIT;
SELECT LAST_INSERT_ID() INTO @p3; -- 6

-- Insert operations within rolled back transaction.
START TRANSACTION;
INSERT INTO table1 VALUES (NULL); -- 7 won't be inserted (hole)
INSERT INTO table1 VALUES (NULL); -- 8 won't be inserted (hole)
INSERT INTO table1 VALUES (NULL); -- 9 won't be inserted (hole)
ROLLBACK;
SELECT LAST_INSERT_ID() INTO @p2; -- 9

-- Insert multiple rows operation.
INSERT INTO table1 VALUES (NULL), (NULL), (NULL); -- 10, 11, 12
SELECT LAST_INSERT_ID() INTO @p4; -- 10

-- Let's check which LAST_INSERT_ID() values were at different stages of the script execution:
SELECT @p1, @p2, @p3, @p4;
+------+------+------+------+
| @p1  | @p2  | @p3  | @p4  |
+------+------+------+------+
|    3 |    9 |    6 |   10 |
+------+------+------+------+

-- The data selection from the table shows that there are holes in the table in the values of identifiers:
SELECT * FROM table1;
+----+
| id |
+----+
|  1 |
|  2 |
|  3 |
|  4 |
|  5 |
|  6 |
| 10 |
| 11 |
| 12 |
+----+

Note: The next AUTO_INCREMENT value for the table can be parsed from the SHOW CREATE TABLE result or read from the AUTO_INCREMENT field of the INFORMATION_SCHEMA.TABLES table.

The rarer case is when the primary key is composite — it consists of two columns. The MyISAM engine has an interesting solution that provides the possibility of generating values for such keys. Let’s consider the example:

CREATE TABLE roomdetails (
  room char(30) NOT NULL,
  id int NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (room, id)
) ENGINE = MYISAM;

INSERT INTO roomdetails VALUES ('ManClothing', NULL);
INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
INSERT INTO roomdetails VALUES ('Fitting', NULL);
INSERT INTO roomdetails VALUES ('ManClothing', NULL);

It is quite a convenient solution:

Special values auto generation

The possibilities of the AUTO_INCREMENT attribute are limited because it can be used only for generating simple integer values. But what about complex identifier values, for example ones depending on the date/time, or values like [A0001, A0002, B0150…]? To be sure, such values should not be used in primary keys, but they might be used for some auxiliary identifiers.

The generation of such unique values can be automated, but it will be necessary to write code for such purposes. We can use the BEFORE INSERT trigger to perform the actions we need.

Let’s consider a simple example. We have the sensors table for sensors registration. Each sensor in the table has its own name, location, and type: 1 – analog, 2 – discrete, 3 – valve. Moreover, each sensor should be marked with a unique label like [symbolic representation of the sensor type + a unique 4-digit number] where the symbolic representation corresponds to such values [AN, DS, VL].

In our case, it is necessary to form values like these [DS0001, DS0002…] and insert them into the label column.

When the trigger is executed, it is necessary to understand if any sensors of this type exist in the table. It is enough to assign number “1” to the first sensor of a certain type when it is added to the table.

In case such sensors already exist, it is necessary to find the maximum value of the identifier in this group and form a new one by incrementing the value by 1. Naturally, it is necessary to take into account that the label should start with the desired symbol and the number should be 4-digit.

So, here is the table and the trigger creation script:

CREATE TABLE sensors (
  id int NOT NULL AUTO_INCREMENT,
  type int NOT NULL,
  name varchar(255) DEFAULT NULL,
  `position` int DEFAULT NULL,
  label char(6) NOT NULL,
  PRIMARY KEY (id)
);

DELIMITER $$

CREATE TRIGGER trigger_sensors
BEFORE INSERT ON sensors
FOR EACH ROW
BEGIN
  IF (NEW.label IS NULL) THEN
    -- Find max existed label for specified sensor type
    SELECT MAX(label) INTO @max_label FROM sensors WHERE type = NEW.type;
    IF (@max_label IS NULL) THEN
      SET @label = CASE NEW.type
                     WHEN 1 THEN 'AN'
                     WHEN 2 THEN 'DS'
                     WHEN 3 THEN 'VL'
                     ELSE 'UNKNOWN'
                   END;
      -- Set first sensor label
      SET NEW.label = CONCAT(@label, '0001');
    ELSE
      -- Set next sensor label
      SET NEW.label = CONCAT(SUBSTR(@max_label, 1, 2), LPAD(SUBSTR(@max_label, 3) + 1, 4, '0'));
    END IF;
  END IF;
END$$

DELIMITER ;

The code for generating a new identifier can, of course, be more complex. In this case, it is desirable to implement some of the code as a stored procedure/function. Let’s try to add several sensors to the table and look at the result of the labels generation:

INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'temperature 1', 10, 'AN0025'); -- Set exact label value 'AN0025'
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'temperature 2', 11, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'pressure 1', 15, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 2, 'door 1', 10, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 2, 'door 2', 11, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 3, 'valve 1', 20, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 3, 'valve 2', 21, NULL);

Using UUID

Another version of the identification data is worth mentioning – Universal Unique Identifier (UUID), also known as GUID. This is a 128-bit number suitable for use in primary keys.

A UUID value can be represented as a string – CHAR(36)/VARCHAR(36) – or a binary value – BINARY(16).

Benefits:

  • Ability to generate values from the outside, for example from an application.
  • UUID values are unique across tables and databases since the standard assumes uniqueness in space and time.
  • There is a specification – A Universally Unique IDentifier (UUID) URN Namespace.

Disadvantages:

  • Possible performance problems.
  • Data increase.
  • More complex data analysis (debugging).

To generate this value, the MySQL function UUID() is used. New functions have been added to the Oracle MySQL 8.0 server to work with UUID values – UUID_TO_BIN, BIN_TO_UUID, IS_UUID. Learn more about them at the Oracle MySQL website – UUID()

The code shows the use of UUID values:

CREATE TABLE table_uuid (id binary(16) PRIMARY KEY);

INSERT INTO table_uuid VALUES(UUID_TO_BIN(UUID()));
INSERT INTO table_uuid VALUES(UUID_TO_BIN(UUID()));
INSERT INTO table_uuid VALUES(UUID_TO_BIN(UUID()));

SELECT BIN_TO_UUID(id) FROM table_uuid;
+--------------------------------------+
| BIN_TO_UUID(id)                      |
+--------------------------------------+
| d9008d47-cdf4-11e8-8d6f-0242ac11001b |
| d900e2b2-cdf4-11e8-8d6f-0242ac11001b |
| d9015ce9-cdf4-11e8-8d6f-0242ac11001b |
+--------------------------------------+

You may also find useful the following article – Store UUID in an optimized way.
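That article discusses reordering the time-based parts of a UUID for better index locality; in MySQL 8.0 the optional swap flag of UUID_TO_BIN/BIN_TO_UUID does this for you. A hedged sketch (use the same flag consistently on insert and select):

CREATE TABLE table_uuid_ordered (id binary(16) PRIMARY KEY);
INSERT INTO table_uuid_ordered VALUES(UUID_TO_BIN(UUID(), 1));
SELECT BIN_TO_UUID(id, 1) FROM table_uuid_ordered;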

Using sequences

Some databases support the object type called Sequence that allows generating sequences of numbers. The Oracle MySQL server does not support this object type yet but the MariaDB 10.3 server has the Sequence engine that allows working with the Sequence object.

The Sequence engine provides DDL commands for creating and modifying sequences as well as several auxiliary functions for working with the values. It is possible to specify the following parameters while creating a named sequence: START – a start value, INCREMENT – a step, MINVALUE/MAXVALUE – the minimum and maximum value; CACHE – the size of the cache values; CYCLE/NOCYCLE – the sequence cyclicity. For more information, see the CREATE SEQUENCE documentation.
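As a hedged illustration of those parameters (MariaDB 10.3; the sequence name and values are arbitrary):

CREATE SEQUENCE invoice_seq START WITH 1000 INCREMENT BY 10 MAXVALUE 999999 CACHE 20 NOCYCLE;
SELECT NEXT VALUE FOR invoice_seq; -- 1000
SELECT NEXT VALUE FOR invoice_seq; -- 1010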

Moreover, the sequence can be used to generate unique numeric values.  This possibility can be considered as an alternative to AUTO_INCREMENT but the sequence additionally provides an opportunity to specify a step of the values. Let’s take a look at this example by using the users table. The sequence object users_seq will be used to fill the values of the primary key. It is enough to specify the NEXT VALUE FOR function in the DEFAULT property of the column:

CREATE SEQUENCE users_seq;

CREATE TABLE users (
  id int NOT NULL DEFAULT (NEXT VALUE FOR users_seq),
  first_name varchar(100) NOT NULL,
  last_name varchar(100) NOT NULL,
  email varchar(254) NOT NULL,
  PRIMARY KEY (id)
);

INSERT INTO users (first_name, last_name, email) VALUES ('Simon', 'Wood', 'simon@testhost.com');
INSERT INTO users (first_name, last_name, email) VALUES ('Peter', 'Hopper', 'peter@testhost.com');

Table content output:

Information

The images for this article were produced while using dbForge Studio for MySQL Express Edition, a download is available from https://www.devart.com/dbforge/mysql/studio/dbforgemysql80exp.exe

It’s free!

 

Thank you to community reviewer Jean-François Gagné for his review and suggestions for this post.

The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.

The post Generating Identifiers – from AUTO_INCREMENT to Sequence appeared first on Percona Community Blog.

Mydbops Delighted to be part of Open Source India -2018

Mydbops has partnered with OSI Days for the second consecutive year. OSI Days is one of Asia's leading open source conferences.

Presentations on MySQL 

Topic        : Evolution of DBA’s in Cloud

Presenter : Manosh Malai, Senior DevOps / DB Consultant, Mydbops

                   Kabilesh P R, Co-Founder / DB Consultant, Mydbops

 

As the Cloud is now more widely adopted by the industry, DBAs should focus on ramping up their skills in core optimisation and in designing more scalable databases. Our consultants emphasise the role of the DBA in a cloud environment and share their experience in handling large-scale systems.

Topic : MySQL 8.0 = NoSQL + SQL

Presenter : Tomas Ulin, Vice President MySQL Engineering Oracle

Topic : High Availability framework for MySQL with Semi-Synchronous replication

Presenter : Prasad Nagaraj, VP Engineering, ScaleGrid

 

 

Essential Cluster Monitoring Using Nagios and NRPE

In a previous post we went into detail about how to implement Tungsten-specific checks. In this post we will focus on the other standard Nagios checks that would help keep your cluster nodes healthy.

Your database cluster contains your most business-critical data. The slave nodes must be online, healthy and in sync with the master in order to be viable failover candidates.

This means keeping a close watch on the health of the databases nodes from many perspectives, from ensuring sufficient disk space to testing that replication traffic is flowing.

A robust monitoring setup is essential for cluster health and viability – if your replicator goes offline and you do not know about it, then that slave becomes effectively useless because it has stale data.

Nagios Checks: The Power of Persistence

One of the best (and also the worst) things about Nagios is the built-in nagging – it just screams for attention until you pay attention to it.

The Nagios server uses services.cfg, which defines a service that calls the check_nrpe binary with at least one argument – the name of the check to execute on the remote host.

Once on the remote host, the NRPE daemon processes the request from the Nagios server, comparing the check name sent by the Nagios server request with the list of defined commands in the /etc/nagios/nrpe.cfg file. If a match is found, the command is executed by the nrpe user. If different privileges are needed, then sudo must be employed.

Prerequisites: Before you can use these examples

This is NOT a Nagios tutorial as such, although we present configuration examples for the Nagios framework. You will need to already have the following:

  • Nagios server installed and fully functional
  • NRPE installed and fully functional on each cluster node you wish to monitor

Please note that installing and configuring Nagios and NRPE in your environment is not covered in this article.

Teach the Targets: Tell NRPE on the Database Nodes What To Do

The NRPE commands are defined in the /etc/nagios/nrpe.cfg file on each monitored database node. We will discuss three NRPE plugins called by the defined commands: check_disk, check_mysql and check_mysql_query.

First, let’s ensure that we do not fill up our disk space using the check_disk plugin by defining two custom commands, each calling check_disk to monitor a different disk partition:

command[check_root]=/usr/lib64/nagios/plugins/check_disk -w 20 -c 10 -p /
command[check_disk_data]=/usr/lib64/nagios/plugins/check_disk -w 20 -c 10 -p /volumes/data

Next, let’s validate that we are able to login to mysql directly, bypassing the connector by using port 13306, and using the check_mysql plugin by defining a custom command also called check_mysql:

command[check_mysql]=/usr/lib64/nagios/plugins/check_mysql -H localhost -u nagios -p secret -P 13306

If there is a connector running on that node, you may run the same test to validate that we are able to login through the connector by using port 3306 and the check_mysql plugin by defining a custom command called check_mysql_connector:

command[check_mysql_connector]=/usr/lib64/nagios/plugins/check_mysql -H localhost -u nagios -p secret -P 3306

Finally, you may run any MySQL query you wish to validate further, normally via the local MySQL port 13306 to ensure that the check is testing the local host:

command[check_mysql_query]=/usr/lib64/nagios/plugins/check_mysql_query -q 'select mydatacolumn from nagios.test_data' -H localhost -u nagios -p secret -P 13306

Here are some other example commands you may define that are not Tungsten-specific:

command[check_total_procs]=/usr/lib64/nagios/plugins/check_procs -w 150 -c 200
command[check_users]=/usr/lib64/nagios/plugins/check_users -w 15 -c 25
command[check_load]=/usr/lib64/nagios/plugins/check_load -w 5,4,3 -c 6,5,4
command[check_procs]=/usr/lib64/nagios/plugins/check_procs -w 150 -c 200
command[check_zombie_procs]=/usr/lib64/nagios/plugins/check_procs -w 5 -c 10 -s Z

Additionally, there is no harm in defining commands that may not be called, which allows for simple administration – keep the master copy in one place and then just push updates to all nodes as needed then restart nrpe.
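Since the nrpe.cfg files are identical across nodes, a hedged sketch of pushing a master copy and restarting the daemon might look like this (the host list and the service name are assumptions; on Debian-based systems the service is typically nagios-nrpe-server):

for h in db1 db2 db3 db4 db5 db6 db7 db8 db9; do
    scp /etc/nagios/nrpe.cfg ${h}:/etc/nagios/nrpe.cfg
    ssh ${h} 'sudo systemctl restart nrpe'
done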

Big Brother Sees You: Tell the Nagios server to begin watching

Here are the service check definitions for the /opt/local/etc/nagios/objects/services.cfg file:

# Service definition
define service{
        service_description     Root partition - Continuent Clustering
        servicegroups           myclusters
        host_name               db1,db2,db3,db4,db5,db6,db7,db8,db9
        check_command           check_nrpe!check_root
        contact_groups          admin
        use                     generic-service
        }

# Service definition
define service{
        service_description     Data partition - Continuent Clustering
        servicegroups           myclusters
        host_name               db1,db2,db3,db4,db5,db6,db7,db8,db9
        check_command           check_nrpe!check_disk_data
        contact_groups          admin
        use                     generic-service
        }

# Service definition
define service{
        service_description     mysql local login - Continuent Clustering
        servicegroups           myclusters
        host_name               db1,db2,db3,db4,db5,db6,db7,db8,db9
        contact_groups          admin
        check_command           check_nrpe!check_mysql
        use                     generic-service
        }

# Service definition
define service{
        service_description     mysql login via connector - Continuent Clustering
        servicegroups           myclusters
        host_name               db1,db2,db3,db4,db5,db6,db7,db8,db9
        contact_groups          admin
        check_command           check_nrpe!check_mysql_connector
        use                     generic-service
        }

# Service definition
define service{
        service_description     mysql local query - Continuent Clustering
        servicegroups           myclusters
        host_name               db1,db2,db3,db4,db5,db6,db7,db8,db9
        contact_groups          admin
        check_command           check_nrpe!check_mysql_query
        use                     generic-service
        }

NOTE: You must also add all of the hosts into the /opt/local/etc/nagios/objects/hosts.cfg file.

Let’s Get Practical: How to test the remote NRPE calls from the command line

The best way to ensure things are working well is to divide and conquer. My favorite approach is to use the check_nrpe binary on the command line from the Nagios server to make sure that the call(s) to the remote monitored node(s) succeed long before I configure the Nagios server daemon and start getting those evil text messages and emails.

To test a remote NRPE client command from a nagios server via the command line, use the check_nrpe command:

shell> /opt/local/libexec/nagios/check_nrpe -H db1 -c check_disk_data
DISK OK - free space: /volumes/data 40234 MB (78% inode=99%);| /volumes/data=10955MB;51170;51180;0;51190

The above command calls the NRPE daemon running on host db1 and executes the NRPE command “check_disk_data” as defined in the db1:/etc/nagios/nrpe.cfg file.

The Wrap-Up: Put it all together and sleep better knowing your Continuent Cluster is under constant surveillance

Once your tests are working and your Nagios server config files have been updated, just restart the Nagios server daemon and you are on your way!

Tuning the values in the nrpe.cfg file may be required for optimal performance, as always, YMMV.

To learn about Continuent solutions in general, check out https://www.continuent.com/solutions

For more information about monitoring Continuent clusters, please visit https://docs.continuent.com/tungsten-clustering-6.0/ecosystem-nagios.html.

Continuent Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us.

Deploying MySQL on Kubernetes with a Percona-based Operator

In the context of providing managed WordPress hosting services, at Presslabs we operate with lots of small to medium-sized databases, in a DB-per-service model, as we call it. The workloads are mostly reads, so we need to efficiently scale that. The MySQL® asynchronous replication model fits the bill very well, allowing us to scale horizontally from one server—with the obvious availability pitfalls—to tens of nodes. The next release of the stack is going to be open-sourced.

As we were already using Kubernetes, we were looking for an operator that could automate our DB deployments and auto-scaling. Those available were doing synchronous replication using MySQL group replication or Galera-based replication. Therefore, we decided to write our own operator.

Solution architecture

The MySQL operator, released under Apache 2.0 license, is based on Percona Server for MySQL for its operational improvements —like utility user and backup locks—and relies on the tried and tested Orchestrator to do the automatic failovers. We’ve been using Percona Server in production for about four years, with very good results, thus encouraging us to continue implementing it in the operator as well.

The MySQL Operator-Orchestrator integration is highly important for topology, as well as for cluster healing and system failover. Orchestrator is a MySQL high availability and replication management tool that was developed and open-sourced by GitHub.

As we’re writing this, the operator is undergoing a full rewrite to implement the operator using the Kubebuilder framework, which is a pretty logical step to simplify and standardize the operator to make it more readable to contributors and users.

Aims for the project

We’ve built the MySQL operator with several considerations in mind, generated by the needs that no other operator could satisfy at the time we started working on it, last year.

Here are some of them:

  • Easily deployable MySQL clusters in Kubernetes, following the cluster-per-service model
  • DevOps-friendly, covering basic operational needs such as monitoring, availability, scalability, and backups
  • Out-of-the-box backups, scheduled or on-demand, and point-in-time recovery
  • Support for cloning, both inside a cluster and across clusters

It’s good to know that the MySQL operator is now in beta and can be tested in production workloads. Take it for a spin and decide for yourself; we’re already successfully using it for a part of our production workloads at Presslabs, for our customer dashboard services.

Going further to some more practical info, we’ve successfully installed and tested the operator on AWS, Google Cloud Platform, and Microsoft Azure, and we’ve covered the step-by-step process in three tutorials here.

Set up and configuration

It’s fairly simple to use the operator. The prerequisites are the ubiquitous Helm and kubectl.

The first step is to install the controller. Two commands should be run, to make use of the Helm chart bundled in the operator:

$ helm repo add presslabs https://presslabs.github.io/charts
$ helm install presslabs/mysql-operator --name mysql-operator

These commands will deploy the controller together with an Orchestrator cluster.
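If you want to confirm the deployment before moving on, a quick look at the pods should show the controller and the Orchestrator replicas starting up; the exact pod names vary per release, so a simple grep is enough here:

$ kubectl get pods | grep mysql-operator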

The configuration parameters of the Helm chart for the operator and its default values are as follows:

Parameter | Description | Default value
replicaCount | replicas for controller | 1
image | controller container image | quay.io/presslabs/mysql-operator:v0.1.5
imagePullPolicy | controller image pull policy | IfNotPresent
helperImage | mysql helper image | quay.io/presslabs/mysql-helper:v0.1.5
installCRDs | whether or not to install CRDS | true
resources | controller pod resources | {}
nodeSelector | controller pod nodeSelector | {}
tolerations | controller pod tolerations | {}
affinity | controller pod affinity | {}
extraArgs | args that are passed to controller | []
rbac.create | whether or not to create rbac service account, role and roleBinding | true
rbac.serviceAccountName | if rbac.create is false then this service account is used | default
orchestrator.replicas | Control Orchestrator replicas | 3
orchestrator.image | Orchestrator container image | quay.io/presslabs/orchestrator:latest

 

Further Orchestrator values can be tuned by checking the values.yaml config file.
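For instance, a sketch of overriding a couple of the values listed above on an existing release (the specific values chosen here are purely illustrative):

$ helm upgrade mysql-operator presslabs/mysql-operator \
    --set orchestrator.replicas=5 \
    --set imagePullPolicy=Always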

Cluster deployment

The next step is to deploy a cluster. For this you need to create a Kubernetes Secret that contains the MySQL credentials (root password, database name, user name, user password) used to initialize the cluster, and a MysqlCluster custom resource, as you can see below:

An example of a secret (example-cluster-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  ROOT_PASSWORD: # root password, base_64 encoded

An example of simple cluster (example-cluster.yaml):

apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2
  secretName: my-secret
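A minimal sketch of filling in and applying the two manifests above, assuming the file names used in this post and a throwaway password:

$ echo -n 'not-so-secure' | base64     # paste the output into ROOT_PASSWORD
$ kubectl apply -f example-cluster-secret.yaml
$ kubectl apply -f example-cluster.yaml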

The usual kubectl commands can be used to do various operations, such as a basic listing:

$ kubectl get mysql

or detailed cluster information:

$ kubectl describe mysql my-cluster

Backups

A further step could be setting up backups on an object storage service. Creating a backup is as simple as creating a MysqlBackup resource, as shown in this example (example-backup.yaml):

apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlBackup
metadata:
  name: my-cluster-backup
spec:
  clusterName: my-cluster
  backupUri: gs://bucket_name/path/to/backup.xtrabackup.gz
  backupSecretName: my-cluster-backup-secret

To provide credentials for a storage service, you have to create a secret and specify your credentials to your provider; we currently support AWS, GCS or HTTP as in this example (example-backup-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-backup-secret
type: Opaque
data:
  # AWS
  AWS_ACCESS_KEY_ID: # add here your key, base_64 encoded
  AWS_SECRET_KEY: # and your secret, base_64 encoded
  # or Google Cloud, base_64 encoded
  # GCS_SERVICE_ACCOUNT_JSON_KEY: # your key, base_64 encoded
  # GCS_PROJECT_ID: # your ID, base_64 encoded
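To actually run the backup you would apply both manifests and then watch the resource; a sketch under the same file-name assumptions is below (the resource name accepted by kubectl get depends on how the CRD registers its plural):

$ kubectl apply -f example-backup-secret.yaml
$ kubectl apply -f example-backup.yaml
$ kubectl get mysqlbackup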

Recurrent cluster backups and cluster initialization from a backup are additional operations you can opt for. For more details, head over to our documentation page.

Further operations and new usage information are kept up-to-date on the project homepage.

Our future plans include developing the MySQL operator and integrating it with Percona Monitoring and Management for better exposing the internals of the Kubernetes DB cluster.

Open source community

Community contributions are highly appreciated. We should mention the pull requests from Platform9 so far, as well as the sharp questions on the Gitter channel we’ve opened, which we do our best to answer in detail, and the issue reports from early users of the operator.

Come and talk to us about the project

Along with my colleague Calin Don, I’ll be talking about this at Percona Live Europe in November. It would be great to have the chance to meet other enthusiasts and talk about what we’ve discovered so far!

The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.

The post Deploying MySQL on Kubernetes with a Percona-based Operator appeared first on Percona Community Blog.

How to Fix ProxySQL Configuration When it Won’t Start

With the exception of the three configuration variables described here, ProxySQL will only parse the configuration files the first time it is started, or if the proxysql.db file is missing for some other reason.

If we want to change any of this data, we need to do so via ProxySQL’s admin interface and then save the changes to disk. That’s fine if ProxySQL is running, but what if it won’t start because of these values?
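For context, the normal workflow on a running ProxySQL looks roughly like the sketch below, assuming the default admin credentials and admin port; note that mysql-interfaces in particular is only read at startup, so even after saving it to disk a restart is needed for it to take effect:

$ mysql -u admin -padmin -h 127.0.0.1 -P6032
mysql> UPDATE global_variables SET variable_value='127.0.0.1:6033' WHERE variable_name='mysql-interfaces';
mysql> SAVE MYSQL VARIABLES TO DISK;   -- most other variables can also be applied live with LOAD MYSQL VARIABLES TO RUNTIME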

For example, perhaps we accidentally configured ProxySQL to run on port 3306 and restarted it, but there’s already a production MySQL instance running on this port. ProxySQL won’t start, so we can’t edit the value that way:

2018-10-02 09:18:33 network.cpp:53:listen_on_port(): [ERROR] bind(): Address already in use

We could delete proxysql.db and have it reload the configuration files, but that would mean any changes we didn’t mirror into the configuration files will be lost.

Another option is to edit ProxySQL’s database file using sqlite3:

[root@centos7-pxc57-4 ~]# cd /var/lib/proxysql/
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';
mysql-interfaces|127.0.0.1:3306
sqlite> UPDATE global_variables SET variable_value='127.0.0.1:6033' WHERE variable_name='mysql-interfaces';
sqlite> SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';
mysql-interfaces|127.0.0.1:6033

Or if we have a few edits to make we may prefer to do so with a text editor:

[root@centos7-pxc57-4 ~]# cd /var/lib/proxysql/
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> .output /tmp/global_variables
sqlite> .dump global_variables
sqlite> .exit

The above commands will dump the global_variables table into a file in SQL format, which we can then edit:

[root@centos7-pxc57-4 proxysql]# grep mysql-interfaces /tmp/global_variables
INSERT INTO "global_variables" VALUES('mysql-interfaces','127.0.0.1:3306');
[root@centos7-pxc57-4 proxysql]# vim /tmp/global_variables
[root@centos7-pxc57-4 proxysql]# grep mysql-interfaces /tmp/global_variables
INSERT INTO "global_variables" VALUES('mysql-interfaces','127.0.0.1:6033');

Now we need to restore this data. We’ll use the .restore command to empty the table (as we’re restoring from a missing backup) and then read the edited dump back in:

[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> .restore global_variables
sqlite> .read /tmp/global_variables
sqlite> .exit

Once we’ve made the change, we should be able to start ProxySQL again:

[root@centos7-pxc57-4 proxysql]# /etc/init.d/proxysql start
Starting ProxySQL: DONE!
[root@centos7-pxc57-4 proxysql]# lsof -i | grep proxysql
proxysql 15171 proxysql 19u IPv4 265881 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 20u IPv4 265882 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 21u IPv4 265883 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 22u IPv4 265884 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 23u IPv4 266635 0t0 TCP *:6032 (LISTEN)

While you are here

You might enjoy my recent post Using ProxySQL to connect to IPV6-only databases over IPV4

You can download ProxySQL from Percona repositories, and you might also want to check out our recorded webinars that feature ProxySQL too.

MySQL TDE: Online key store migration

So, if we’re applying GDPR to our system, and we’re already making use of MySQL Transparent Data Encryption / keyring, then here’s an example of how to migrate from the file-based keyring to the encrypted keyring. Online.

If you’re looking to go deeper into the TDE then I suggest reading the MySQL Server Team’s InnoDB Transparent Tablespace Encryption blog.

You’d already have your environment running, whereas I have to create one... give me a minute please: 8.0.12, here we come:

mysqld --defaults-file=my_okv.cnf --initialize-insecure --user=khollman
mysqld --defaults-file=my_okv.cnf --user=khollman &
mysql --defaults-file=my_okv.cnf -uroot

show plugins;
show variables like 'keyring%';
alter user 'root'@'localhost' identified by 'oracle';
create database nexus;
create table nexus.replicant (
  id INT(11) NOT NULL AUTO_INCREMENT,
  `First name` varchar(40) not null default '',
  `Last name` varchar(40) not null default '',
  `Replicant` enum('Yes','No') not null default 'Yes',
  PRIMARY KEY (id)
) engine=InnoDB row_format=COMPACT ENCRYPTION = 'Y';
INSERT INTO nexus.`replicant` (`First name`,`Last name`,`Replicant`) VALUES
  ('Roy','Hauer','Yes'), ('Rutger','Batty','Yes'), ('Voight','Kampff','Yes'),
  ('Pris','Hannah','Yes'), ('Daryl','Stratton','Yes'), ('Rachael','Young','Yes'),
  ('Sean','Tyrell','Yes'), ('Rick','Ford','No'), ('Harrison','Deckard','Yes');
select * from nexus.replicant;

Now we have an environment using the keyring file-based TDE.
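If you want to double-check which tables are flagged for encryption at this point, a query along these lines (my own sanity check, not part of the original walkthrough) lists them via information_schema:

SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS
  FROM information_schema.TABLES
 WHERE CREATE_OPTIONS LIKE '%ENCRYPTION%';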

Before migrating the key store, there are a few things we need to be aware of, as well as reading the manual on this topic:

  • mysqld. Yes, we start up another mysqld process, but it’s not a fully functioning server, far from it. It is just a means to migrate the keys from the old file-based keyring to the new encrypted file. So don’t worry about the defaults-file, the innodb_xxxx params nor anything else. We actually need to reuse the existing datadir.
  • datadir. As just mentioned, don’t try to use another datadir, as the process won’t find any files there to encrypt with the new key and won’t be successful. Use the existing online server’s datadir. (Of course, I recommend this process be run in a non-production test environment first!)
  • -source & -destination. I think these are quite obvious: the keyring plugin we’re coming from, and the one we’re going to.
  • keyring_file_data is the existing file-based keyring being used.
  • keyring_encrypted_file_data & _password are the new encrypted keyring file, stored in this location, and the password that protects it.
  • keyring-migration- params. We need to connect to the existing instance with super user privileges. As it’s local to the instance, we can use --keyring-migration-socket.

 

mysqld --basedir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64 \
 --plugin-dir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/lib/plugin \
 --lc_messages_dir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/share \
 --datadir=/opt/mysql/okv/data \
 --keyring-migration-source=keyring_file.so \
 --keyring_file_data=/opt/mysql/okv/keyring \
 --keyring-migration-destination=keyring_encrypted_file.so \
 --keyring_encrypted_file_data=/opt/mysql/okv/keyring_enc \
 --keyring_encrypted_file_password=oracle2018 \
 --keyring-migration-socket=/opt/mysql/okv/mysql.socket \
 --keyring-migration-user=root \
 --keyring-migration-password=oracle

And if, and only if, the migration is successful, you should see output like the following. Anything else, i.e. if no output comes back or some of the lines don’t appear in your scenario, double check the parameters in the previous command, as it’s more than likely that something there is preventing a successful key migration:

2018-10-08T11:26:22.227161Z 0 [Note] [MY-010098] [Server] --secure-file-priv is set to NULL. Operations related to importing and exporting data are disabled
2018-10-08T11:26:22.227219Z 0 [Note] [MY-010949] [Server] Basedir set to /usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/.
2018-10-08T11:26:22.227226Z 0 [System] [MY-010116] [Server] mysqld (mysqld 8.0.12-commercial) starting as process 13758
2018-10-08T11:26:22.254234Z 0 [Note] [MY-011085] [Server] Keyring migration successful.
2018-10-08T11:26:22.254381Z 0 [Note] [MY-010120] [Server] Binlog end
2018-10-08T11:26:22.254465Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_encrypted_file'
2018-10-08T11:26:22.254642Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_file'
2018-10-08T11:26:22.255757Z 0 [System] [MY-010910] [Server] mysqld: Shutdown complete (mysqld 8.0.12-commercial)  MySQL Enterprise Server - Commercial.

Migrated.

 

To make sure the instance has the new parameters in the defaults file, and before any risk of restarting the instance, we’ll need to add the new ‘encrypted’ params to the my.cnf:

[mysqld]
plugin_dir                      =/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/lib/plugin
#early-plugin-load              =keyring_file.so
#keyring_file_data              =/opt/mysql/okv/keyring
early-plugin-load               =keyring_encrypted_file.so
keyring_encrypted_file_data     =/opt/mysql/okv/keyring_enc
keyring_encrypted_file_password =oracle2018
...

 

And upon the next most convenient / least inconvenient moment, restart the instance:

mysqladmin --defaults-file=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/my_okv.cnf -uroot -poracle shutdown
mysqld --defaults-file=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/my_okv.cnf --user=khollman &

And let’s double check which keyring plugin we’re using:

select * from information_schema.plugins where plugin_name like '%keyring%' \G
*************************** 1. row ***************************
           PLUGIN_NAME: keyring_encrypted_file
        PLUGIN_VERSION: 1.0
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: KEYRING
   PLUGIN_TYPE_VERSION: 1.1
        PLUGIN_LIBRARY: keyring_encrypted_file.so
PLUGIN_LIBRARY_VERSION: 1.9
         PLUGIN_AUTHOR: Oracle Corporation
    PLUGIN_DESCRIPTION: store/fetch authentication data to/from an encrypted file
        PLUGIN_LICENSE: PROPRIETARY
           LOAD_OPTION: ON
1 row in set (0,00 sec)

And also that we can select the data from the encrypted tablespace:

select * from nexus.replicant;
+----+------------+-----------+-----------+
| id | First name | Last name | Replicant |
+----+------------+-----------+-----------+
|  1 | Roy        | Hauer     | Yes       |
|  2 | Rutger     | Batty     | Yes       |
|  3 | Voight     | Kampff    | Yes       |
|  4 | Pris       | Hannah    | Yes       |
|  5 | Daryl      | Stratton  | Yes       |
|  6 | Rachael    | Young     | Yes       |
|  7 | Sean       | Tyrell    | Yes       |
|  8 | Rick       | Ford      | No        |
|  9 | Harrison   | Deckard   | Yes       |
+----+------------+-----------+-----------+
9 rows in set (0,00 sec)

 

Seems quite straightforward.

Well, how about, in a test environment, changing the keyring_encrypted_file_password value to something different, restarting the instance, and running the same select on the same table?

Hey presto:

select * from nexus.replicant;
ERROR 3185 (HY000): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully.
Error (Code 3185): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully.
Error (Code 1877): Operation cannot be performed. The table 'nexus.replicant' is missing, corrupt or contains bad data.

 

Hope this helps someone out there. Enjoy encrypting!

Now we can run encrypted backups safely and not worry about moving those files around between different systems.


Percona Live 2019 – Save the Date!

After much speculation following the announcement in Santa Clara earlier this year, we are delighted to announce Percona Live 2019 will be taking place in Austin, Texas.

Save the dates in your diary: May 28-30, 2019!

The conference will take place just after Memorial Day at The Hyatt Regency, Austin on the shores of Lady Bird Lake.

This is also an ideal central location for those who wish to extend their stay and explore what Austin has to offer! Call for papers, ticket sales and sponsorship opportunities will be announced soon, so stay tuned!

In other Percona Live news, we’re less than 4 weeks away from this year’s European conference taking place in Frankfurt, Germany on 5-7 November. The tutorials and breakout sessions have been announced, and you can view the full schedule here. Tickets are still on sale so don’t miss out, book yours here today!

 

Register Now for Percona Live Europe 2018

 

FOSDEM’19 MySQL, MariaDB & Friends DEVROOM Call for Papers is now open !

Good news! The MySQL, MariaDB & Friends Devroom has been accepted for FOSDEM’19, as announced earlier!

This event is a real success story for the MySQL ecosystem; the content, the speakers and the attendees are growing every year.

The first big change for this 2019 edition is that the MariaDB Foundation (Ian) is joining my efforts to build this Devroom. Don’t forget that FOSDEM takes place in Belgium, and our motto is “l’Union fait la Force” [“Unity is Strength”].

FOSDEM 2019 will take place 2nd & 3rd February in Brussels, and our MySQL, MariaDB & Friends devroom will run on Saturday the 2nd (this may change). FOSDEM & MySQL/MariaDB is a love story that started 19 years ago!

The committee that will select the content for our devroom has not yet been created, and if you want to be part of this experience, just send me an email (candidate at mysqlmariadbandfriends dot eu) before Oct 29th.

If you want to join the committee, you have to meet the following conditions:

  • plan to be present at FOSDEM
  • have a link with the MySQL & MariaDB ecosystem
  • have some time to review and rate talks
  • be an ambassador for the event by promoting it

The Call for Papers is now officially open and ends November 15th. You can submit now() your proposal using FOSDEM’s submission tool.

Marketing and sales speeches are not welcome; focus on the engineering, the operations and, of course, the developers.

Don’t forget to specify the track (MySQL, MariaDB and Friends devroom) and set the duration to 20 mins (short but intense ;-) )!

Thank you, and see you soon in Brussels !

lefred

Percona Monitoring and Management (PMM) 1.15.0 Is Now Available

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

This release offers two new features for both the MySQL Community and Percona Customers:

  • MySQL Custom Queries – Turn a SELECT into a dashboard!
  • Server and Client logs – Collect troubleshooting logs for Percona Support

We addressed 17 new features and improvements, and fixed 17 bugs.

MySQL Custom Queries

In 1.15 we are introducing the ability to take a SQL SELECT statement and turn the result set into metric series in PMM.  The queries are executed at the LOW RESOLUTION level, which by default is every 60 seconds.  A key advantage is that you can extend PMM to profile metrics unique to your environment (see the users table example), or to introduce support for a table that isn’t part of PMM yet. This feature is on by default and only requires that you edit the configuration file and use valid YAML syntax.  The configuration file is /usr/local/percona/pmm-client/queries-mysqld.yml.
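Since invalid YAML is the most likely stumbling block, a quick syntax check from the shell can save a round trip; this is just a convenience sketch and assumes Python with the PyYAML module is available on the client host:

$ python -c "import yaml; yaml.safe_load(open('/usr/local/percona/pmm-client/queries-mysqld.yml')); print('YAML OK')"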

Example – Application users table

We’re going to take a fictional MySQL users table that also tracks the number of upvotes and downvotes, and we’ll convert this into two metric series, with a set of seven labels, where each label can also store a value.

Browsing metrics series using Advanced Data Exploration Dashboard

Let’s look at the output first so we understand the goal: take data from a MySQL table, store it in PMM, then display it as a metric series. Using the Advanced Data Exploration Dashboard you can review your metric series. Exploring the metric series app1_users_metrics_downvotes, we can see the values collected from the users table.

MySQL table

Let’s assume you have the following users table that includes true/false, string, and integer types.

SELECT * FROM `users`
+----+------+--------------+-----------+------------+-----------+---------------------+--------+---------+-----------+
| id | app  | user_type    | last_name | first_name | logged_in | active_subscription | banned | upvotes | downvotes |
+----+------+--------------+-----------+------------+-----------+---------------------+--------+---------+-----------+
|  1 | app2 | unprivileged | Marley    | Bob        |         1 |                   1 |      0 |     100 |        25 |
|  2 | app3 | moderator    | Young     | Neil       |         1 |                   1 |      1 |     150 |        10 |
|  3 | app4 | unprivileged | OConnor   | Sinead     |         1 |                   1 |      0 |      25 |        50 |
|  4 | app1 | unprivileged | Yorke     | Thom       |         0 |                   1 |      0 |     100 |       100 |
|  5 | app5 | admin        | Buckley   | Jeff       |         1 |                   1 |      0 |     175 |         0 |
+----+------+--------------+-----------+------------+-----------+---------------------+--------+---------+-----------+

Explaining the YAML syntax

We’ll go through a simple example and mention what’s required for each line.  The metric series name is constructed from the leading section name with the column name appended, so the number of metric series per table will be the count of columns that are of type GAUGE or COUNTER.  This metric series will be called app1_users_metrics_downvotes:

app1_users_metrics:                        ## leading section of your metric series.
  query: "SELECT * FROM app1.users"        ## Your query. Don't forget the schema name.
  metrics:                                 ## Required line to start the list of metric items
    - downvotes:                           ## Name of the column returned by the query. Will be appended to the metric series.
        usage: "COUNTER"                   ## Column value type. COUNTER will make this a metric series.
        description: "Number of downvotes" ## Helpful description of the column.

Full queries-mysqld.yml example

Each column in the SELECT is named in this example, but that isn’t required; you could use SELECT * as well.  Notice that the schema.table format is used in the query.

---
app1_users_metrics:
  query: "SELECT app,user_type,first_name,last_name,logged_in,active_subscription,banned,upvotes,downvotes FROM app1.users"
  metrics:
    - app:
        usage: "LABEL"
        description: "Name of the Application"
    - user_type:
        usage: "LABEL"
        description: "User's privilege level within the Application"
    - first_name:
        usage: "LABEL"
        description: "User's First Name"
    - last_name:
        usage: "LABEL"
        description: "User's Last Name"
    - logged_in:
        usage: "LABEL"
        description: "User's logged in or out status"
    - active_subscription:
        usage: "LABEL"
        description: "Whether User has an active subscription or not"
    - banned:
        usage: "LABEL"
        description: "Whether user is banned or not"
    - upvotes:
        usage: "COUNTER"
        description: "Count of upvotes the User has earned. Upvotes once granted cannot be revoked, so the number can only increase."
    - downvotes:
        usage: "GAUGE"
        description: "Count of downvotes the User has earned. Downvotes can be revoked so the number can increase as well as decrease."
...

We hope you enjoy this feature, and we welcome your feedback via the Percona forums!

Server and Client logs

We’ve enhanced the volume of data collected from both the Server and Client perspectives.  Each service provides a set of files designed to be shared with Percona Support while you work on an issue.

Server

From the Server, we’ve improved the logs.zip service to include:

  • Prometheus targets
  • Consul nodes, QAN API instances
  • Amazon RDS and Aurora instances
  • Version
  • Server configuration
  • Percona Toolkit commands

You retrieve the link from your PMM server using this format: https://pmmdemo.percona.com/managed/logs.zip
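For example, to pull the archive from your own server on the command line (the server address is a placeholder, and --insecure is only needed if you use a self-signed certificate):

$ curl --insecure -o pmm-server-logs.zip https://<your-pmm-server>/managed/logs.zip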

Client

On the Client side we’ve added a new action called summary which fetches logs, network, and Percona Toolkit output in order to share with Percona Support. To initiate a Client side collection, execute:

pmm-admin summary

The output will be a file you can use to attach to your Support ticket.  The single file will look something like this:

summary__2018_10_10_16_20_00.tar.gz

New Features and Improvements
  • PMM-2913 – Provide ability to execute Custom Queries against MySQL – Credit to wrouesnel for the framework of this feature in wrouesnel/postgres_exporter!
  • PMM-2904 – Improve PMM Server Diagnostics for Support
  • PMM-2860 – Improve pmm-client Diagnostics for Support
  • PMM-1754 – Provide functionality to easily select query and copy it to clipboard in QAN
  • PMM-1855 – Add swap to AMI
  • PMM-3013 – Rename PXC Overview graph Sequence numbers of transactions to IST Progress
  • PMM-2726 – Abort data collection in Exporters based on Prometheus Timeout – MySQLd Exporter
  • PMM-3003 – PostgreSQL Overview Dashboard Tooltip fixes
  • PMM-2936 – Some improvements for Query Analytics Settings screen
  • PMM-3029 – PostgreSQL Dashboard Improvements
Fixed Bugs
  • PMM-2976 – Upgrading to PMM 1.14.x fails if dashboards from Grafana 4.x are present on an installation
  • PMM-2969 – rds_exporter becomes throttled by CloudWatch API
  • PMM-1443 – The credentials for a secured server are exposed without explicit request
  • PMM-3006 – Monitoring over 1000 instances is displayed imperfectly on the label
  • PMM-3011 – PMM’s default MongoDB DSN is localhost, which is not resolved to IPv4 on modern systems
  • PMM-2211 – Bad display when using old range in QAN
  • PMM-1664 – Infinite loading with wrong queryID
  • PMM-2715 – Since pmm-client-1.9.0, pmm-admin detects CentOS/RHEL 6 installations using linux-upstart as service manager and ignores SysV scripts
  • PMM-2839 – Tablestats safety precaution does not work for RDS/Aurora instances
  • PMM-2845 – pmm-admin purge causes client to panic
  • PMM-2968 – pmm-admin list shows empty data source column for mysql:metrics
  • PMM-3043 – Total Time percentage is incorrectly shown as a decimal fraction
  • PMM-3082 – Prometheus Scrape Interval Variance chart doesn’t display data
How to get PMM Server

PMM is available for installation using three methods:

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Two More MySQL Books for 2018

Last time I mentioned four great MySQL books for 2018.  I was tactfully reminded of two books I overlooked. First is Dr. Charles Bell's Introducing InnoDB Cluster, which I have not read (but it is on order).

And last, but not least, is Mikael Ronstrum's MySQL Cluster 7.5 Inside and Out.  This is another book on NDB Cluster and is a 'must have' for those running NDB clusters.

I apologize to both authors and take full blame for not mentioning these two fine books.  Now I just have to wait for Amazon to send me the copies I ordered!

