Planet MySQL

MySQL Replication with ProxySQL on WHM/cPanel Servers - Part 2

In the first part of the series, we showed you how to deploy a MySQL Replication setup with ProxySQL on WHM/cPanel servers. In this part, we are going to show some post-deployment operations for maintenance, management and failover, as well as the advantages over a standalone setup.

MySQL User Management

With this integration enabled, MySQL user management has to be done from WHM or cPanel. Otherwise, the ProxySQL mysql_users table would not stay in sync with what is configured on our replication master. Suppose we have already created a user called severaln_user1 (cPanel automatically prefixes the MySQL username to comply with MySQL's username length limitation), and we would like to assign it to the database severaln_db1 as below:

The above results in the following mysql_users table output in ProxySQL:

If you would like to create MySQL resources outside of cPanel, you can use the ClusterControl -> Manage -> Schemas and Users feature, and then import the database user into ProxySQL by going to ClusterControl -> Nodes -> pick the ProxySQL node -> Users -> Import Users.

The ProxysqlHook module that we use to sync ProxySQL users sends its debugging logs to /usr/local/cpanel/logs/error_log. Use this file to inspect and understand what happens behind the scenes. The following lines appear in the cPanel log file after installing a web application called Zikula using Softaculous:

[2019-07-08 11:53:41 +0800] info [mysql] Creating MySQL database severaln_ziku703 for user severalnines
[2019-07-08 11:53:41 +0800] info [mysql] Creating MySQL virtual user severaln_ziku703 for user severalnines
[2019-07-08 11:53:41 +0800] info [cpanel] **** Reading ProxySQL information: Host: 192.168.0.16, Port: 6032, User: proxysql-admin *****
[2019-07-08 11:53:41 +0800] info [cpanel] **** Checking if severaln_ziku703 exists inside ProxySQL mysql_users table *****
[2019-07-08 11:53:41 +0800] info [cpanel] **** Inserting severaln_ziku703 into ProxySQL mysql_users table *****
[2019-07-08 11:53:41 +0800] info [cpanel] **** Save and load user into ProxySQL runtime *****
[2019-07-08 11:53:41 +0800] info [cpanel] **** Checking if severaln_ziku703 exists inside ProxySQL mysql_users table *****
[2019-07-08 11:53:41 +0800] info [cpanel] **** Checking if severaln_ziku703 exists inside ProxySQL mysql_users table *****
[2019-07-08 11:53:41 +0800] info [cpanel] **** Updating severaln_ziku703 default schema in ProxySQL mysql_users table *****
[2019-07-08 11:53:41 +0800] info [cpanel] **** Save and load user into ProxySQL runtime *****

You will see some repeated lines like "Checking if {DB user} exists" because WHM creates multiple MySQL user/host entries for every database user creation request. In our example, WHM would create these three users:

  • severaln_ziku703@localhost
  • severaln_ziku703@'<WHM IP address>'
  • severaln_ziku703@'<WHM FQDN>'

ProxySQL only needs the username, password and default hostgroup when adding a user. Therefore, the checking lines are there to avoid multiple inserts of the exact same user.
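The hook performs these steps through the ProxySQL admin interface. Done by hand, the equivalent would look roughly like this (the password hash, hostgroup and schema values are illustrative):

```sql
-- Connect to the ProxySQL admin interface (port 6032) first.
-- Register the cPanel-created user against the writer hostgroup (10):
INSERT INTO mysql_users (username, password, default_hostgroup, default_schema)
VALUES ('severaln_ziku703', '*HASHED_PASSWORD', 10, 'severaln_ziku703');

-- Activate the change and persist it to disk:
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
```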

If you would like to modify the module and make improvements to it, don't forget to re-register the module by running the following command on the WHM server:

(whm)$ /usr/local/cpanel/bin/manage_hooks add module ProxysqlHook

Query Monitoring and Caching

With ProxySQL, you can monitor all queries coming from the application that have passed or are passing through it. Standard WHM does not provide this level of detail in MySQL query monitoring. The following shows all MySQL queries captured by ProxySQL:

With ClusterControl, you can easily look up the most repeated queries and cache them via the ProxySQL query cache feature. Use the "Order By" dropdown to sort the queries by "Count Star", hover over the query that you want to cache and click the "Cache Query" button underneath it. The following dialog will appear:

The resultset of cached queries will be stored and served by ProxySQL itself, reducing the number of hits to the backend, which offloads your MySQL replication cluster as a whole. ProxySQL's query cache implementation is fundamentally different from the MySQL query cache: it is a time-based cache that expires after a timeout called "Cache TTL". In this configuration, we cache the above query for 5 seconds (5000 ms); without the cache it would hit the reader group, destination hostgroup 20.
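Under the hood, the dialog simply creates another entry in ProxySQL's mysql_query_rules table. A hand-written equivalent would look something like this (the rule_id and query digest are illustrative):

```sql
-- Cache the resultset of the matching query digest for 5 seconds
-- (cache_ttl is in milliseconds):
INSERT INTO mysql_query_rules (rule_id, active, digest, cache_ttl, apply)
VALUES (100, 1, '0xBF6BD09B0FBC7A90', 5000, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```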

Read/Write Splitting and Balancing

By listening on MySQL's default port 3306, ProxySQL essentially acts like a MySQL server itself: it speaks the MySQL protocol on both the frontend and the backend. The query rules defined by ClusterControl when setting up ProxySQL automatically split all reads (^SELECT .* in regex terms) to hostgroup 20, the reader group, while the rest are forwarded to the writer hostgroup 10, as shown in the following query rules section:
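Conceptually, the read/write split boils down to two rules in the mysql_query_rules table, sketched below with illustrative rule_ids (ClusterControl's actual rule set may differ slightly):

```sql
-- SELECT ... FOR UPDATE takes locks, so it must go to the writer (10);
-- match it before the generic SELECT rule:
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply)
VALUES (100, 1, '^SELECT .* FOR UPDATE', 10, 1);

-- All other SELECTs are routed to the reader hostgroup (20):
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply)
VALUES (200, 1, '^SELECT .*', 20, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```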

With this architecture, you don't have to worry about splitting read and write queries, as ProxySQL does the job for you. Applications require minimal to no code changes, allowing hosting users to use all the applications and features provided by WHM and cPanel natively, just as if they were connecting to a standalone MySQL setup.

In terms of connection balancing, if you have more than one active node in a particular hostgroup (like reader hostgroup 20 in this example), ProxySQL will automatically spread the load between them based on a number of criteria: weights, replication lag, connections used, overall load and latency. ProxySQL is known to perform very well in high concurrency environments thanks to its advanced connection pooling mechanism. To quote a ProxySQL blog post: ProxySQL doesn't just implement Persistent Connection, but also Connection Multiplexing. In fact, ProxySQL can handle hundreds of thousands of clients, yet forward all their traffic to a few connections to the backend. So ProxySQL can handle N client connections and M backend connections, where N > M (even with N thousands of times bigger than M).

MySQL Failover and Recovery

With ClusterControl managing the replication cluster, failover is performed automatically if automatic recovery is enabled. In case of a master failure:

  • ClusterControl will detect and verify the master failure via MySQL client, SSH and ping.
  • ClusterControl will wait for 3 seconds before commencing a failover procedure.
  • ClusterControl will promote the most up-to-date slave to become the next master.
  • If the old master comes back online, it will be started read-only, without participating in the active replication.
  • It's up to the user to decide what happens to the old master. It can be introduced back into the replication chain by using the "Rebuild Replication Slave" functionality in ClusterControl.
  • ClusterControl will only attempt to perform the master failover once. If it fails, user intervention is required.

You can monitor the whole failover process under ClusterControl -> Activity -> Jobs -> Failover to a new master as shown below:

During the failover, all connections to the database servers are queued up in ProxySQL. They won't be terminated until they time out, as controlled by the mysql-default_query_timeout variable, which defaults to 86400000 milliseconds (24 hours). The applications would most likely not see errors or failures to the database at this point; the tradeoff is increased latency, within a configurable threshold.
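The timeout can be inspected, and shortened if 24 hours is too generous, from the ProxySQL admin interface:

```sql
-- Check the current value (in milliseconds):
SELECT variable_value FROM global_variables
WHERE variable_name = 'mysql-default_query_timeout';

-- Example: lower it to 1 hour, then activate and persist:
UPDATE global_variables SET variable_value = '3600000'
WHERE variable_name = 'mysql-default_query_timeout';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
```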

At this point, ClusterControl will present the topology as below:

If we would like to let the old master join back into the replication once it is up and available, we need to rebuild it as a slave by going to ClusterControl -> Nodes -> pick the old master -> Rebuild Replication Slave -> pick the new master -> Proceed. Once the rebuild is complete, you should get the following topology (notice that 192.168.0.32 is the master now):

Server Consolidation and Database Scaling

With this architecture, we can consolidate the many MySQL servers that reside on every WHM server into one single replication setup. You can scale out with more database nodes as you grow, or run multiple replication clusters to support them all, managed by a single ClusterControl server. The following architecture diagram illustrates two WHM servers connected to one single MySQL replication cluster via a ProxySQL socket file:

The above allows us to separate the two most important tiers in our hosting system: application (front-end) and database (back-end). As you might know, co-locating MySQL on the WHM server commonly results in resource exhaustion, as MySQL needs a huge upfront RAM allocation to start up and perform well (mostly dictated by the innodb_buffer_pool_size variable). Provided disk space is sufficient, with the above setup you can host more accounts per server, and all the server's resources can be utilized by the front-end tier applications.

Scaling the MySQL replication cluster will be much simpler with a separate tier architecture. If, say, the master requires scale-up maintenance (upgrading RAM, hard disk, RAID, NIC), we can switch the master role over to another slave (ClusterControl -> Nodes -> pick a slave -> Promote Slave) and then perform the maintenance task without affecting the MySQL service as a whole. For scale-out operations (adding more slaves), you can perform the staging directly from any active slave without even touching the master. With ClusterControl, you can even stage a new slave from an existing MySQL backup (PITR-compatible only):

Rebuilding a slave from backup will not add extra burden to the master. ClusterControl copies the selected backup file from the ClusterControl server to the target node and performs the restoration there. Once done, the node connects to the master, starts retrieving all the missing transactions since the restore time, and catches up. While it is lagging, ProxySQL will not include the node in the load balancing set until the replication lag is less than 10 seconds (configurable via the max_replication_lag column when adding the server to the mysql_servers table through the ProxySQL admin interface).
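For reference, a manually added reader with that lag threshold would look something like this in the ProxySQL admin interface (hostname and hostgroup are illustrative):

```sql
-- Add a slave to the reader hostgroup (20); ProxySQL shuns the server
-- whenever its replication lag exceeds max_replication_lag seconds:
INSERT INTO mysql_servers (hostgroup_id, hostname, port, max_replication_lag)
VALUES (20, '192.168.0.33', 3306, 10);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```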

Final Thoughts

ProxySQL extends the capabilities of WHM/cPanel in managing MySQL Replication. With ClusterControl managing your replication cluster, all the complex tasks involved in maintaining it are now easier than ever before.

Tags:  cpanel whm replication MySQL high availability

Jul 8: Where is the MySQL team this week!

During this week you can find MySQL at the following shows:

  • Oracle Global Partnership Summit, Munich, Germany, July 9-11, 2019
    • Do not miss this event organized by Oracle University, with Rich Mason, Senior VP and General Manager for MySQL, giving a keynote, and Carsten Thalheimer, MySQL Master Principal Sales Consultant, running the technical session.
    • Registration here
  • MySQL User Group Munich - Meetup, Germany, July 11, 2019 @7:30pm
    • Carsten Thalheimer, MySQL Master Principal Sales Consultant, is a guest speaker at this meetup. He will be talking about "MySQL 8 - The Next Big Thing!"
  • OpenSource Conference, Nagoya, Japan, July 13, 2019
    • MySQL is a Gold sponsor with a booth and a 45-minute speaking session, where Machiko Ikoma, MySQL Principal Solution Engineer, will be talking about "MySQL Update".
  • FOSS4G Hokkaido, Japan, July 12-13, 2019
    • MySQL is a partner of OSGeo.JP in organizing this FOSS4G event. Do not miss the MySQL seminar, where you can hear about GIS with JavaScript & MySQL. The seminar runs from 9:30am to 12:30pm.

 

    SQL Constraints Tutorial With Example | Understand SQL Constraints

    SQL Constraints Tutorial With Example | Understand SQL Constraints is today’s topic. SQL Constraints are rules used to limit the type of data that can go into a table, in order to maintain the accuracy and integrity of the data inside the table. Constraints can be divided into the following two types:

    1. Column-level constraints: Limit only the column's data.
    2. Table-level constraints: Limit the whole table's data.
    SQL Constraints

    Constraints are the set of rules enforced on the data columns of a table. They are used to limit the type of data that can go into the table, and they ensure the accuracy, integrity, and reliability of the data in the SQL database.

    Constraints in SQL are used to make sure that the integrity of the data is maintained in the database.

    See the following list of the most commonly used constraints that can be applied to a table.

    1. NOT NULL
    2. UNIQUE
    3. PRIMARY KEY
    4. FOREIGN KEY
    5. CHECK
    6. DEFAULT
    #NOT NULL Constraint in SQL

    The NOT NULL constraint restricts a column from having a NULL value. Once a NOT NULL constraint is applied to a column, you cannot pass a NULL value to that column; the column is enforced to always contain a proper value, and no row may leave it empty.

    Generally, the ID column in the SQL Table contains the NOT NULL constraint.

    See the following query of NOT NULL constraint.

    CREATE TABLE Singer (
        id int NOT NULL,
        Name varchar(60) NOT NULL,
        Song varchar(60)
    );

    The above query declares that the id and Name fields of the Singer table will not accept NULL values.
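    For instance, an insert that omits a proper Name is rejected (the exact error text varies slightly by MySQL version):

    ```sql
    -- Rejected, because Name carries the NOT NULL constraint:
    INSERT INTO Singer (id, Name, Song) VALUES (1, NULL, 'Yesterday');
    -- ERROR 1048 (23000): Column 'Name' cannot be null
    ```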

    #PRIMARY KEY Constraint

    We have covered the Primary Key in this blog. A primary key is a field in a table that uniquely identifies each row in that table. If a column has a primary key constraint, it will contain only unique values and will not be able to hold any NULL values.

    The following SQL statement creates a table named investors and specifies the id column as the primary key. That means the id field does not allow NULL or duplicate values.

    CREATE TABLE investors (
        id INT NOT NULL PRIMARY KEY,
        name VARCHAR(30) NOT NULL,
        portfolio_amount INT,
        fund_name VARCHAR(50)
    );

    In the above query, we create the investors table in which the id column is the primary key. You cannot add a NULL value to a primary key column, so the id column now carries two SQL constraints. One point to remember: in some database systems, such as SQL Server, a primary key's length cannot exceed 900 bytes.

    #UNIQUE Constraint

    The UNIQUE constraint ensures that a field or column will only hold unique values; such a field will not accept any duplicate data. A unique constraint can be applied at the column level or the table level.

    The UNIQUE Constraint prevents two records from having identical values in a column.

    #UNIQUE Constraint at the Column Level

    Here we have a simple CREATE query that declares the app_id column with an inline UNIQUE constraint. See the following query.

    CREATE TABLE Apps (
        app_id int NOT NULL UNIQUE,
        Name varchar(60),
        Price int
    );

    #UNIQUE Constraint at the Table Level

    If the table already exists, we can use an ALTER TABLE query to add a UNIQUE constraint on a particular field.

    See the following query.

    ALTER TABLE Apps ADD UNIQUE(app_id);

    The above query specifies that the app_id field of the Apps table will only hold unique values.

    #Foreign Key Constraint

    We have also covered the Foreign Key constraint in this blog. A SQL foreign key forms a link between two tables, acting as a referencing key that relates them. The FOREIGN KEY constraint is also used to restrict actions that would destroy the links between the tables. Let’s see its use with the help of two tables.

    Investors

    i_id  investor_Name  address
    101   KRUNAL         Noida
    102   ANKIT          Delhi
    103   RUSHABH        Rohtak

    Employees

    e_id  employee_age  i_id
    10    26            101
    11    24            103
    12    26            102

    In the above, there are two tables.

    1. Investors
    2. Employees

    Now, the above two tables are connected through the foreign key called i_id.

    The Investors table has a primary key called i_id, and the Employees table has a foreign key called i_id.

    If you try to insert data that breaks this link, the DBMS will return an error and will not allow the insert.

    The constraint therefore prevents orphaned or inconsistent values from being added. You still need to be very careful when editing, updating, or deleting data shared between those tables.

    If that goes wrong, the entire system can start to behave unpredictably.
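    For example, inserting an employee that references a non-existent investor fails (error text shown as MySQL reports it, abbreviated):

    ```sql
    -- i_id 999 does not exist in Investors, so the insert is rejected:
    INSERT INTO Employees (e_id, employee_age, i_id) VALUES (13, 30, 999);
    -- ERROR 1452 (23000): Cannot add or update a child row:
    -- a foreign key constraint fails
    ```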

    #Using FOREIGN KEY constraint at Table Level

    See the following query.

    CREATE table Employees (
        e_id int PRIMARY KEY,
        employee_age varchar(60) NOT NULL,
        i_id int,
        FOREIGN KEY (i_id) REFERENCES Investors(i_id)
    );

    So, in the above query, we have defined the relationship between the tables using i_id.

    The i_id column in the Employees table references the Investors table.

    #Adding a FOREIGN KEY constraint with ALTER TABLE

    See the following query.

    ALTER table Employees ADD FOREIGN KEY (i_id) REFERENCES Investors(i_id);

    Here, we have used an ALTER query to add the Foreign Key constraint to an existing Employees table.

    Here, we have used Alter query to add the Foreign Key constraint to the Employees table.

    #Behaviour of Foreign Key Column on Delete

    There are two ways to maintain the integrity of data in the child table when a particular record is removed from the main table. When two tables are connected through a foreign key and data in the main table is removed while matching records exist in the child table, we need some mechanism to preserve the integrity of the data in the child table.
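    The chosen behaviour is declared on the foreign key itself; a sketch of the cascade variant, assuming the Employees/Investors tables from above:

    ```sql
    -- Deleting an investor now automatically deletes that investor's
    -- employee rows; ON DELETE SET NULL would instead null out i_id:
    ALTER TABLE Employees
        ADD FOREIGN KEY (i_id) REFERENCES Investors(i_id)
        ON DELETE CASCADE;
    ```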

      1. On Delete Cascade: removes a record from the child table when the foreign key value it references is deleted from the main table.
      2. On Delete Set Null: sets the foreign key column of the child table's record to NULL when the value it references is deleted from the main table.
      3. If we use neither of the above, we cannot delete data from the main table while matching data exists in the child table; we will get an error if we try to do so.

    ERROR : Record in child table exists

    #Check Constraint

    The CHECK constraint is used to restrict the values of a column to a given range.

    It performs a check on values before storing them in the database; it is like a condition being verified before data is saved into the column. Note that MySQL enforces CHECK constraints only from version 8.0.16 onward; older versions parse but ignore them.

    #Using CHECK constraint at Column Level

    CREATE table Student (
        id int NOT NULL CHECK(id > 0),
        Name varchar(60) NOT NULL,
        Age int
    );

    The above query will restrict the id value to values higher than zero.

    #Using CHECK constraint at Table Level

    See the following query.

    ALTER table Student ADD CHECK(id > 0);

    #DEFAULT Constraint

    The DEFAULT constraint provides a default value for a column when the INSERT INTO statement does not supply a specific value.

    #Using DEFAULT constraint at Table Level

    CREATE TABLE CUSTOMERS (
        ID INT NOT NULL,
        NAME VARCHAR (20) NOT NULL,
        ADDRESS CHAR (100),
        SALARY DECIMAL (18, 2) DEFAULT 100000.00,
        PRIMARY KEY (ID)
    );

    #Using DEFAULT constraint at Column Level

    If the table has already been created, we can use an ALTER TABLE statement to assign the default constraint. See the following query.

    ALTER TABLE CUSTOMERS ALTER SALARY SET DEFAULT 100000;
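    With the default in place, an INSERT that omits SALARY picks the value up automatically:

    ```sql
    -- SALARY is not listed, so the DEFAULT is stored:
    INSERT INTO CUSTOMERS (ID, NAME, ADDRESS) VALUES (1, 'Ramesh', 'Ahmedabad');

    SELECT SALARY FROM CUSTOMERS WHERE ID = 1;  -- returns 100000.00
    ```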

    Finally, SQL Constraints Tutorial With Example | Understand SQL Constraints is over.

    The post SQL Constraints Tutorial With Example | Understand SQL Constraints appeared first on AppDividend.

    MySQL InnoDB Cluster – Recovering and provisioning with mysqldump

    As the administrator of a cluster, among other tasks, you should be able to restore failed nodes and grow (or shrink) your cluster by adding (or removing) nodes. In MySQL, as a backup tool (and if your amount of data is not too big), you can use mysqldump, a client utility that performs logical backups. The result is a set of SQL statements that reproduce the original schema objects and data. For substantial amounts of data, however, a physical backup solution such as MySQL Enterprise Backup is faster, particularly for the restore operation. But that is the topic of my next blog post :)

    MySQL Master Replication Crash Safety Part #4: benchmarks of high and low durability

    This is a follow-up post in the MySQL Master Replication Crash Safety series. In the three previous posts, we explored the consequences of reducing durability on masters (including setting sync_binlog to a value different from 1). But so far, I have only quickly presented why a DBA would run MySQL with such a configuration. In this post, I present actual benchmark results. I also present a

    MySQL Master Replication Crash Safety part #4: benchmarks (under the hood)

    This post is a sister post to MySQL Master Replication Crash Safety Part #4: benchmarks of high and low durability. There is no introduction or conclusion to this post, only landing sections: reading this post without its context is not recommended. You should start with the main post and come back here for more details. Environment: my benchmark environment is composed of three VMs in

    Understand Database Performance Trends – SQL Diagnostic Manager for MySQL (formerly Monyog)

    Next in our Benefits of SQL Diagnostic Manager for MySQL (formerly Monyog) blog series, we discuss monitoring and understanding performance trends using visual analytics and the display dashboard of SQL Diagnostic Manager for MySQL. If you missed it, you can read last week’s blog on identifying and analyzing problematic SQL queries.

    View and Understand Trends By Analyzing Historical Data

    Configure the time duration for storing the data collected by SQL Diagnostic Manager for MySQL. It stores the data in a high-performance database (that is, the embedded relational database management system SQLite). By analyzing historical data, quickly obtain answers to questions like:

    • How many times and when did database servers go down during the last six months?
    • Which day of the week has the maximum database activity?
    • How many times and when were logins attempted with incorrect passwords yesterday?

    Sudden changes in performance parameters and problems (such as due to a change of application) will also be visible immediately.

    SQL Diagnostic Manager for MySQL provides trend graph analysis that makes it easy to compare the state and performance of multiple database servers in a single chart. Group an important single metric from different database servers into a single unified chart. Visually analyze a metric across database servers at various points in time.

    Display Dashboard Overview

    One of the most significant challenges database administrators face is managing an ever-growing number of database servers and databases. Regardless of the size of the database environment, each database server requires specific attention when it comes to necessary administration, security, performance monitoring, and availability. To provide database administrators for MySQL and MariaDB with a proactive advantage, SQL Diagnostic Manager for MySQL provides the Dashboard web page feature. It lets database administrators determine the cause of a performance spike by zooming in on it and viewing the SQL queries for the time frame of the spike. The design of the dashboard is such that database administrators easily understand the complete security, availability, and performance of all of their MySQL or MariaDB servers in a single place, all from a modern and intuitive web interface based on asynchronous JavaScript and XML (AJAX).

    Unlike other monitoring tools for MySQL and MariaDB that use annoying web page refreshes for real-time charting, SQL Diagnostic Manager for MySQL uses charts based on JavaScript to ensure the display of true real-time charts.

    Compare Large Number of Servers Side-By-Side

    The Enterprise Dashboard feature shows real-time charts of all critical metrics. It provides a consolidated view of the availability and performance of all MySQL or MariaDB servers. From these real-time charts, instantly determine:

    • The availability status of all database servers
    • Essential operating system metrics that may be affecting database servers
    • Which database servers need attention
    • Where and how to spend their limited time

    It is not rare to find database administrators who monitor hundreds of database servers. With SQL Diagnostic Manager for MySQL, managing a large number of database servers is easy. With tagging, categorize database servers into logical groups. Monitor a large number of database servers using a single instance of SQL Diagnostic Manager for MySQL.

    Read more in the full solution brief.

    Find and fix MySQL performance problems on-premises and in the cloud with SQL Diagnostic Manager for MySQL.

    The post Understand Database Performance Trends – SQL Diagnostic Manager for MySQL (formerly Monyog) appeared first on Monyog Blog.

    MySQL Workbench and OpenStreetMap Data Visualization

    I have a presentation next month on MySQL and GIS. MySQL 8.0 has benefited greatly from the Boost.Geometry libraries. There are so many facets to the Geographic Data world that it is damn near impossible not to lurch from one rabbit hole into another in an unending spiral of acronyms, standards, projections, and functions. But thankfully I have MySQL Workbench to aid me.

    Texas
    I wanted some test data to use for some GIS exercises and was very happy to find many useful sets curated by the OpenStreetMap folks. Shapefiles are used to hold the various data points of an item of interest. I had assumed that the data would have some sort of longitude/latitude pairs, but I was wondering what I would need to do to work with that data and whatever came bundled with it. I downloaded the Texas data and then loaded it into the database.

    You will need a copy of the ogr2ogr utility. This wonderful program reads the raw shapefile and converts it into SQL. You may want to build it from source, but hopefully a version is available for your Linux distribution. There is a copy of the ogr2ogr program that comes with MySQL Workbench, but I have not been able to get it to convert the data and load it into a MySQL instance, with either Windows, Fedora, or Ubuntu.

    $ ogr2ogr -overwrite -progress -f "MySQL" mysql:texas,user=dave,password=****** gis_osm_natural_a_free_1.shp
    0...10...20...30...40...50...60...70...80...90...100 - done.
    $ ogr2ogr -overwrite -progress -f "MySQL" mysql:texas,user=dave,password=***** gis_osm_natural_free_1.shp
    0...10...20...30...40...50...60...70...80...90...100 - done.


    The Data
    The ogr2ogr utility reads the shape files and puts all the attributes into a table.

    mysql> DESC gis_osm_natural_free_1;
    +---------+--------------+------+-----+---------+----------------+
    | Field   | Type         | Null | Key | Default | Extra          |
    +---------+--------------+------+-----+---------+----------------+
    | OGR_FID | int(11)      | NO   | PRI | NULL    | auto_increment |
    | SHAPE   | geometry     | NO   | MUL | NULL    |                |
    | osm_id  | varchar(10)  | YES  |     | NULL    |                |
    | code    | decimal(4,0) | YES  |     | NULL    |                |
    | fclass  | varchar(28)  | YES  |     | NULL    |                |
    | name    | varchar(100) | YES  |     | NULL    |                |
    +---------+--------------+------+-----+---------+----------------+
    6 rows in set (0.01 sec)

    And a sample line from that table.
    mysql> select OGR_FID,ST_ASTEXT(SHAPE) as Shape, osm_id, code, fclass, name FROM gis_osm_natural_free_1 limit 1;
    +---------+-------------------------------+----------+------+--------+--------------------+
    | OGR_FID | Shape                         | osm_id   | code | fclass | name               |
    +---------+-------------------------------+----------+------+--------+--------------------+
    |       1 | POINT(29.3060929 -94.7679897) | 80029566 | 4141 | beach  | Stewart Beach Park |
    +---------+-------------------------------+----------+------+--------+--------------------+
    1 row in set (0.00 sec)
    The SHAPE column has the desired longitude and latitude. So now we have this data, but I had zero clue where Stewart Beach Park was located.
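    If you only need the raw coordinates rather than a map, the point's components can be pulled out with MySQL's ST_X()/ST_Y() functions (same table as above; the axis order simply mirrors how ogr2ogr stored the point):

    ```sql
    -- Extract the stored point's two coordinates as plain numbers:
    SELECT name, ST_X(SHAPE) AS x, ST_Y(SHAPE) AS y
    FROM gis_osm_natural_free_1
    LIMIT 1;
    ```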
    MySQL Workbench to the Rescue

    Workbench makes it easy to 'see' where the data is. You can see in the example below that Stewart Beach Park is the first row returned.

    MySQL Workbench has an amazing number of features, including the ability to help display GIS data. If you right click on the 'BLOB' icon under the SHAPE column, you will see an option to Show Point In Browser.

    The location of Stewart Beach Park on the Galveston Sea Wall

    So Stewart Beach Park is right on the Galveston Sea Wall and around the corner from the cruise ship terminals.
    Mid-Atlantic Developer's Conference
    I will be speaking on MySQL & GIS at the Mid-Atlantic Developer's Conference, and you can still attend, but tickets are going quickly.

    The Best Way to Host MySQL on Azure Cloud

    Are you looking to get started with the world’s most popular open-source database, and wondering how you should set up your MySQL hosting? Many default to Amazon RDS, even though MySQL performs exceptionally well on Azure Cloud. While Microsoft Azure does offer a managed solution, Azure Database, the solution has some major limitations you should know about before migrating your MySQL deployments. In this post, we outline the best way to host MySQL on Azure, including managed solutions, instance types, high availability replication, backup, and disk types to use to optimize your cloud database performance.

    MySQL DBaaS vs. Self-Managed MySQL

    The first thing to consider when weighing between self-management and a MySQL Database-as-a-Service (DBaaS) solution is what internal resources you have available. If you’re reading this, you likely already know the magnitude of operational tasks associated with maintaining a production deployment, but for a quick recap, there’s provisioning, deprovisioning, master-slave configurations, backups, scaling, upgrades, log rotations, OS patching, and monitoring to name a few.

    An internal MySQL expert, or a team of DBAs depending on your application size, can certainly handle these tasks for your organization, but the question becomes where you want your team’s efforts focused. Many decide to move to a MySQL DBaaS to automate these time-consuming tasks so they can focus more on the development and optimization of their applications’ databases. A good example is slow query analysis: while almost every DBaaS offers a MySQL slow query analyzer tool to help identify trouble queries, this task still requires human skill and intuition to determine how to optimize the queries impacting application performance.

    Whether you’re a startup company or a Fortune 500 business, you’ll find many organizations choose to leverage a DBaaS to optimize their DBAs’ time, while businesses of the same types and sizes also choose to stick with internal self-management. For many enterprise businesses, the decision largely comes down to customization and control. This is why we caution against defaulting to Azure Database, or its AWS competitor, Amazon RDS, as they do not allow you to keep MySQL superuser access or even SSH access to your machines. Additionally, the ability to customize your deployment setup is highly limited, such as the instance types, RAM, disk size, or IOPS you can use. You’ll learn more about the best instance types and disks to use below, and you can check out this MySQL Provider Comparison to see the advantages and limitations of the top four managed MySQL solutions: ScaleGrid, Compose, Azure Database, and Amazon RDS.

    High Availability Deployment

    If you’re deploying in production, you should always set up MySQL as a master-slave deployment. Standalone deployments are a single node without any replication and should really only be used for development or testing environments. With master-slave deployments, you’re able to configure high availability, so if one of your nodes goes down, you can fail over to a slave with zero downtime. This is typically set up either as a 3-node master-slave-slave, or a 2+1 node master-slave-quorum. The advantage of using a quorum is that it’s a lower cost alternative, but the downside is that you only have 2 data-bearing nodes, as the other acts as a quorum node to determine the best failover course. If your application is able to read from the slaves, you can also use them for read scaling, provided replication lag is kept minimal so they return the same data as the master.


    When using a MySQL master-slave configuration, we recommend setting up semisynchronous replication to improve your data integrity through data redundancy. This ensures that when a commit returns successfully, the data exists on both the master and the slave, so in the event a datacenter goes down, your MySQL master can fail over to a slave without any data loss. Replication can be configured as either asynchronous or semisynchronous; you can learn more about it in our MySQL High Availability Explained – Part II blog post.

    So, how do we configure high availability for MySQL on Azure? We need to distribute our instances across different Azure availability zones (AZs), so we want to choose an Azure region that has at least three AZs and put each instance in a different AZ. We do this because the availability guarantees apply across AZs, so if one zone goes down, your application database can stay online through the other two AZs. Availability zones are fairly new to Azure, so if you're working in a region that doesn't offer AZs, you have the option to use availability sets. These offer slightly weaker guarantees than AZs, but ensure that you're deployed across different domains and racks to protect against a potential outage. There's also the option to deploy across regions, but this is a more complicated setup, so we recommend reaching out to discuss it before implementing.

    Azure Virtual Networks

    The best way to protect your database from the internet is by deploying it in a private subnet to ensure it is not exposed. Azure makes this easy to set up through the use of a Virtual Network (VNET), which can be configured for your MySQL servers. With an Azure VNET for MySQL, you're able to set up secure communications between your servers, the internet, and even your on-premise private cloud network. These are typically configured to communicate across a single network, but if you need to connect more than one region, you can create multiple VNETs that communicate through Virtual Network Peering.

    Additionally, you can manage your MySQL access control through Network Security Group (NSG) rules without having to deal with IP whitelists. This is not available through Azure Database for MySQL, but both VNET and NSG can be configured through our MySQL Bring Your Own Cloud (BYOC) plans on Azure, where you're able to host your clusters through your own cloud account.

    Azure Instance Types

    Another important aspect to consider is the performance of your MySQL instances in the public cloud. Azure offers multiple instance types that can be used for MySQL hosting, including Es2 v3, Ds2 v2, and Ls4.

    We recommend starting with a memory-optimized instance type, as databases require a lot of RAM and want the fastest disk speed possible for the best performance. The Es2 series is typically a good starting point for most applications' MySQL workloads. From there, you can do some performance testing to see if you require more CPU, in which case balanced or CPU-intensive instance types, such as the Dv3 series, might better serve your MySQL needs. If your performance tests show that you need more I/O (input/output), you can move to a disk-intensive instance type.

    If you plan to leverage Azure as your MySQL cloud provider for the next one to three years and maintain fairly consistent deployment configurations, you can also consider reserved instances. These are essentially prepaid instances which allow you to achieve considerable cost savings for your MySQL hosting. On average, you can save around 20% to 30% with one-year reserved instances, and 40% to 50% with three-year reserved instances.

    Azure Disk Types

    The first determination you need to make when choosing an Azure disk type for your MySQL deployments is whether to go with managed or unmanaged disks. Unmanaged disks are the legacy disks Azure offers, where you have to set up the storage account, map your disk to the storage account, and monitor the IOPS usage and limits for that storage account. We highly recommend using managed disks, and if you're still deploying with unmanaged disks, you should consider migrating to managed disks.

    MySQL Dev/Test Environments: Standard Disks

    There are multiple managed disk types available through Azure, the default being standard disks. Standard disks support up to 500 IOPS (input/output operations per second) and are good for development and testing workloads, as they can be resized dynamically, but they should not be used for MySQL production deployments.

    MySQL Production Deployments: Premium Disks

    For your MySQL production servers, we highly recommend leveraging Azure premium disks. There are a wide variety of premium disks you can pick from. For each premium disk, you can choose the best size, and each size comes with different Provisioned IOPS so you can select the one that best fits your application needs.

    MySQL Production Deployments: Local SSD

    Azure Local SSDs are a high-performance alternative to premium disks, typically best suited for large clusters. Local SSDs provide much higher I/O performance and the best throughput in Azure. However, they have a drawback: they are ephemeral disks, not permanent storage, so if you stop the instance, the data goes away. We recommend the Ls v2 series, which is very fast, but caution that its CPU is comparatively weak and can become a machine bottleneck.

    MySQL Backups on Azure

    The best way to back up your MySQL data on Azure is by using managed disk snapshots. A snapshot is a read-only, point-in-time version of a disk. These backups can be read, copied, or deleted, but they cannot be modified. It's a good idea to take full backups so that all of the databases, users, and settings on the instance are backed up in case you ever need to recover a MySQL database. It's also a good idea to encrypt your backup snapshots so that a backup can only be restored onto the machine on which it was taken.

    Your MySQL backups will result in additional Azure data storage charges, unless you’re leveraging an all-inclusive MySQL on Azure solution like our Dedicated Hosting plans at ScaleGrid. In order to control costs, it’s a good idea to automate your backups through a customizable schedule that allows you to configure the frequency of your backups, the maximum number of backups to retain, and your backup target. This of course also helps you ensure your MySQL data is regularly backed up in case of any data loss in your production deployment so you can quickly recover with a recent backup.

    If you have any questions on the best way to host MySQL on Azure, leave us a comment below or get in touch with us at support@scalegrid.io. You can also start a free 30-day trial to explore the advantages of leveraging a fully managed MySQL service to improve the performance of your deployments.

    Preview of the upcoming new features in MySQL 8.0.17

    Note: I am no longer working for Oracle/MySQL. This post is based on public information.

    The MySQL 8.0.17 release is around the corner. Let's take a look at the new features in the upcoming release.

    InnoDB redo log archive

    InnoDB adds a new parameter, --innodb-redo-log-archive-dirs, in 8.0.17, according to the documentation. The implementation of both page archiving and redo log archiving has been in the code for a while, under the arch directory. This feature can be used by backup tools and database clone.
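    As a sketch of how this might be used (the "backup1" label and the directory path below are placeholder values; the variable and the archiving functions are described in the MySQL 8.0.17 documentation):

    -- Register a labeled directory that redo log archiving may write to
    SET GLOBAL innodb_redo_log_archive_dirs = 'backup1:/var/backup/redo';

    -- A backup tool with the INNODB_REDO_LOG_ARCHIVE privilege starts
    -- archiving into the labeled directory ...
    SELECT innodb_redo_log_archive_start('backup1');

    -- ... copies the data files, and stops archiving when done
    SELECT innodb_redo_log_archive_stop();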


    Multi-Valued Indexes

    As of MySQL 8.0.17, InnoDB supports multi-valued indexes. A multi-valued index is a secondary index defined on a column that stores an array of values. A “normal” index has one index record for each data record (1:1). A multi-valued index can have multiple index records for a single data record (N:1). Multi-valued indexes are intended for indexing JSON arrays. For example, a multi-valued index defined on the array of zip codes in the following JSON document creates an index record for each zip code, with each index record referencing the same data record.
    The documentation page has some good examples, and it also lists the limitations and restrictions on multi-valued indexes.
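    A minimal sketch of the zip-code scenario described above (table and column names are illustrative):

    CREATE TABLE customers (
        id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        custinfo JSON,
        -- multi-valued index over the JSON array of zip codes
        INDEX zips ( (CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)) )
    );

    -- The optimizer can use the index for lookups such as:
    SELECT * FROM customers
    WHERE 94507 MEMBER OF (custinfo->'$.zipcode');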


    The LOCK_ORDER Tool

    As of MySQL 8.0.17, to enable detection of lock-acquisition deadlocks and enforcement that runtime execution is free of them, MySQL supports LOCK_ORDER tooling. This enables a lock-order dependency graph to be defined as part of server design, and server runtime checking to ensure that lock acquisition is acyclic and that execution paths comply with the graph.


    The LOCK_ORDER tool is intended for debugging the server, not for production use.


    Table 29.4 LOCK_ORDER System Variable Summary

    Variable Name                      Variable Type     Variable Scope
    lock_order                         Boolean           Global
    lock_order_debug_loop              Boolean           Global
    lock_order_debug_missing_arc       Boolean           Global
    lock_order_debug_missing_key       Boolean           Global
    lock_order_debug_missing_unlock    Boolean           Global
    lock_order_dependencies            File name         Global
    lock_order_extra_dependencies      File name         Global
    lock_order_output_directory        Directory name    Global
    lock_order_print_txt               Boolean           Global
    lock_order_trace_loop              Boolean           Global
    lock_order_trace_missing_arc       Boolean           Global
    lock_order_trace_missing_key       Boolean           Global
    lock_order_trace_missing_unlock    Boolean           Global


    By default, the tool is disabled at build time, according to the source configuration options:

    -DWITH_LOCK_ORDER=bool


    Whether to enable LOCK_ORDER tooling. By default, this option is disabled and server builds contain no tooling. If tooling is enabled, the LOCK_ORDER tool is available and can be used as described in Section 29.5.3, “The LOCK_ORDER Tool”.


    Other Changes

    Early plugin loading

    From 8.0.17, all early loading plugins must specify the following flag:

    #define PLUGIN_OPT_ALLOW_EARLY  4UL   /* allow --early-plugin-load */

    This flag was added in MySQL 8.0.17, so plugins compiled with MySQL distributions prior to 8.0.17 do not have it set. For pre-8.0.17 servers this does not matter, but attempts to use --early-plugin-load to load plugin binaries compiled with pre-8.0.17 MySQL distributions into an 8.0.17 or higher server will fail. Such plugins must be recompiled against MySQL 8.0.17 or higher.


    Additional Target Types for Casts

    The functions CAST() and CONVERT() now support conversions to the types DOUBLE, FLOAT, and REAL.
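    For example (the exact result formatting depends on the server, so no output is shown):

    SELECT CAST('1.5' AS DOUBLE);
    SELECT CONVERT('2.25', FLOAT);
    SELECT CAST(1/3 AS REAL);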


    JSON Schema Validation

    MySQL 8.0.17 adds two functions, JSON_SCHEMA_VALID() and JSON_SCHEMA_VALIDATION_REPORT(), for validating JSON documents against JSON schemas.
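    A small sketch, assuming a schema that requires a numeric "latitude" field (the schema and documents are illustrative):

    SET @schema = '{
      "type": "object",
      "properties": { "latitude": { "type": "number" } },
      "required": ["latitude"]
    }';

    SELECT JSON_SCHEMA_VALID(@schema, '{"latitude": 63.4}');   -- returns 1 (valid)
    SELECT JSON_SCHEMA_VALID(@schema, '{"longitude": 10.4}');  -- returns 0 (required field missing)

    -- JSON_SCHEMA_VALIDATION_REPORT() returns a JSON document explaining
    -- why a validation failed
    SELECT JSON_SCHEMA_VALIDATION_REPORT(@schema, '{"longitude": 10.4}');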


    hintable time_zone

    As of MySQL 8.0.17, the time_zone session variable is hintable using SET_VAR.
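    For instance, a per-statement time zone can now be applied via the SET_VAR optimizer hint (a minimal sketch; the same syntax works for any SET_VAR-hintable variable):

    -- Override time_zone for this statement only
    SELECT /*+ SET_VAR(time_zone = '+00:00') */ NOW();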


    Encryption of binlog temporary files

    From MySQL 8.0.17, when binary log encryption is active on the server, temporary files used to hold the binary log cache are encrypted using AES-CTR (AES Counter mode) for stream encryption. Because the temporary files are volatile and tied to a single process, they are encrypted using single-tier encryption, using a randomly generated file password and initialization vector that exist only in memory and are never stored on disk or in the keyring.
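    Binary log encryption itself is governed by the binlog_encryption system variable (introduced in 8.0.14); the temporary-file encryption described above applies automatically whenever it is active. A sketch, assuming a keyring component or plugin is already configured:

    -- Requires a configured keyring; otherwise this statement fails
    SET GLOBAL binlog_encryption = ON;
    SELECT @@global.binlog_encryption;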


    print_identified_with_as_hex

    For SHOW CREATE USER, print hash values containing unprintable characters in hex. Added in MySQL 8.0.17.
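    A quick sketch of its effect (the account name is a placeholder; without this setting, unprintable bytes in the authentication string are printed literally):

    SET @@session.print_identified_with_as_hex = ON;
    SHOW CREATE USER 'app'@'localhost';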


    Upgrade on case-sensitive file systems

    Due to a regression introduced in MySQL 8.0.14, in-place upgrade on a case-sensitive file system from MySQL 5.7 or a MySQL 8.0 release prior to MySQL 8.0.14 to MySQL 8.0.16 failed for instances with partitioned tables and lower_case_table_names=1.


    Reserved Keyword

    ARRAY (R); added in 8.0.17 (reserved)

    MEMBER (R); added in 8.0.17 (reserved)


    Compress option

    --compress is only used in one release (8.0.17).

    Option Name                 Description                                               Introduced    Deprecated    Removed
    --compress                  Compress all information sent between client and server   8.0.17        8.0.18
    --compression-algorithms    Permitted compression algorithms for connections to       8.0.18
                                server
    This option was added in MySQL 8.0.17. As of MySQL 8.0.18 it is deprecated. It will be removed in a future MySQL version.

    The Perfect Server - Debian 10 (Buster) with Apache, BIND, Dovecot, PureFTPD and ISPConfig 3.1

    This tutorial shows how to prepare a Debian 10 server (with Apache2, BIND, Dovecot) for the installation of ISPConfig 3.1. The web hosting control panel ISPConfig 3 allows you to configure the following services through a web browser: Apache web server, Postfix mail server, Dovecot IMAP/POP3 server, MySQL, BIND nameserver, PureFTPd, SpamAssassin, ClamAV, and many more.

    MySQL Replication with ProxySQL on WHM/cPanel Servers - Part 1

    WHM and cPanel are without doubt the most popular hosting control panel for Linux-based environments. It supports a number of database backends - MySQL, MariaDB and PostgreSQL - as the application datastore. WHM only supports standalone database setups, and you can either have it deployed locally (the default configuration) or remotely, by integrating with an external database server. The latter is better if you want better load distribution, as WHM/cPanel handles a number of processes and applications like HTTP(S), FTP, DNS, MySQL and such.

    In this blog post, we are going to show you how to integrate an external MySQL replication setup into WHM seamlessly, to improve the database availability and offload the WHM/cPanel hosting server. Hosting providers who run MySQL locally on the WHM server would know how demanding MySQL is in terms of resource utilization (depending on the number of accounts it hosts and the server specs).

    MySQL Replication on WHM/cPanel

    By default, WHM natively supports both MariaDB and MySQL as a standalone setup. You can attach an external MySQL server to WHM, but it will act as a standalone host. Plus, if this feature is enabled, cPanel users have to know the IP address of the MySQL server and manually specify the external host in their web applications.

    In this blog post, we are going to use the ProxySQL UNIX socket file to trick WHM/cPanel into connecting to the external MySQL server via a UNIX socket file. This way, you get the feel of running MySQL locally, so users can use "localhost" with port 3306 as their MySQL database host.

    The following diagram illustrates the final architecture:

    We have a new server installed with WHM/cPanel 80.0 (build 18). Then we have another three servers - one for ClusterControl and two for master-slave replication. ProxySQL will be installed on the WHM server itself.

    Deploying MySQL Replication

    At the time of this writing, we are using WHM 80.0 (build 18) which only supports up to MySQL 5.7 and MariaDB 10.3. In this case, we are going to use MySQL 5.7 from Oracle. We assume you have already installed ClusterControl on the ClusterControl server.

    Firstly, set up passwordless SSH from the ClusterControl server to the MySQL replication servers. On the ClusterControl server, do:

    $ ssh-copy-id 192.168.0.31
    $ ssh-copy-id 192.168.0.32

    Make sure you can run the following commands on ClusterControl without a password prompt in between:

    $ ssh 192.168.0.31 "sudo ls -al /root"
    $ ssh 192.168.0.32 "sudo ls -al /root"

    Then go to ClusterControl -> Deploy -> MySQL Replication and enter the required information. On the second step, choose Oracle as the vendor and 5.7 as the database version:

    Then, specify the IP address of the master and slave:

    Pay attention to the green tick right before the IP address. It means ClusterControl is able to connect to the server and is ready for the next step. Click Deploy to start the deployment. The deployment process should take 15 to 20 minutes.

    Deploying ProxySQL on WHM/cPanel

    Since we want ProxySQL to take over the default MySQL port 3306, we first have to modify the existing MySQL server installed by WHM to listen on another port and use another socket file. In /etc/my.cnf, modify the following lines (add them if they do not exist):

    socket=/var/lib/mysql/mysql2.sock
    port=3307
    bind-address=127.0.0.1

    Then, restart MySQL server on cPanel server:

    $ systemctl restart mysqld

    At this point, the local MySQL server should be listening on port 3307, bound to localhost only (we close it off from external access to be more secure). Now we can proceed to deploy ProxySQL on the WHM host, 192.168.0.16, via ClusterControl.

    First, set up passwordless SSH from the ClusterControl node to the WHM server on which we want to install ProxySQL:

    (clustercontrol)$ ssh-copy-id root@192.168.0.16

    Make sure you can run the following command on ClusterControl without a password prompt:

    (clustercontrol)$ ssh 192.168.0.16 "sudo ls -al /root"

    Then, go to ClusterControl -> Manage -> Load Balancers -> ProxySQL -> Deploy ProxySQL and specify the required information:

    Fill in all the necessary details as highlighted by the arrows in the diagram above. The server address is the WHM server, 192.168.0.16. The listening port is 3306 on the WHM server, taking over the local MySQL, which is now running on port 3307. Further down, we specify the passwords for the ProxySQL admin and monitoring users. Then, include both MySQL servers in the load balancing set and choose "No" in the Implicit Transactions section. Click Deploy ProxySQL to start the deployment.

    Our ProxySQL is now installed and configured with two host groups for MySQL replication: a writer group (hostgroup 10), where all write connections are forwarded to the master, and a reader group (hostgroup 20) for read-only workloads, which are balanced across both MySQL servers.

    The next step is to grant the MySQL root user access and import it into ProxySQL. Occasionally, WHM connects to the database via a TCP connection, bypassing the UNIX socket file. In this case, we have to allow MySQL root access from both root@localhost and root@192.168.0.16 (the IP address of the WHM server) in our replication cluster.

    Thus, running the following statement on the master server (192.168.0.31) is necessary:

    (master)$ mysql -uroot -p
    mysql> GRANT ALL PRIVILEGES ON *.* TO root@'192.168.0.16' IDENTIFIED BY 'M6sdk1y3PPk@2' WITH GRANT OPTION;

    Then, import 'root'@'localhost' user from our MySQL server into ProxySQL user by going to ClusterControl -> Nodes -> pick the ProxySQL node -> Users -> Import Users. You will be presented with the following dialog:

    Tick on the root@localhost checkbox and click Next. In the User Settings page, choose hostgroup 10 as the default hostgroup for the user:

    We can then verify if ProxySQL is running correctly on the WHM/cPanel server by using the following command:

    $ netstat -tulpn | grep -i proxysql
    tcp        0      0 0.0.0.0:3306    0.0.0.0:*    LISTEN    17306/proxysql
    tcp        0      0 0.0.0.0:6032    0.0.0.0:*    LISTEN    17306/proxysql

    Port 3306 is what ProxySQL should be listening to accept all MySQL connections. Port 6032 is the ProxySQL admin port, where we will connect to configure and monitor ProxySQL components like users, hostgroups, servers and variables.

    At this point, if you go to ClusterControl -> Topology, you should see the following topology:

    Configuring MySQL UNIX Socket

    In a Linux environment, if you define the MySQL host as "localhost", the client/application will try to connect via the UNIX socket file, which by default is located at /var/lib/mysql/mysql.sock on the cPanel server. Using the socket file is the recommended way to access MySQL, because it has less overhead than a TCP connection. A socket file doesn't actually contain data; it transports it. It is like a local pipe that the server and clients on the same machine can use to connect and exchange requests and data.

    Having said that, if your application uses "localhost" and port 3306 as the database host and port, it will connect via the socket file. If you use "127.0.0.1" and port 3306, the application will most likely connect to the database via TCP. This behaviour is well explained in the MySQL documentation. In short, use the socket file (or "localhost") for local communication, and use TCP if the application connects remotely.

    In cPanel, the MySQL socket file is monitored by the cpservd process and will be linked back to the default socket path if we configure a path different from the default one. For example, suppose we configured a non-default MySQL socket file, as we did in the previous section:

    $ cat /etc/my.cnf | grep socket
    socket=/var/lib/mysql/mysql2.sock

    cPanel, via the cpservd process, would "correct" this by creating a symlink at the default socket path:

    (whm)$ ls -al /var/lib/mysql/mysql.sock
    lrwxrwxrwx. 1 root root 34 Jul  4 12:25 /var/lib/mysql/mysql.sock -> ../../../var/lib/mysql/mysql2.sock

    To stop cpservd from automatically re-creating this link (cPanel has a term for this behaviour: "automagically"), we have to disable MySQL monitoring by going to WHM -> Service Manager (we are not going to use the local MySQL anyway) and unchecking the "Monitor" checkbox for MySQL, as shown in the screenshot below:

    Save the changes in WHM. It's now safe to remove the default socket file and create a symlink to ProxySQL socket file with the following command:

    (whm)$ ln -s /tmp/proxysql.sock /var/lib/mysql/mysql.sock

    Verify that the MySQL socket file is now redirected to the ProxySQL socket file:

    (whm)$ ls -al /var/lib/mysql/mysql.sock
    lrwxrwxrwx. 1 root root 18 Jul  3 12:47 /var/lib/mysql/mysql.sock -> /tmp/proxysql.sock

    We also need to change the default login credentials inside /root/.my.cnf as follows:

    (whm)$ cat ~/.my.cnf
    [client]
    #password="T<y4ar&cgjIu"
    user=root
    password='M6sdk1y3PPk@2'
    socket=/var/lib/mysql/mysql.sock

    A bit of explanation: the first line, which we commented out, is the MySQL root password generated by cPanel for the local MySQL server. We are not going to use it, hence the '#' at the beginning of the line. We then added the MySQL root password for our MySQL replication setup and the UNIX socket path, which is now symlinked to the ProxySQL socket file.

    At this point, on the WHM server you should be able to access our MySQL replication cluster as root user by simply typing "mysql", for example:

    (whm)$ mysql
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 239
    Server version: 5.5.30 (ProxySQL)

    Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql>

    Notice the server version is 5.5.30 (ProxySQL). If you can connect as above, we can configure the integration part as described in the next section.

    WHM/cPanel Integration

    WHM supports a number of database servers, namely MySQL 5.7, MariaDB 10.2 and MariaDB 10.3. Since WHM now only sees ProxySQL, which is detected as version 5.5.30 (as stated above), WHM will complain about an unsupported MySQL version. You can go to WHM -> SQL Services -> Manage MySQL Profiles and click the Validate button. You should get a red toaster notification in the top-right corner telling you about this error.

    Therefore, we have to change the MySQL version in ProxySQL to the same version as our MySQL replication cluster. You can get this information by running the following statement on the master server:

    mysql> SELECT @@version;
    +------------+
    | @@version  |
    +------------+
    | 5.7.26-log |
    +------------+

    Then, log in to the ProxySQL admin console to change the mysql-server_version variable:

    (whm)$ mysql -uproxysql-admin -p -h192.168.0.16 -P6032

    Use the SET statement as below:

    mysql> SET mysql-server_version = '5.7.26';

    Then load the variable into runtime and save it into disk to make it persistent:

    mysql> LOAD MYSQL VARIABLES TO RUNTIME;
    mysql> SAVE MYSQL VARIABLES TO DISK;

    Finally, verify the version that ProxySQL will report:

    mysql> SHOW VARIABLES LIKE 'mysql-server_version';
    +----------------------+--------+
    | Variable_name        | Value  |
    +----------------------+--------+
    | mysql-server_version | 5.7.26 |
    +----------------------+--------+

    If you connect to MySQL again by running the "mysql" command, you should now see "Server version: 5.7.26 (ProxySQL)" in the terminal.

    Now we can update the MySQL root password under WHM -> SQL Services -> Manage MySQL Profiles. Edit the localhost profile by changing the Password field at the bottom to the MySQL root password of our replication cluster. Click the Save button once done. We can then click "Validate" to verify that WHM can access our MySQL replication cluster correctly via the ProxySQL service. You should get the following green toaster notification in the top-right corner:

    If you get the green toaster notification, we can proceed to integrate ProxySQL via cPanel hook.

    ProxySQL Integration via cPanel Hook

    ProxySQL, as the middleman between WHM and the MySQL replication cluster, needs to have a username and password for every MySQL user that passes through it. With the current architecture, if one creates a user via the control panel (WHM via account creation, or cPanel via the MySQL Database wizard), WHM will automatically create the user directly in our MySQL replication cluster using root@localhost (which was imported into ProxySQL beforehand). However, the same database user is not automatically added to the ProxySQL mysql_users table.

    From the end-user perspective, this would not work, because all localhost connections at this point must pass through ProxySQL. We need a way to integrate cPanel with ProxySQL, whereby for any MySQL user-related operation performed by WHM or cPanel, ProxySQL is notified and performs the necessary actions to add/remove/update its internal mysql_users table.

    The best way to automate and integrate these components is by using the cPanel standardized hook system. Standardized hooks trigger applications when cPanel & WHM performs an action. Use this system to execute custom code (hook action code) to customize how cPanel & WHM functions in specific scenarios (hookable events).

    Firstly, create a Perl module file called ProxysqlHook.pm under the /usr/local/cpanel directory:

    $ touch /usr/local/cpanel/ProxysqlHook.pm

    Then, copy and paste the lines from here. For more info, check out the GitHub repository ProxySQL cPanel Hook.

    Configure the ProxySQL admin interface on lines 16 to 19:

    my $proxysql_admin_host = '192.168.0.16';
    my $proxysql_admin_port = '6032';
    my $proxysql_admin_user = 'proxysql-admin';
    my $proxysql_admin_pass = 'mys3cr3t';

    Now that the hook is in place, we need to register it with the cPanel hook system:

    (whm)$ /usr/local/cpanel/bin/manage_hooks add module ProxysqlHook
    info [manage_hooks] **** Reading ProxySQL information: Host: 192.168.0.16, Port: 6032, User: proxysql-admin *****
    Added hook for Whostmgr::Accounts::Create to hooks registry
    Added hook for Whostmgr::Accounts::Remove to hooks registry
    Added hook for Cpanel::UAPI::Mysql::create_user to hooks registry
    Added hook for Cpanel::Api2::MySQLFE::createdbuser to hooks registry
    Added hook for Cpanel::UAPI::Mysql::delete_user to hooks registry
    Added hook for Cpanel::Api2::MySQLFE::deletedbuser to hooks registry
    Added hook for Cpanel::UAPI::Mysql::set_privileges_on_database to hooks registry
    Added hook for Cpanel::Api2::MySQLFE::setdbuserprivileges to hooks registry
    Added hook for Cpanel::UAPI::Mysql::rename_user to hooks registry
    Added hook for Cpanel::UAPI::Mysql::set_password to hooks registry

    From the output above, this module hooks into a number of cPanel and WHM events:

    • Whostmgr::Accounts::Create - WHM -> Account Functions -> Create a New Account
    • Whostmgr::Accounts::Remove - WHM -> Account Functions -> Terminate an Account
    • Cpanel::UAPI::Mysql::create_user - cPanel -> Databases -> MySQL Databases -> Add New User 
    • Cpanel::Api2::MySQLFE::createdbuser - cPanel -> Databases -> MySQL Databases -> Add New User (required for Softaculous integration).
    • Cpanel::UAPI::Mysql::delete_user - cPanel -> Databases -> MySQL Databases -> Delete User
    • Cpanel::Api2::MySQLFE::deletedbuser - cPanel -> Databases -> MySQL Databases -> Delete User (required for Softaculous integration).
    • Cpanel::UAPI::Mysql::set_privileges_on_database - cPanel -> Databases -> MySQL Databases -> Add User To Database
    • Cpanel::Api2::MySQLFE::setdbuserprivileges - cPanel -> Databases -> MySQL Databases -> Add User To Database (required for Softaculous integration).
    • Cpanel::UAPI::Mysql::rename_user - cPanel -> Databases -> MySQL Databases -> Rename User
    • Cpanel::UAPI::Mysql::set_password - cPanel -> Databases -> MySQL Databases -> Change Password

    If one of the events above is triggered, the module executes the necessary actions to sync up the mysql_users table in ProxySQL. It performs these operations via the ProxySQL admin interface running on port 6032 on the WHM server. Thus, it's vital to specify the correct credentials for the ProxySQL admin user to make sure all users are synced with ProxySQL correctly.

    Take note that this module, ProxysqlHook.pm, has never been tested in a real hosting environment (with many accounts and many third-party plugins) and obviously does not cover all MySQL-related events within cPanel. We have tested it with the Softaculous free edition and it worked well via the cPanel API2 hooks. Some further modification might be required for full automation.

    That's it for now. In the next part, we will look into the post-deployment operations and what we could gain with our highly available MySQL server solution for our hosting servers if compared to standard standalone MySQL setup.

    Tags:  cpanel whm replication MySQL high availability

    SQL SELECT INTO Statement Tutorial With Example

    SQL SELECT INTO Statement Tutorial With Example is today's topic. The SQL SELECT INTO statement is used to create a new table from an existing table by copying the existing table's contents into the new table. The critical point to note here is that the column names and data types of the copied columns carry over to the new table. See the following syntax.

    SELECT column(s) INTO new_table FROM old_table_name WHERE (condition);

    #PARAMETERS
    1. Column(s): Name of the columns which we want to copy.
    2. INTO: This is a keyword which is used to perform the copy operation.
    3. New_table: Name of the new table which will be created.
    4. Old_table_name: Name of the old table from where we are copying.
    5. Where(condition): This is used for imposing conditions into the select statement.
    #SQL SELECT INTO Statement

    The SELECT INTO statement copies the data from one table into the new table. You can also see the following syntax.

    SELECT * INTO newtable [IN externaldb] FROM oldtable WHERE condition;

    NOTE:

    When using the SELECT INTO statement, the new_table should not already exist. If it does, the user must drop it first; otherwise, an error is raised.

    Let’s clear the above syntax with example.

    Consider table: (Students).

    Roll    Name       City
    1       Rohit      Patna
    2       Shouvik    Jalandhar
    3       Shubh      Kolkata
    4       Karan      Allahabad
    5       Shivam     Palampur
    6       Karan      Dhulian

     

    See the following query.

    #QUERY 1

    Copying all the contents to a new table.

    Select * INTO new_students from students;

    In this query, all the contents of the students table will be copied to the new_students table.
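    As a side note for MySQL readers: MySQL itself does not support SELECT ... INTO a new table; it spells the same operation as CREATE TABLE ... AS SELECT, and so does SQLite. Below is a small sketch of that equivalent using the sample students data above (SQLite is used here only so the example is self-contained):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (roll INTEGER, name TEXT, city TEXT)")
    conn.executemany("INSERT INTO students VALUES (?, ?, ?)", [
        (1, "Rohit", "Patna"), (2, "Shouvik", "Jalandhar"), (3, "Shubh", "Kolkata"),
        (4, "Karan", "Allahabad"), (5, "Shivam", "Palampur"), (6, "Karan", "Dhulian"),
    ])
    # SQLite (like MySQL) expresses SELECT INTO as CREATE TABLE ... AS SELECT:
    conn.execute("CREATE TABLE new_students AS SELECT * FROM students")
    rows = conn.execute("SELECT COUNT(*) FROM new_students").fetchone()[0]
    print(rows)  # 6
    ```

    The new table inherits the column names and contents of the source table, exactly as the tutorial describes for SELECT INTO.
    
    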

    #QUERY 2

    Copying a table into another database.

    Select * INTO new_students IN 'sample.mdb' FROM students;

    Here, all the contents of the students table are copied into the new_students table, which is created inside the external database file sample.mdb (an MS Access database).

    #QUERY 3

    For Copying Selected Columns.

    Select Roll, Name INTO new_students from students;

    Here, the Roll and Name columns are copied to the new table new_students.

    #QUERY 4

    For copying specific rows.

    Select Roll, Name INTO new_students from students where name like 'S%';

    In this query, the rows for Shubh, Shouvik and Shivam (names beginning with S) are copied into the new table new_students. We have used the wildcard character %.

    If the WHERE clause in the above query returns no data, an empty new table is still created.
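    That empty-table behavior can be demonstrated with the same CREATE TABLE ... AS equivalent (again sketched in SQLite, since MySQL spells SELECT INTO that way):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (roll INTEGER, name TEXT, city TEXT)")
    conn.execute("INSERT INTO students VALUES (1, 'Rohit', 'Patna')")
    # The WHERE clause matches nothing -- the new table is still created, just empty:
    conn.execute("CREATE TABLE new_students AS SELECT * FROM students WHERE name LIKE 'Z%'")
    count = conn.execute("SELECT COUNT(*) FROM new_students").fetchone()[0]
    print(count)  # 0
    ```
    
    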

    SELECT INTO creates a new table located in the default filegroup.

    #SELECT INTO Statement with IN Clause

    One can use SELECT INTO command in conjunction with the IN clause for copying the table into another database. See the following example.

    SELECT * INTO new_students IN 'Backup.db' FROM students;

    This creates new_students as a copy of the students table inside the Backup.db database file.

    #SELECT INTO Statement: Copy Selected Columns

    If you want to copy only a few columns of the table, here is an example.

    SELECT Roll, Name INTO new_students FROM students;

    #SELECT INTO Statement: Select Specific Rows

    You can use the WHERE clause to select specific rows of a table. Suppose, in the above example, one wants to display the name and city of students where city = 'Patna'.

    #Output

    The following query copies the name and city of students from Patna into the new table:

    SELECT name, city INTO new_students FROM students WHERE city='Patna';

    Finally, SQL SELECT INTO Statement Tutorial With Example is over.

    The post SQL SELECT INTO Statement Tutorial With Example appeared first on AppDividend.

    Configure Alerts on Deadlock Events

    VividCortex can now detect deadlocks in MySQL, and generate events that you can use for alerting and notification. It's normal for an ACID transactional database to encounter deadlocks occasionally, but too many deadlocks could signal a variety of problems from SQL, to application behavior, to indexing and more. To learn more about deadlocks, lock waits, and the difference between them, read this article.

    Fixing MySQL 1045 Error: Access Denied

    During our work in support, we see this again and again: “I try to connect to MySQL and am getting a 1045 error”, and most times it comes accompanied with “…but I am sure my user and password are OK”.  So we decided it was worth showing other reasons this error may occur.

    The MySQL 1045 Access Denied error triggers in the following cases:

    1) Connecting to the wrong host:

    [engineer@percona]# mysql -u root -psekret
    mysql: [Warning] Using a password on the command line interface can be insecure.
    ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    If you do not specify the host to connect to (with the -h flag), the MySQL client tries to connect to the localhost instance, while you may actually be trying to connect to another host/port instance.

    Fix: Double check if you are trying to connect to localhost, or be sure to specify host and port if it’s not localhost:

    [engineer@percona]# mysql -u root -psekret -h <IP> -P 3306

     

    2) User does not exist:

    [engineer@percona]# mysql -u nonexistant -psekret -h localhost
    mysql: [Warning] Using a password on the command line interface can be insecure.
    ERROR 1045 (28000): Access denied for user 'nonexistant'@'localhost' (using password: YES)

    Fix: Double check if the user exists:

    mysql> SELECT User FROM mysql.user WHERE User='nonexistant';
    Empty set (0.00 sec)

    If the user does not exist, create a new user:

    mysql> CREATE USER 'nonexistant'@'localhost' IDENTIFIED BY 'sekret';
    Query OK, 0 rows affected (0.00 sec)

     

    3) User exists but client host does not have permission to connect:

    [engineer@percona]# mysql -u nonexistant -psekret
    mysql: [Warning] Using a password on the command line interface can be insecure.
    ERROR 1045 (28000): Access denied for user 'nonexistant'@'localhost' (using password: YES)

    Fix: You can check which user/host combinations MySQL allows connections for with the following query:

    mysql> SELECT Host, User FROM mysql.user WHERE User='nonexistant';
    +-------------+-------------+
    | Host        | User        |
    +-------------+-------------+
    | 192.168.0.1 | nonexistant |
    +-------------+-------------+
    1 row in set (0.00 sec)

    If you need to check from which IP the client is connecting, you can use the following Linux commands for server IP:

    [engineer@percona]# ip address | grep inet | grep -v inet6
        inet 127.0.0.1/8 scope host lo
        inet 192.168.0.20/24 brd 192.168.0.255 scope global dynamic wlp58s0

    or for public IP:

    [engineer@percona]# dig +short myip.opendns.com @resolver1.opendns.com
    177.128.214.181

    You can then create a user with correct Host (client IP), or with ‘%’ (wildcard) to match any possible IP:

    mysql> CREATE USER 'nonexistant'@'%' IDENTIFIED BY 'sekret';
    Query OK, 0 rows affected (0.00 sec)
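    As an aside, the way MySQL matches a client against the Host column can be sketched in a few lines: the pattern is matched like a SQL LIKE expression, where % matches any run of characters and _ matches a single character. This is a rough illustration only (it ignores netmask notation and DNS name resolution):

    ```python
    import re

    def host_matches(pattern, client_host):
        # MySQL matches the Host column like a SQL LIKE pattern:
        # '%' = any run of characters, '_' = any single character.
        regex = "".join(
            ".*" if c == "%" else "." if c == "_" else re.escape(c)
            for c in pattern
        )
        return re.fullmatch(regex, client_host) is not None

    print(host_matches("192.168.0.1", "192.168.0.20"))  # False
    print(host_matches("192.168.0.%", "192.168.0.20"))  # True
    print(host_matches("%", "10.1.2.3"))                # True
    ```

    This is why the account 'nonexistant'@'192.168.0.1' rejects a client coming from 192.168.0.20, while 'nonexistant'@'%' accepts it.
    
    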

     

    4) The password is wrong, or the user forgot their password:

    [engineer@percona]# mysql -u nonexistant -pforgotten
    mysql: [Warning] Using a password on the command line interface can be insecure.
    ERROR 1045 (28000): Access denied for user 'nonexistant'@'localhost' (using password: YES)

    Fix: Check and/or reset password:

    You cannot read user passwords in plain text from MySQL, as the password hash is used for authentication, but you can compare hash strings with the PASSWORD() function:

    mysql> SELECT Host, User, authentication_string, PASSWORD('forgotten') FROM mysql.user WHERE User='nonexistant';
    +-------------+-------------+-------------------------------------------+-------------------------------------------+
    | Host        | User        | authentication_string                     | PASSWORD('forgotten')                     |
    +-------------+-------------+-------------------------------------------+-------------------------------------------+
    | 192.168.0.1 | nonexistant | *AF9E01EA8519CE58E3739F4034EFD3D6B4CA6324 | *70F9DD10B4688C7F12E8ED6C26C6ABBD9D9C7A41 |
    | %           | nonexistant | *AF9E01EA8519CE58E3739F4034EFD3D6B4CA6324 | *70F9DD10B4688C7F12E8ED6C26C6ABBD9D9C7A41 |
    +-------------+-------------+-------------------------------------------+-------------------------------------------+
    2 rows in set, 1 warning (0.00 sec)

    We can see that the PASSWORD('forgotten') hash does not match the authentication_string column, which means the password string 'forgotten' is not the correct password to log in. Also, if the user has multiple host entries (with different passwords), they may be trying to connect using the password for the wrong host.
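    The hash scheme behind mysql_native_password (what PASSWORD() produced in pre-8.0 MySQL) is simply a double SHA-1, so a comparison like the one above can be sketched offline. This is an illustration of the algorithm, not a substitute for checking on the server:

    ```python
    import hashlib

    def mysql_native_password(plaintext):
        # mysql_native_password hash: '*' + uppercase hex of SHA1(SHA1(password))
        stage1 = hashlib.sha1(plaintext.encode()).digest()
        return "*" + hashlib.sha1(stage1).hexdigest().upper()

    h = mysql_native_password("forgotten")
    print(len(h), h[0])  # 41 *
    ```

    If the computed hash for a candidate password does not equal the account's authentication_string, that candidate is not the stored password.
    
    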

    In case you need to override the password you can execute the following query:

    mysql> set password for 'nonexistant'@'%' = 'hello$!world';
    Empty set (0.00 sec)

     

    5) Special characters in the password being interpreted by Bash:

    [engineer@percona]# mysql -u nonexistant -phello$!world
    mysql: [Warning] Using a password on the command line interface can be insecure.
    ERROR 1045 (28000): Access denied for user 'nonexistant'@'localhost' (using password: YES)

    Fix: Prevent bash from interpreting special characters by wrapping password in single quotes:

    [engineer@percona]# mysql -u nonexistant -p'hello$!world'
    mysql: [Warning] Using a password on the command line interface can be insecure
    ...
    mysql>

     

    6) SSL is required but the client is not using it:

    mysql> create user 'ssluser'@'%' identified by 'sekret';
    Query OK, 0 rows affected (0.00 sec)

    mysql> alter user 'ssluser'@'%' require ssl;
    Query OK, 0 rows affected (0.00 sec)
    ...
    [engineer@percona]# mysql -u ssluser -psekret
    mysql: [Warning] Using a password on the command line interface can be insecure.
    ERROR 1045 (28000): Access denied for user 'ssluser'@'localhost' (using password: YES)

    Fix: Add the --ssl-mode flag (the --ssl flag is deprecated but can be used too):

    [engineer@percona]# mysql -u ssluser -psekret --ssl-mode=REQUIRED
    ...
    mysql>

    You can read more in-depth on how to configure SSL in MySQL in the blog posts "Setting up MySQL SSL and Secure Connections" and "SSL in 5.6 and 5.7".

    7) PAM backend not working:

    mysql> CREATE USER 'ap_user'@'%' IDENTIFIED WITH auth_pam;
    Query OK, 0 rows affected (0.00 sec)
    ...
    [engineer@percona]# mysql -u ap_user -pap_user_pass
    mysql: [Warning] Using a password on the command line interface can be insecure.
    ERROR 1045 (28000): Access denied for user 'ap_user'@'localhost' (using password: YES)

    Fix: Double check user/password is correct for the user to authenticate with the PAM currently being used.

    In my example, I am using Linux shadow files for authentication. In order to check if the user exists:

    [engineer@percona]# cat /etc/passwd | grep ap_user
    ap_user:x:1000:1000::/home/ap_user:/bin/bash

    To reset password:

    [engineer@percona]# sudo passwd ap_user
    Changing password for user ap_user.
    New password:

    Finally, if you are genuinely locked out and need to circumvent the authentication mechanisms in order to regain access to the database, here are a few simple steps to do so:

    1. Stop the instance
    2. Edit my.cnf and add skip-grant-tables under [mysqld] (this will allow access to MySQL without prompting for a password). On MySQL 8.0, skip-networking is automatically enabled along with it (MySQL is then only accessible from localhost), but for previous MySQL versions it's suggested to also add skip-networking under [mysqld]
    3. Start the instance
    4. Access with root user (mysql -uroot -hlocalhost); 
    5. Issue the necessary GRANT/CREATE USER/SET PASSWORD statements to correct the issue (likely setting a known root password is the right thing: SET PASSWORD FOR 'root'@'localhost' = 'S0vrySekr3t'). With skip-grant-tables the grants are not read into memory, so GRANT/CREATE/SET PASSWORD statements won't work straight away; first execute "FLUSH PRIVILEGES;" before executing any GRANT/CREATE/SET PASSWORD statement. Alternatively, you can modify the mysql.user table directly with a query that sets the password for a given User and Host, such as "UPDATE mysql.user SET authentication_string=PASSWORD('newpwd') WHERE User='root' AND Host='localhost';"
    6. Stop the instance
    7. Edit my.cnf and remove skip-grant-tables and skip-networking
    8. Start MySQL again
    9. You should be able to login with root from the localhost and do any other necessary corrective operations with root user.
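    For reference, the temporary recovery settings from steps 2 and 7 look like this in my.cnf (a sketch; remember to remove both lines again in step 7):

    ```ini
    [mysqld]
    # Temporary, for password recovery only -- remove after recovery
    skip-grant-tables
    skip-networking
    ```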

    SQL Except Clause Example | Except Statement In SQL

    SQL Except Clause Example | Except Statement In SQL is today’s topic. The SQL EXCEPT operator combines two SELECT statements and returns the rows that are present in the first SELECT statement but not in the second. The clause acts like the subtract operation we perform on sets in Venn diagrams.

    SQL Except Clause Example

    In SQL, EXCEPT returns those tuples that are returned by the first SELECT operation, and not returned by the second SELECT operation.

    #Pictorial Representation

    [Venn diagram: the EXCEPT result is the region of Query1 that does not overlap Query2.]

    As the diagram shows, no common records are displayed between the two queries; only the rows unique to Query1 are returned.

    Each SELECT statement within an EXCEPT query must have the same number of fields in its result set, with similar data types.

    #Syntax

    Select column(s) from table_name1 where(condition)
    EXCEPT
    Select column(s) from table_name2 where(condition);

    #PARAMETERS
    1. Column(s) represents the names of the columns.
    2. Table_name1 and Table_name2 are the names of the tables.
    3. Where(condition) is the condition imposed on the select statements.
    4. EXCEPT is the operator, which acts like a minus operator on the select statements.

    Let’s illustrate the above operation with examples.

    Consider Tables: (Students) and (Course)

    STUDENTS:

    Roll    Name     City
    1       Rohit    Patna
    2       Shouvik  Jalandhar
    3       Shubh    Kolkata
    4       Karan    Allahabad
    5       Shivam   Palampur
    6       Karan    Dhulian

     

    Course:

    Name     Course
    Shubh    Data Science
    Rohit    Data Science
    Shivam   Android Development

     

    #QUERY

    Select name from Students EXCEPT Select name from Course;

    #OUTPUT

    NAME
    Shouvik
    Karan

     

    #EXPLANATION

    The above query returns the names of students who have not registered for any course. So Shouvik and Karan appear in the result set, because they have not enrolled in any of the courses.

    NOTE:

    You can see that in the above query Karan was displayed only once. To retain duplicates, we have to use the EXCEPT ALL clause.

    The corresponding columns in each of the SELECT statements must have similar data types.

    The EXCEPT operator returns all records from the first SELECT statement that are not in the second SELECT statement.

    See the following query.

    Select name from Students EXCEPT ALL Select name from Course;

    #OUTPUT

    NAME
    Shouvik
    Karan
    Karan

     

    So, you can see that Karan has been displayed twice.

    #DIFFERENCE BETWEEN EXCEPT AND NOT IN CLAUSE

    The EXCEPT clause removes duplicates from the result set, whereas the NOT IN clause does not remove duplicates from the result set.
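    The duplicate-handling difference is easy to see with plain set and list operations, using the article's sample data (a sketch of the semantics, not actual SQL execution):

    ```python
    students = ["Rohit", "Shouvik", "Shubh", "Karan", "Shivam", "Karan"]
    course   = ["Shubh", "Rohit", "Shivam"]

    # EXCEPT: set difference, duplicates removed
    except_result = sorted(set(students) - set(course))

    # NOT IN (and EXCEPT ALL): duplicates survive
    not_in_result = [name for name in students if name not in course]

    print(except_result)  # ['Karan', 'Shouvik']
    print(not_in_result)  # ['Shouvik', 'Karan', 'Karan']
    ```

    EXCEPT yields Karan once; NOT IN (like EXCEPT ALL) yields Karan twice, matching both of his rows.
    
    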

    NOTE:

    MySQL does not support the EXCEPT clause, so it is better to use it in SQL Server.

    Finally, SQL Except Clause Example | Except Statement In SQL is over.

    The post SQL Except Clause Example | Except Statement In SQL appeared first on AppDividend.

    Galera Cluster at Percona Live Austin, DataOps.Barcelona and where next

    It is July now, and summer is upon us. The team at Codership is busy making Galera Cluster 4 for MySQL, and we are fresh off the heels of celebrating the launch of Galera Cluster 4 with MariaDB Server 10.4.

    At the end of May, Galera Cluster had a booth and several talks at Percona Live Austin. This was the first time the event happened there, and we thoroughly enjoyed it. The key takeaway from that event would be Seppo Jaakola’s talk, Galera Cluster New Features. It covered the then-upcoming MariaDB Server 10.4 release with Galera Cluster 4, went in-depth into how streaming replication works (including superb diagrams on fragment applying), covered the new wsrep information in the mysql schema, and showed how to perform a rolling upgrade from MariaDB Server 10.3 to MariaDB Server 10.4.

    We fielded many questions at the booth, and based on them there are some things we want to make extremely clear:

    • Yes, there will be a Galera Cluster 4 for MySQL 8 — it is coming after summer 2019, with a lot of features that are slated for MariaDB Server 10.5
    • Codership, the makers and experts of Galera Cluster, do in fact run a services business, and you can purchase support subscriptions, consulting, and even training.
    • Galera replication in fact pre-dates group replication, in terms of release and production use

    Our impression of Percona Live Austin was that the venue was a marked improvement over the usual Santa Clara: it was good to get morning walks along Lady Bird Lake, and it was relatively easy to cross over to downtown Austin and enjoy some good BBQ and beverages. Many took note of how they had not seen Galera Cluster present for a few years, and how glad they were that we were back. There were clearly quite a lot of PostgreSQL users at the conference, and their interest was definitely piqued, as this isn't something they have easily available in their land.

    In the third week of June, Galera Cluster had a booth and a keynote address at DataOps.Barcelona. This was the second time this event has happened at the World Trade Center in Barcelona, and while smaller than the abovementioned Austin event, it gives its American cousin a run for its money! The keynote address was similar to the talk above (so no link required), with the only difference being that MariaDB Server 10.4 had been released the day before, so it was an opportunity for people to try this right after the talk!

    There were many meaningful conversations had at the booth, potential partnerships forged, and an enjoyable evening reception by the water. Food was also a highlight, as was the extremely large dinner on Friday night where many speakers visited.

    At both events, we ran a raffle and gave away a pair of Bose noise cancelling headphones. Expect similar at any event that we participate in, in the future. 

    Where can you see us next? Very likely we will be at Percona Live Europe Amsterdam. We are very excited that it is moving back to Amsterdam, and hope that the Schiphol Airport location does not in any way hinder you from attending. In the meantime, feel free to contact us if you would like to discuss your High Availability solutions.

    An Overview of the MySQL Database Management System

    It is wonderful that database-backed applications on the internet have the strong support of a free management system like MySQL. MySQL is known for its high speed, stability and ease of use, and it can run on every operating system. Why is MySQL so magically attractive? The article below gives you an overview of the MySQL database management system.

    What is the MySQL database management system? MySQL is one of today's most popular open-source database management systems.

    A database management system is a program used to manage a database, where a database is a clearly arranged, neatly organized store of information. It helps you access data in the most convenient and fastest way. Because it supports most programming languages, MySQL is the most popular free, open-source database management system in the world, and developers currently favor it heavily during application development.

    MySQL is a high-speed, stable database that runs on many operating systems and provides a large set of very powerful utility functions. In particular, MySQL is completely free, so users can download it from the official home page, and there are builds for many different operating systems. MySQL is commonly used alongside PHP, Perl and many other languages, serving as the data store behind websites written in PHP or Perl frameworks.

    Characteristics of the MySQL database management system
    • MySQL is a server-based database administration system, broadly similar to Microsoft's SQL Server.
    • MySQL manages data through databases, and each database has its own relational tables holding its data.
    • MySQL has its own access-management mechanism, so each user can manage one or several databases at once. Every user has a username and password for connecting to and querying a database; when querying a MySQL database, you must supply an account and password with rights on that database.
    The benefits of using the MySQL database management system: MySQL keeps improving and brings many benefits to its users.

    Released to users completely free of charge, MySQL brings many practical benefits:

    High performance

    The MySQL database management system is heavily favored by developers during application development, and they rate its performance very highly. With its storage-engine architecture, MySQL suits specialized applications, especially high-volume websites serving millions of customers, and high-speed transaction-processing systems; MySQL can meet the demanding processing requirements of such systems. In particular, with high-speed load utilities, advanced processing mechanisms and memory caches, MySQL offers all the needed features, making it an excellent solution even for the most demanding enterprise systems today.

    Strong transaction support

    Besides high performance, MySQL offers one of the best transactional database engines on the market. Its features include unlimited row-level locking, complete ACID transaction support, distributed transaction capability, and multi-version transaction support in which readers never block writers and vice versa. With MySQL, data is protected for as long as the server is in service. Transaction isolation levels are specialized, and deadlocks are detected immediately.

    Very fast: MySQL's data processing speed is very fast.

    Developers generally agree that MySQL is the fastest database. It is a natural home for websites that exchange data constantly, because it has a high-speed processing engine, extremely fast data insertion, and strong support for specialized web functions. These capabilities are also used in data-warehousing environments, where MySQL scales up to terabytes on single servers. Other features such as hash indexes, main-memory tables, archive tables and packed B-trees reduce storage requirements by up to 80%. Given its speed, it is hard to deny that MySQL is the best choice for both web applications and today's enterprise applications.

    Easy to use

    Besides being known for its high speed and stability, MySQL is truly a very simple, very easy-to-use database system, less complex to install and administer than larger systems, and it can run on every operating system.

    Query-language support

    MySQL supports SQL, the language of choice for all modern database systems. Users can also access MySQL through applications that support ODBC (a database communication protocol developed by Microsoft). Many clients can access the server at the same time, and each client can use multiple databases simultaneously. You can also access MySQL interactively through various interfaces that let you enter queries and view results: command-line clients, web browsers, and so on.

    High connectivity and security: MySQL offers high connectivity and strong security.

    The most important thing for every business is absolute data security, and MySQL integrates thoroughly secure protection features. MySQL is fully networked; its databases can be accessed from anywhere on the internet, so you can share your data with anyone, anytime, anywhere you want. But MySQL controls access, so people who should not see your data cannot see it. MySQL provides strong techniques for verifying database access: only authenticated users can reach a database. In addition, SSH and SSL are supported to ensure safe, secure connections. The backup and recovery utilities provided by MySQL and by third-party vendors allow logical and physical backups as well as full point-in-time recovery.

    High flexibility

    MySQL can run on all operating systems and on any hardware, from home PCs to servers. The MySQL database server is very flexible: it has the capacity to handle deeply embedded applications with a 1 MB footprint, and to run massive data warehouses holding terabytes of information. MySQL's open-source nature allows customization as desired, adding whatever requirements suit your database server.

    Free open source with 24/7 support

    Many businesses worry about entrusting their entire software stack to open source, because professional support and security can be hard to find. With MySQL, this difficulty disappears: every commitment is clear, and plenty of support resources are readily available. The MySQL community is very responsive; questions on the mailing list are often answered within minutes. If a bug occurs, the developers provide a fix as quickly as possible, and that fix is immediately available on the internet.

    Lowest total cost of ownership

    The MySQL database management system is free for most uses within an organization, so projects and businesses using MySQL see very significant cost savings. MySQL users also do not lose much time to repairs or downtime.

    Nothing is better than having the strong support of the MySQL database management system behind your business. Opportunities for growth will lie right in your hands: simple, easy, highly effective.

    The post Tổng quan về hệ quản trị cơ sở dữ liệu MySQL (An Overview of the MySQL Database Management System) appeared first on DBAhire.

    The Fast Way to Import CSV Data Into a Tungsten Cluster

    The Question

    Recently, a customer asked us:

    "After importing a new section of user data into our Tungsten cluster, we are seeing perpetually rising replication lag. We are sitting at 8.5 hrs estimated convergence time after importing around 50 million rows, and this lag is climbing continuously. We are currently migrating some of our users from a NoSQL database into our Tungsten cluster. We have a procedure to write out a bunch of CSV files after translating our old data into columns, and then we recursively send them to the write master using the mysql client. Specifically, our import SQL is doing LOAD DATA LOCAL INFILE and then reading in a large CSV file to do the import. We have 20k records per CSV file and we have 12 workers which insert them in parallel."

    Simple Overview The Skinny

    In cases like this, the slaves are struggling because the database is unable to keep up with the apply stage (q-to-dbms).

    Technical Explanation A Deep Dive

    Each CSV file will equate to a transaction, which means one sequence number will contain 20,000 rows.

    Naturally the load is quick on the master, because LOAD DATA INFILE is a fast and efficient process, and especially when you have 12 parallel threads running.

    However, the replicators are not running in parallel so it is going to be the same as a single load of 20,000 rows at a time.

    Reducing the number of records per CSV file and the number of parallel workers may reduce the latency, but with 50 million records there is going to be significant overhead processing this downstream on the slaves.
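    If you do shrink the batches, the CSV-writing side only needs a simple chunker. Here is a hedged sketch (row contents and batch sizes are illustrative, not from the customer's setup):

    ```python
    def split_into_batches(rows, batch_size):
        """Yield successive batches of at most batch_size rows each.
        Smaller batches mean smaller transactions (and smaller THL
        fragments) for the downstream replicators to apply."""
        for start in range(0, len(rows), batch_size):
            yield rows[start:start + batch_size]

    rows = [(i, "user%d" % i) for i in range(20000)]
    batches = list(split_into_batches(rows, 2000))
    print(len(batches), len(batches[0]))  # 10 2000
    ```

    Each batch would then be written to its own CSV file before being handed to the import workers.
    
    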

    LOAD DATA is treated as unsafe and when binlog_format=MIXED the statement is logged in ROW-based format.

    So, both ROW and MIXED = ROW, which means that the slower method is used.

    The Answer Session-based Success!

    For LOAD DATA INFILE operations, set the binlog_format to STATEMENT for that specific LOAD DATA session only – this will generate less THL traffic to be processed.

    It is critical that this setting is applied only at session level for the LOAD DATA process, not for anything else.
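    A sketch of what that session-scoped change looks like (table and file names here are placeholders, not taken from the customer's setup):

    ```sql
    -- In the import session only; other sessions keep their binlog_format
    SET SESSION binlog_format = 'STATEMENT';
    LOAD DATA LOCAL INFILE '/tmp/users_batch_00001.csv'
        INTO TABLE users
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n';
    SET SESSION binlog_format = 'MIXED';  -- restore before reusing the connection
    ```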

    Select a slave, and add the following property to your ini file:

    optimize-row-events=false

    Turning this optimize parameter off will create a single INSERT per row, which may not sound like it would be quicker but it means the target database is processing one insert at a time rather than 20,000.

    As it stands, your 20,000-record CSV file is most likely getting converted into a single INSERT for all 20,000 rows. In some cases having optimize-row-events enabled is good, as it can batch multiple inserts into a single statement for quicker processing; however, 20,000 records per insert may be causing too much stress on the target database, hence it's slower to apply.

    Also, does the target table have indexes? Huge data loads with a heavily indexed table will increase the time taken for the DB to process these rows.

    Summary The Wrap-Up

    In this blog post we discussed importing CSV data into a Tungsten Cluster.

    To learn about Continuent solutions in general, check out https://www.continuent.com/solutions

    The Library Please read the docs!

    For more information about monitoring Tungsten clusters, please visit https://docs.continuent.com.

    Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

    For more information, please visit https://www.continuent.com/solutions

    Want to learn more or run a POC? Contact us.

    Troubleshooting MySQL InnoDB Cluster GTID Inconsistency

    This tutorial shows how to manually fix a GTID inconsistency, but doing so can be VERY RISKY and can mess up the InnoDB Cluster. It purely serves as a tutorial on how to troubleshoot and identify potential issues.

    Environment
    MySQL InnoDB Cluster being setup on Single Machine with 3 Nodes
    Node 1 : Port 3306
    Node 2 : Port 3316
    Node 3 : Port 3326





    The InnoDB Cluster member status is as follows:

    [screenshot: cluster status output]
    Retrieving the GTID executed on each member with "SELECT @@gtid_executed;":

    [screenshot: @@gtid_executed output from each node]
    The 3 sets of GTIDs are the same and the InnoDB Cluster of 3 nodes is running properly.

    Let's simulate a breakdown of a server (kill the instance with port 3326):
    1.  Change the persisted variables group_replication_start_on_boot = off first
         mysql > set persist  group_replication_start_on_boot = off;
         mysql > restart;

    2. Check the status of the InnoDB Cluster

    [screenshot: cluster status output]
    The node (3326) is down.

    Now  make a FLUSH TABLES on the Server Instance "3326".
    1. SELECT @@GTID_EXECUTED;
    2. FLUSH TABLES;
    3. SELECT @@GTID_EXECUTED;
    4. SHOW VARIABLES LIKE '%READ_ONLY%';

    [screenshot: GTID_EXECUTED before and after FLUSH TABLES]
    It clearly shows the GTID has been changed in this Server "3326".

    Change the "group_replication_start_on_boot=on" so that when restarting the Node 3326, it will automatically rejoin.  However, at this time, the server is failed to REJOIN because the GTID has inconsistency with the InnoDB Cluster.


    Checking the error logs on the nodes, the node 3326 error log shows the following errors:
    2019-07-04T14:34:11.465132Z 0 [ERROR] [MY-011526] [Repl] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: 0e632a11-9e24-11e9-a2ca-0800271b198a:1,
    115217b5-9e24-11e9-8686-0800271b198a:1-17,
    a7a6cc05-9e0f-11e9-ad3e-0800271b198a:1-20034 > Group transactions: 115217b5-9e24-11e9-8686-0800271b198a:1-17,
    a7a6cc05-9e0f-11e9-ad3e-0800271b198a:1-20034'
    2019-07-04T14:34:11.465213Z 0 [ERROR] [MY-011522] [Repl] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'

    It clearly shows the error is due to "This member has more executed transactions than those present in the group".

    To identify the GTID, compare @@GTID_EXECUTED between all nodes; it shows that the following transaction is the extra one:
    Local transactions: 0e632a11-9e24-11e9-a2ca-0800271b198a:1
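    Conceptually, this comparison is what MySQL's GTID_SUBTRACT() function computes. A small sketch that reproduces it for the sets above:

    ```python
    def parse_gtid_set(s):
        """Parse a GTID set like 'uuid:1-17,uuid2:1-3' into {uuid: {txn numbers}}."""
        result = {}
        for part in s.split(","):
            fields = part.strip().split(":")
            uuid, ranges = fields[0], fields[1:]
            txns = result.setdefault(uuid, set())
            for r in ranges:
                lo, _, hi = r.partition("-")
                txns.update(range(int(lo), int(hi or lo) + 1))
        return result

    def gtid_subtract(a, b):
        """GTIDs present in set a but not in set b (what GTID_SUBTRACT() returns)."""
        sa, sb = parse_gtid_set(a), parse_gtid_set(b)
        return {u: t - sb.get(u, set()) for u, t in sa.items() if t - sb.get(u, set())}

    local = ("0e632a11-9e24-11e9-a2ca-0800271b198a:1,"
             "115217b5-9e24-11e9-8686-0800271b198a:1-17,"
             "a7a6cc05-9e0f-11e9-ad3e-0800271b198a:1-20034")
    group = ("115217b5-9e24-11e9-8686-0800271b198a:1-17,"
             "a7a6cc05-9e0f-11e9-ad3e-0800271b198a:1-20034")

    print(gtid_subtract(local, group))
    ```

    On the server itself you would run SELECT GTID_SUBTRACT(local_set, group_set) instead; the sketch just makes the arithmetic visible.
    
    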

    Connect to the server 3326 and deduce which action produced that GTID:
    1.  mysql -uroot -h127.0.0.1 -P3326 -pmysql -e "show master status\G"

     

    2. The current binlog file is mysqllog.000006; we can use the following command to show its events:


    mysql> show binlog events in 'mysqllog.000006';
    +-----------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------+
    | Log_name        | Pos | Event_type     | Server_id | End_log_pos | Info                                                                                                                            |
    +-----------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------+
    | mysqllog.000006 |   4 | Format_desc    |       103 |         124 | Server ver: 8.0.16-commercial, Binlog ver: 4                                                                                    |
    | mysqllog.000006 | 124 | Previous_gtids |       103 |         271 | 0e632a11-9e24-11e9-a2ca-0800271b198a:1,
    115217b5-9e24-11e9-8686-0800271b198a:1-17,
    a7a6cc05-9e0f-11e9-ad3e-0800271b198a:1-20034 |
    +-----------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------+
    2 rows in set (0.00 sec)



    Apart from the Previous_gtids event, this file tells us nothing about the GTID 0e632a11-9e24-11e9-a2ca-0800271b198a:1, so the transaction must be in an earlier binlog file.

    Continue checking backwards through the binlog files - 'mysqllog.000005':

    mysql> show binlog events in 'mysqllog.000005';
    +-----------------+-----+----------------+-----------+-------------+-----------------------------------------------------------------------------------------+
    | Log_name        | Pos | Event_type     | Server_id | End_log_pos | Info                                                                                    |
    +-----------------+-----+----------------+-----------+-------------+-----------------------------------------------------------------------------------------+
    | mysqllog.000005 |   4 | Format_desc    |       103 |         124 | Server ver: 8.0.16-commercial, Binlog ver: 4                                            |
    | mysqllog.000005 | 124 | Previous_gtids |       103 |         231 | 115217b5-9e24-11e9-8686-0800271b198a:1-17,
    a7a6cc05-9e0f-11e9-ad3e-0800271b198a:1-20034 |
    | mysqllog.000005 | 231 | Gtid           |       103 |         304 | SET @@SESSION.GTID_NEXT= '0e632a11-9e24-11e9-a2ca-0800271b198a:1'                       |
    | mysqllog.000005 | 304 | Query          |       103 |         383 | flush tables                                                                            |
    | mysqllog.000005 | 383 | Stop           |       103 |         402 |                                                                                         |
    +-----------------+-----+----------------+-----------+-------------+-----------------------------------------------------------------------------------------+
    5 rows in set (0.00 sec)


    The GTID '0e632a11-9e24-11e9-a2ca-0800271b198a:1' is located here, and its "Query" event is "flush tables".

    The inconsistency is now identified: it was caused by the FLUSH TABLES executed on node 3326 while it was outside the group.




    To recover, since this 'flush tables' is a simple gap, we can fix it by adding this GTID to the existing InnoDB Cluster.

    Log in to node 3306 (the PRIMARY node) and add back the transaction:
    mysql -uroot -h127.0.0.1 -P3306 -pmysql
    mysql> SET @@SESSION.GTID_NEXT= '0e632a11-9e24-11e9-a2ca-0800271b198a:1';
    mysql> flush tables;
    mysql> SET @@SESSION.GTID_NEXT= AUTOMATIC;
    mysql> exit
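    As an alternative to replaying FLUSH TABLES itself, a common technique is to inject an empty transaction under the same GTID on the primary; this marks the GTID as executed in the group without re-running the statement:

        mysql> SET @@SESSION.GTID_NEXT = '0e632a11-9e24-11e9-a2ca-0800271b198a:1';
        mysql> BEGIN;
        mysql> COMMIT;
        mysql> SET @@SESSION.GTID_NEXT = AUTOMATIC;

    For an idempotent statement like FLUSH TABLES either approach works; the empty transaction is the safer choice when you do not want the statement's side effects repeated on the group.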

    Now restart node 3326.
     

    Check the InnoDB Cluster status again - it comes back as NORMAL.


    IMPORTANT: Note that this is a manual fix and it can be VERY RISKY. There is no guarantee it will work, because your situation may differ from this tutorial.

    Enjoy.
