Planet MySQL

MySQL 8.0 Delayed Replication – New Features and Benefits

What is new with MySQL 8.0 Delayed Replication?

Delayed Replication – You can deliberately execute transactions on a slave later than on the master by a specific duration of time. Why would you do that, and for what? Consider this: someone accidentally runs a wrong UPDATE / DELETE on the master and the transaction is committed. How can the DBA roll the database system back to the last known good state? This is where the investment in MySQL delayed slave replication pays off. The default replication delay in MySQL is “0” seconds. To delay the slave by N seconds, use CHANGE MASTER TO MASTER_DELAY = N; transactions received from the master are then not executed until at least N seconds after their commit on the immediate master. We have blogged here about how to set up delayed slave replication in MySQL. In this blog post we explain how MySQL 8.0 advanced the Delayed Slave Replication features.
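As a minimal sketch (assuming an already configured slave and a one-hour delay; the delay value is illustrative), the delay is applied with the replication SQL thread stopped:

STOP SLAVE SQL_THREAD;
CHANGE MASTER TO MASTER_DELAY = 3600;
START SLAVE SQL_THREAD;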

MySQL 8.0 and Delayed Slave Replication

In MySQL 8.0, delayed replication is measured using two timestamps – original_commit_timestamp and immediate_commit_timestamp. They are recorded per GTID of each transaction (instead of per event, as in MySQL 5.7) written to the binary log. These two timestamps are applicable only when your entire replication infrastructure is on MySQL 8.0.1 or above; if either the master or the slave is not using these timestamps, the delayed replication implementation from MySQL 5.7 is used.

  • original_commit_timestamp: The total number of microseconds since epoch when the transaction was written (committed) to the binary log of the original master.
  • immediate_commit_timestamp: The total number of microseconds since epoch when the transaction was written (committed) to the binary log of the immediate master / slave.

The original_commit_timestamp is always the same on all replicas where the transaction is applied. In a typical master-slave replication setup, the original_commit_timestamp of a transaction in the (original) master’s binary log is always the same as its immediate_commit_timestamp. In the slave’s relay log, the original_commit_timestamp and immediate_commit_timestamp of the transaction are the same as in the master’s binary log; whereas in its own binary log, the transaction’s immediate_commit_timestamp corresponds to when the slave committed the transaction.

Monitoring the Delayed Slave Replication

We strongly recommend using the following Performance Schema tables to monitor the replication delay (lag); an example query follows the list:

  • replication_connection_status: The current status of the connection to the master. This Performance Schema table provides information on the last and current transaction the connection thread queued into the relay log.
  • replication_applier_status_by_coordinator: The current status of the coordinator thread, which only displays information when using a multithreaded slave. This Performance Schema table also provides information on the last transaction buffered by the coordinator thread to a worker’s queue, as well as the transaction it is currently buffering.
  • replication_applier_status_by_worker: The current status of the thread(s) applying transactions received from the master. It also provides information about the transactions applied by the applier thread, or by each worker when using a multithreaded slave.
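For instance (a hedged sketch; these timestamp columns are as documented for MySQL 8.0's Performance Schema, but verify them on your version), the per-worker commit timestamps discussed above can be inspected like this:

SELECT WORKER_ID,
       LAST_APPLIED_TRANSACTION,
       LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP,
       LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP
  FROM performance_schema.replication_applier_status_by_worker;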

The following two metrics from the output of SHOW SLAVE STATUS are also helpful for monitoring Delayed Replication:

SQL_Delay – The replication delay, in seconds, as configured using CHANGE MASTER TO MASTER_DELAY=N.

SQL_Remaining_Delay – The number of seconds of the configured delay still remaining, i.e., while Slave_SQL_Running_State shows that the slave is waiting for MASTER_DELAY seconds.

Conclusion

We can never fully avoid human error in database infrastructure operations, but rolling back to the last known good state from a delayed master / slave is the recommended approach in database corruption scenarios. We at MinervaDB strongly recommend delayed masters / slaves for most of our customers so they can roll back quickly in an emergency. Thanks for your comments!

The post MySQL 8.0 Delayed Replication – New Features and Benefits appeared first on MySQL Consulting, Support and Remote DBA Services.

Some queries related to MySQL Roles

MySQL Roles are becoming more and more popular. Therefore, we receive more and more questions related to them.

First, I encourage you to read these two previous posts:

In this post, I will share some queries I find useful when working with MySQL Roles.
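For context, here is a hedged sketch of how such roles are created and assigned in MySQL 8.0 (the role, schema, and user names mirror the sample output below but are otherwise illustrative):

CREATE ROLE 'dbt3_reader';
GRANT SELECT ON dbt3.* TO 'dbt3_reader';
GRANT 'dbt3_reader' TO 'dbt3_user1'@'%';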

Listing the Roles

The first query allows you to list the Roles created on your MySQL Server, whether they are assigned to users, and how many times:

SELECT any_value(User) 'Role Name',
       IF(any_value(from_user) IS NULL, 'No', 'Yes') Active,
       count(to_user) 'Assigned #times'
  FROM mysql.user
  LEFT JOIN mysql.role_edges ON from_user = user
 WHERE account_locked = 'Y'
   AND password_expired = 'Y'
   AND authentication_string = ''
 GROUP BY (user);

And this is the output:

+------------------+--------+-----------------+
| Role Name        | Active | Assigned #times |
+------------------+--------+-----------------+
| dbt3_reader      | Yes    |               2 |
| dbt3_update      | Yes    |               1 |
| dbt3_writer      | Yes    |               1 |
| dbt3_full_reader | No     |               0 |
+------------------+--------+-----------------+

Listing the active Roles with the users

The following query lists all active Roles with the list of users they are assigned to:

SELECT from_user Roles, GROUP_CONCAT(to_user SEPARATOR ', ') Users FROM mysql.role_edges GROUP BY from_user;

That will return something similar to this:

+-------------+------------------------+
| Roles       | Users                  |
+-------------+------------------------+
| dbt3_reader | dbt3_user1, dbt3_user2 |
| dbt3_update | dbt3_user1             |
| dbt3_writer | dbt3_user3             |
+-------------+------------------------+

Listing all users assigned to one or multiple Roles

And finally, this query lists all users having at least one Role assigned to them. Each user is shown with the list of Roles assigned:

SELECT to_user Users, GROUP_CONCAT(from_user SEPARATOR ', ') Roles FROM mysql.role_edges GROUP BY to_user;

The above query will return something like:

+------------+--------------------------+
| User       | Roles                    |
+------------+--------------------------+
| dbt3_user1 | dbt3_reader, dbt3_update |
| dbt3_user2 | dbt3_reader              |
| dbt3_user3 | dbt3_writer              |
+------------+--------------------------+

I hope you will find these queries useful, and that you enjoy MySQL Roles in MySQL 8.0.

MySQL InnoDB Cluster – Recovering and provisioning with MySQL Enterprise Backup

Like I stated in my previous article - MySQL InnoDB Cluster - Recovering and provisioning with mysqldump : "As the administrator of a cluster, among other tasks, you should be able to restore failed nodes and to add (or remove) new nodes". Well, I still agree with myself :) MySQL customers using a Commercial Edition have access to MySQL Enterprise Backup (MEB), which provides enterprise-grade physical backup and recovery for MySQL. MEB delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac & Solaris.

SQL CREATE INDEX Statement Tutorial With Example

SQL CREATE INDEX Statement Tutorial With Example is today’s topic. The SQL CREATE INDEX statement is used to create indexes in tables. Indexes are used to retrieve data from the database very fast. Users cannot see the indexes; they are just used to speed up searches/queries. Updating a table with indexes takes more time than updating a table without them (because the indexes also need an update). So, only create indexes on columns that will be frequently searched against.

#What is an Index in SQL

An index is a performance-tuning method that allows faster retrieval of records. An SQL index creates an entry for each value that appears in the indexed columns. Each index name must be unique in the database.

SQL CREATE INDEX Statement

SQL indexes are used to improve the efficiency of searches for data, to present data in a specific order when joining tables (see the “JOIN” guides), and more.

An index is a “system” object, meaning that the database manager uses it.

Part of this usage is for the DBMS to update the index when the data used by the index changes in the related table. Keep this in mind, because as the number of indexes in the database increases, overall system performance can be impacted.

If you find that your SQL queries are running slowly on a given table or tables, creating an index is the first thing to consider to correct the issue.

See the following syntax of CREATE INDEX.

CREATE INDEX index_name ON table_name (column1, column2, ...);

See the following example.

CREATE INDEX IDX_ProductName ON Products (ProductName);

In the above query, we have created an index called IDX_ProductName on the Products table, and the column name is ProductName.

If you want to create the index on the combination of columns, you can list all the column names within the parentheses, separated by commas. See the following query.

CREATE INDEX IDXname ON Products (ProductName, id);

In the above query, we have changed the index name, and we have added two columns.

  1. ProductName
  2. id

A simple index is an index on a single column, while a composite index is an index on two or more columns. In the examples above, IDX_ProductName is a simple index because there is only one column, while IDXname is a composite index because there are two columns.

There is no strict or universal rule on how to name an index. The generally accepted method is to place a prefix, such as “IDX_”, before the index name to avoid confusion with other database objects. It is also a great idea to indicate which table and column(s) the index is used on.

One thing to note here is that the exact syntax for CREATE INDEX may differ between DBMSs. You should consult your database manual for the syntax used by your database.
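For example (MySQL syntax; a sketch, not an exhaustive comparison), MySQL also accepts an ALTER TABLE form and allows indexing only a prefix of a string column; the index name here is illustrative:

-- MySQL: equivalent ALTER TABLE form, indexing only the first 20 characters
ALTER TABLE Products ADD INDEX IDX_ProductName2 (ProductName(20));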

#Unique Index in SQL

Similar to a primary key, a unique key allows you to choose one column or combination of columns that must be unique for each record. Although you can only have one primary key on a table, you can create as many unique indexes on a table as you need.

If we want to create a unique index on a table, we need to specify the UNIQUE keyword in the CREATE INDEX statement. See the following example.

CREATE UNIQUE INDEX IDX_ProductName ON Apps (ProductName);

In the above query, we have created an index called IDX_ProductName on the Apps table. The column we are indexing here is ProductName.

The above example creates a unique index on the ProductName field so that the field must always contain a unique value, with no duplicates. This is a great way to enforce integrity within your database if you require unique values in columns that are not part of your primary key.

Finally, SQL CREATE INDEX Statement Tutorial With Example is over.

The post SQL CREATE INDEX Statement Tutorial With Example appeared first on AppDividend.

SQL Truncate Table Statement Tutorial With Example

SQL Truncate Table Statement Tutorial With Example is today’s topic. The DROP TABLE command deletes a table in the database. Be careful before removing a table: deleting a table results in loss of all information stored in it. The TRUNCATE TABLE command deletes the data inside a table, but not the table itself.

SQL Truncate Table Statement

The TRUNCATE statement is a Data Definition Language (DDL) operation that marks the extents of a table for deallocation (empty for reuse).

The result of this operation is that all data is quickly deleted from the table, typically bypassing a number of integrity-enforcing mechanisms. TRUNCATE was officially introduced in the SQL:2008 standard.

SQL Truncate performs the same function as a DELETE statement without a WHERE clause.

Okay, now let’s see the syntax of SQL Truncate Table.

TRUNCATE TABLE table_name

Okay, now let’s see the following example.

See the following table.

 

I have already created and inserted the data into the Apps table.

Now, we will truncate the Apps table using the following query.

TRUNCATE TABLE Apps;

So, it has removed all the data.

Now, again type the following query and try to fetch all the records.

Select * from Apps

You will see that there is no data left inside the table.

#DROP TABLE and TRUNCATE TABLE

The significant difference between DROP TABLE and TRUNCATE TABLE is that DROP TABLE deletes the table itself, whereas TRUNCATE TABLE does not eliminate the table: it removes the data from the table.

You might choose to truncate the table instead of dropping the table and recreating it.

Truncating a table can be faster and does not affect any of the table’s indexes, triggers, and dependencies. It is also a quick way to clear out the records from a table if you don’t need to worry about rolling back.

So, use the DROP TABLE command to delete the complete table; it removes the entire table structure from the database, and you will need to re-create the table once again if you wish to store data in it now or in the future.

Table or Database deletion using a DROP statement cannot be rolled back, so it must be used wisely.

#DELETE Vs. TRUNCATE TABLE

TRUNCATE TABLE is much faster, and it uses fewer resources than the DELETE statement.

If the table contains an identity column, the counter for that column is reset to its starting value. For example, suppose we have ID INT IDENTITY(1, 1) (SQL Server syntax) in a table containing 100 rows/records, and we run a TRUNCATE TABLE statement on it. The truncate statement deletes all the rows and resets the IDENTITY counter to 1.

If you use a DELETE statement instead, the counter is not reset: if the highest id removed was 100, the next inserted row still gets id 101. DELETE does not restructure the data; if a particular id’s row is gone, it is gone, and that id will not be assigned to another row.
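Here is a minimal sketch of that difference in MySQL syntax (AUTO_INCREMENT being MySQL's counterpart of SQL Server's IDENTITY; the table is illustrative):

CREATE TABLE demo (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50));
INSERT INTO demo (name) VALUES ('a'), ('b'), ('c');

DELETE FROM demo;                      -- counter keeps running
INSERT INTO demo (name) VALUES ('d');  -- this row gets id 4

TRUNCATE TABLE demo;                   -- counter is reset
INSERT INTO demo (name) VALUES ('e');  -- this row gets id 1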

The SQL TRUNCATE TABLE query deletes all the rows from a specified table, but the table structure, constraints, columns, and indexes remain the same.

Finally, SQL Truncate Table Statement Tutorial With Example is over.

The post SQL Truncate Table Statement Tutorial With Example appeared first on AppDividend.

How To Access REST API Using JavaScript fetch API

This tutorial helps you access a REST API using the fetch API. I am using vanilla JavaScript to fetch data. You can use the GET/POST/PUT and DELETE methods to consume a REST API using JavaScript.

You may have used the jQuery ajax method to get data from a REST API, but JavaScript introduced the Fetch API for the same purpose. You can get more information about AJAX from JQuery Ajax Add, Edit And Delete Example with Demo.

The fetch API lets you make XMLHttpRequest-style (XHR) requests and handle the results using Promises. Callbacks are the legacy approach to handling REST API requests; the Fetch API uses Promises to avoid callbacks, and ES7 introduced async-await to simplify working with Promises. The promise helps to send the request and receive a response.

You can also use a fetch API with Node.js if you want to create a REST API using Node.js. You can learn about Node.js APIs from CRUD operations using Nodejs, MySQL and Restify.

What’s JavaScript Fetch API

The Fetch API provides a fetch() method defined on the window object. The Fetch API uses the GET method by default to consume a REST API. The fetch method has only one mandatory parameter: the URL. Here is a simple GET call using the fetch() method.

fetch(url)
  .then(function() {
    // success response data
  })
  .catch(function() {
    // server returns any errors
  });

A simple example –

fetch('http://dummy.restapiexample.com/api/v1/employees')
  .then(response => response.json())
  .then(data => {
    console.log(data) // Prints result from `response.json()` in getRequest
  })
  .catch(error => console.error(error))

The response is not the JSON itself but an object that exposes a series of methods:

  • clone() – This method creates a clone of the response.
  • json() – This method resolves the promise with JSON.
  • redirect() – This method creates a new response, but with a different URL.
  • text() – This method resolves the promise with a string.
  • arrayBuffer() – This method returns a promise that resolves with an ArrayBuffer.
  • blob() – This method resolves the promise with a Blob.
  • formData() – This method returns a promise that resolves with a FormData object.

The post How To Access REST API Using JavaScript fetch API appeared first on Rest Api Example.

SQL Delete Query Tutorial | Delete Statement In SQL Example

SQL Delete Query Tutorial | Delete Statement In SQL Example is today’s topic. The DELETE statement is used to remove existing records from a table. We have already seen create table, insert row, and update row, and now we will see how to remove or delete a row from an SQL table. You can use the WHERE clause with the DELETE query to delete selected rows; otherwise, all the records will be removed.

SQL Delete Query

A Delete query is a type of DML statement with which we manipulate the data in tables. If you are not careful while deleting records from a table, you will end up losing a lot of data.

#Syntax

See the following syntax of SQL Delete Statement.

DELETE FROM table WHERE [condition];

In the above query, we need to specify the table name and a condition; otherwise, all rows in the table will be deleted. You can also combine any number of conditions using the AND or OR operators.
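For example (a sketch; ProductPrice is an assumed column used only for illustration):

DELETE FROM Products
 WHERE ProductName LIKE 'LV%'
   AND ProductPrice < 50;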

See the following table.

 

Now, let’s write a query where we will remove the AFTERGAME SNEAKER row.

DELETE FROM Products WHERE ProductName = 'AFTERGAME SNEAKER'

Okay, if the query executed successfully, now select all the rows using the following query.

Select * from Products

Now, see the output: the AFTERGAME SNEAKER row is gone.

 

If you want to remove all the records from the Products table, you do not need to use the WHERE clause; the DELETE statement would be as follows.

DELETE from Products

It will remove all the data from the Products table.

#SQL Truncate Statement

If you want to remove all the rows from a big table, you should use the TRUNCATE TABLE query, which is more efficient than the DELETE statement.

#SQL DELETE related rows in multiple tables

It becomes more complicated when you want to delete a row in one table that is associated with rows in another table. We can define relationships between two tables using a Foreign Key.

For instance, let’s say we have two tables.

  1. Categories
  2. Products

Each product belongs to one category, and a category has multiple products.

That means that if we delete a particular category, we have to remove all the products related to that category; otherwise, the integrity among the tables is violated.

So, one table has the primary key and the other table has the foreign key, and the foreign key defines the relationship between those tables.

See the following query.

DELETE from categories WHERE id = 3;
DELETE from products WHERE category_id = 3;

Most database management systems (DBMS) allow us to create a foreign key constraint so that if you delete a row in one table, the corresponding rows in the related table are also removed automatically.

It ensures the integrity of the data. In that case, you just have to run the first DELETE statement to delete the rows in both tables.

If the database management system does not support foreign key constraints, you have to run both DELETE statements in a single transaction to make sure that the statements run in an all-or-nothing mode.
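As a minimal sketch of such a cascading constraint (MySQL/InnoDB syntax; the table definitions are illustrative):

CREATE TABLE categories (
  id   INT PRIMARY KEY,
  name VARCHAR(50)
);

CREATE TABLE products (
  id          INT PRIMARY KEY,
  name        VARCHAR(50),
  category_id INT,
  FOREIGN KEY (category_id) REFERENCES categories (id) ON DELETE CASCADE
);

-- Deleting the category now also removes its products automatically
DELETE FROM categories WHERE id = 3;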

So, the DELETE statement permanently removes records from a table.

Finally, SQL Delete Query Tutorial | Delete Statement In SQL Example is over.

The post SQL Delete Query Tutorial | Delete Statement In SQL Example appeared first on AppDividend.

SQL Merge Statement Tutorial With Example | Merge In SQL

SQL Merge Statement Tutorial With Example is today’s topic. The SQL MERGE STATEMENT is a combination of the INSERT, UPDATE, and DELETE statements. A Merge statement can perform all these operations on our main target table when the source table is provided. MERGE is very useful when it comes to loading data warehouse tables, which can be very large and require specific actions to be taken when rows are or are not present.

SQL Merge Statement

The syntax is the following.

MERGE <target_table> [AS TARGET]
USING <table_source> [AS SOURCE]
ON <search_condition>
[WHEN MATCHED
   THEN <merge_matched> ]
[WHEN NOT MATCHED [BY TARGET]
   THEN <merge_not_matched> ]
[WHEN NOT MATCHED BY SOURCE
   THEN <merge_matched> ];

#How to use SQL MERGE STATEMENT
  1. Identify the target table which is going to be used in that logic.
  2. Next step is to identify the source table which we can use in the logic.
  3. Next step is to determine the appropriate search conditions in the ON clause to match the rows.
  4. Implement logic when records are matched or not matched between the target and source.
  5. For each of these comparison conditions, write the logic: when matched, generally an UPDATE is used, and when not matched, an INSERT or DELETE is used.

Let’s clarify this with an example:

Consider the table Products (this will be considered as the TARGET table):

ID   NAME    PRICE
101  Tea      5.00
201  Chips   10.00
301  Coffee  15.00

Updated_Products (this will be considered as the SOURCE table):

ID   NAME      PRICE
101  Tea        5.00
201  Biscuits  20.00
301  Coffee    25.00

 

#QUERY

MERGE PRODUCTS AS TARGET
USING UPDATED_PRODUCTS AS SOURCE
ON (TARGET.ID = SOURCE.ID)
WHEN MATCHED AND (TARGET.NAME <> SOURCE.NAME OR TARGET.PRICE <> SOURCE.PRICE) THEN
  UPDATE SET TARGET.NAME = SOURCE.NAME, TARGET.PRICE = SOURCE.PRICE
WHEN NOT MATCHED BY TARGET THEN
  INSERT (ID, NAME, PRICE) VALUES (SOURCE.ID, SOURCE.NAME, SOURCE.PRICE)
WHEN NOT MATCHED BY SOURCE THEN
  DELETE;

#Output

So, after running the above query Products table will be replaced by the Updated_products table.

You can see the table below.

ID   NAME      PRICE
101  Tea        5.00
201  Biscuits  20.00
301  Coffee    25.00

 

So, in this way, we can perform all three operations together using MERGE clause.

NOTE:

We can use any names other than SOURCE and TARGET; we have used these names to give you a better explanation.

#Some basic Key Points
  1. The MERGE SQL statement requires a semicolon (;) as the statement terminator; otherwise, error 10713 is raised.
  2. At least one of the three MATCHED clauses must be specified when using the MERGE statement.
  3. The user running the MERGE statement needs SELECT permission on the SOURCE table and INSERT, UPDATE, and DELETE permissions on the TARGET table.
  4. Inserting, deleting, or updating using a MERGE statement in SQL Server fires any corresponding AFTER triggers defined on the target table, but it does not guarantee which action fires its triggers first or last.

Finally, SQL Merge Tutorial With Example is over.

The post SQL Merge Statement Tutorial With Example | Merge In SQL appeared first on AppDividend.

Shinguz: Enable General Query Log per Connection in MariaDB

The General Query Log in MariaDB is a Log (file or table) that logs every statement sent from the Application to the MariaDB Database Server, from the Connect Command to the Quit Command. You can find a simple example here:

bin/mysqld, Version: 10.4.6-MariaDB-log (MariaDB Server). started with:
Tcp port: 3306  Unix socket: /var/run/mysqld/mysql.sock
Time             Id   Command  Argument
190709 21:27:30  3872 Connect  root@localhost as anonymous on test
                 3872 Query    INSERT INTO test VALUES (NULL, CONCAT('Test data insert from laptop5 on ', @@hostname), CURRENT_TIMESTAMP())
                 3872 Quit

The General Query Log can be enabled dynamically with the following command:

SQL> SET GLOBAL general_log = 1;

and is written either to a file (defined with general_log_file) or a table (mysql.general_log, defined with log_output=TABLE).

But unfortunately the General Query Log logs all Commands from all Connections. So it can grow quickly and you lose the overview of what happens...

With the MariaDB Server System Variable sql_log_off you can configure a Connection NOT to be logged to the General Query Log. This is the opposite of what we want to have. But with a clever combination of these variables we can manage to log only specific Connections to the General Query Log.

Preparation for Connection Logging to the General Query Log

First of all we have to enable the General Query Log but disable it for all Connections so nothing is written to the General Query Log:

SQL> SET GLOBAL sql_log_off = on;    -- 1 = no logging
SQL> SET SESSION sql_log_off = on;   -- also for my current session!
SQL> SET GLOBAL general_log = 1;     -- 1 = enable general log file

Then we need a place to specify and control which user from which host should be logged, so we can control General Query Log logging dynamically. For this purpose we create a table called general_log_settings in the sys Schema of the MariaDB sys Schema:

SQL> CREATE SCHEMA IF NOT EXISTS `sys`;
SQL> use `sys`;
SQL> DROP TABLE IF EXISTS `general_log_settings`;
SQL> CREATE TABLE `general_log_settings` (
       `HOST` char(60) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '%',
       `USER` char(32) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '%',
       `ENABLED` enum('YES','NO') NOT NULL DEFAULT 'YES'
     ) ENGINE=Aria DEFAULT CHARSET=utf8;
SQL> INSERT INTO `general_log_settings` (`HOST`, `USER`, `ENABLED`) VALUES ('%', '%', 'NO');

Now we need a mechanism to enable General Query Log logging for every user. For this we use a Stored Procedure:

SQL> use `sys`;
SQL> DROP PROCEDURE IF EXISTS `enable_general_log`;
SQL> DELIMITER //
SQL> CREATE OR REPLACE DEFINER=`root`@`localhost` PROCEDURE `enable_general_log`()
     DETERMINISTIC READS SQL DATA SQL SECURITY DEFINER
     BEGIN
       DECLARE user VARCHAR(32);
       DECLARE host VARCHAR(60);
       DECLARE cnt SMALLINT;

       SELECT SUBSTRING_INDEX(SESSION_USER(), '@', 1) INTO user;
       SELECT SUBSTRING_INDEX(SESSION_USER(), '@', -1) INTO host;

       SELECT COUNT(*) INTO cnt
         FROM `general_log_settings`
        WHERE `ENABLED` = 'YES'
          AND (`HOST` = host OR `HOST` = '%')
          AND (`USER` = user OR `USER` = '%');

       IF cnt > 0 THEN
         SET SESSION sql_log_off = off;
         DO 'Connect';
       END IF;
     END;
     //
SQL> DELIMITER ;

Then we have to grant ALL (!) users without the SUPER privilege the privilege to execute this procedure. These users can be found with the following query:

SQL> SELECT user, host, super_priv FROM mysql.user WHERE super_priv != 'Y';
+------+-----------+------------+
| User | Host      | Super_priv |
+------+-----------+------------+
| oli  | localhost | N          |
| app  | %         | N          |
+------+-----------+------------+

To grant the privilege to execute the procedure, you must run the queries as follows:

SQL> GRANT EXECUTE ON PROCEDURE `sys`.`enable_general_log` TO 'oli'@'localhost';
SQL> GRANT EXECUTE ON PROCEDURE `sys`.`enable_general_log` TO 'app'@'%';

Caution: If you do NOT grant the execute privilege to all users, their new connections will fail as soon as you have enabled the logon trigger!

Then you have to add the users you want to log to the General Query Log to the general_log_settings table:

SQL> INSERT INTO `sys`.`general_log_settings` VALUES ('%', 'app', 'YES');
SQL> SELECT * FROM `sys`.`general_log_settings`;
+------+------+---------+
| HOST | USER | ENABLED |
+------+------+---------+
| %    | %    | NO      |
| %    | app  | YES     |
+------+------+---------+

You can test the logging to the General Query Log in your Connection with the following command now:

SQL> CALL `sys`.`enable_general_log`();

If everything is done correctly you should see some entries in your General Query Log as follows:

190709 23:51:21  6695 Query  DO 'Connect'
                 6695 Query  select count(*) from test.test
                 6695 Quit

And you should notice that the Connect Command is missing (see above). This is because enabling the General Query Log happens after the Connect. To give an equivalent entry tag in the General Query Log we have added the DO 'Connect' tag in the Stored Procedure.

Up to this point, all actions should not have had any impact (neither operationally nor performance-wise) on your MariaDB database system yet.

Activating General Query Log per Connection

Now we want to activate General Query Log logging for every new Connection to your MariaDB database server. This step can have an impact on your MariaDB database system, and you should know how, and be prepared, to revert this step:

SQL> SET GLOBAL init_connect = 'CALL `sys`.`enable_general_log`();';

The init_connect Server System Variable specifies a command or a list of commands that will be executed for each new client connection. We also call this a Logon Trigger. These statements are only executed for users without the SUPER privilege. So you cannot track admin accounts with this method.
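To revert this step later (a simple sketch; it just clears the variable set above):

SQL> SET GLOBAL init_connect = '';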

Further readings

Please let us know if you think we should add this tool to the MariaDB sys Schema or not.


Percona XtraBackup 2.4.15 Is Now Available

Percona is glad to announce the release of Percona XtraBackup 2.4.15 on July 10, 2019. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

All Percona software is open-source and free.

Bugs Fixed
  • When the encrypted tablespaces feature was enabled, encrypted and compressed
    tables were not usable on the joiner node (Percona XtraDB Cluster) via SST
    (State Snapshot Transfer) with the xtrabackup-v2 method. Bug fixed PXB-1867.
  • xbcloud did not update date related fields of the HTTP
    header when retrying a request. Bug fixed PXB-1874.
  • xbcloud did not retry to send the request after receiving the HTTP 408
    error (request timeout). Bug fixed PXB-1875.
  • If the user tried to merge an already prepared incremental backup, a
    misleading error was produced without informing that incremental backups may
    not be used twice. Bug fixed PXB-1862.
  • xbcloud could crash with the Swift storage when project options were
    not included. Bug fixed PXB-1844.
  • xtrabackup did not accept decimal fractions as values of the
    innodb_max_dirty_pages_pct option. Bug fixed PXB-1807.

Other bugs fixed:  PXB-1850, PXB-1879, PXB-1887, PXB-1888, PXB-1890.

Release notes with all the improvements for Percona XtraBackup 2.4.15 are available in our online documentation. Please report any bugs to the issue tracker.

How to install Apache, PHP 7.3 and MySQL on CentOS 7.6

This tutorial shows how to install an Apache web server on a CentOS 7 server with PHP (mod_php with PHP 5.4, 7.0, 7.1, 7.2 or 7.3) and MySQL support. This setup is often referred to as LAMP which stands for Linux - Apache - MySQL - PHP.

The MySQL High Availability Landscape and where Galera Cluster fits in (USA and EMEA Webinar)

The MySQL Server High Availability landscape provides a slew of tools to help you ensure that your databases keep humming. Galera Cluster is one such tool; however, it is also worth looking at semi-synchronous replication with failover tools, and at the usual leader-follower asynchronous replication. Today more workloads are moving to the cloud, and what failover options do you get with Amazon RDS for MySQL or even Aurora? What about the newfangled Group Replication and InnoDB Cluster? And let us not forget that Galera Cluster has spawned branches too!

Join Codership, the developers and experts of Galera Cluster,  as we discuss where Galera Cluster fits in and how to get best out of it.

EMEA webinar 9th April 10-11 AM CEST (Central European Time)
JOIN THE EMEA WEBINAR

USA webinar 9th April 9-10 AM PDT (Pacific Daylight Time)
JOIN THE USA WEBINAR

Presenter: Colin Charles, Galera Cluster Chief Evangelist, Codership

Running Galera Cluster effectively on Amazon Web Services (AWS) and comparing it to RDS and Aurora (EMEA and USA webinar)

Do you want to run Galera Cluster in the cloud? Why not learn to setup a 3-node Galera Cluster using Amazon Web Services (AWS) Elastic Compute Cloud (EC2), and run it yourself (3-node because this is the minimum preferred size; you can have a 5, 7, or 9 node cluster too — not in the demo though!). In this webinar, we will cover the steps to do this, with a demonstration of how easy it is for you to do. We will also cover how you may want to load balance traffic to your Galera Cluster using a proxy like ProxySQL.

In addition, we will cover why you may want to run a 3-node (or more) Galera Cluster (multi-master synchronous clusters) instead of (or in addition to) using Amazon Relational Database Service (RDS) MySQL, which has the ability to have asynchronous replication (Read Replicas), and High Availability provided by DRBD (Multi-AZ). We will also cover where Galera Cluster differs from Amazon Aurora RDS for MySQL.

Join us and learn about storage options, backup & recovery, as well as monitoring & metrics options for the “roll your own Galera Cluster” in EC2.

EMEA webinar 30th July 10-11 AM CEST (Central European Time)
JOIN THE EMEA WEBINAR

USA webinar 30th July 9-10 AM PDT (Pacific Daylight Time)
JOIN THE USA WEBINAR

Presenter: Colin Charles, Galera Cluster Chief Evangelist, Codership

How To Add Comments in SQL Query Tutorial With Example

How To Add Comments in SQL Query Tutorial With Example is today’s topic. Comments are used to explain sections of SQL statements or to prevent the execution of SQL statements. Note that comments are not supported in Microsoft Access databases.

How To Add Comments in SQL

In SQL, you can comment the code just like in any other language. Comments can appear on a single line or span multiple lines. Let’s explore how to comment your SQL statements.

Comments can be written in the following formats:

  1. Single-line comments
  2. Multi-line comments
  3. Inline comments
#Syntax

#Syntax Using -- Symbol

The syntax for creating a comment in SQL using the -- symbol is the following.

-- comment goes here

Any text between -- and the end of the line will be ignored (will not be executed).

A comment started with the -- symbol must be at the end of the line in your SQL statement, with a line break after it. This method of commenting can only span a single line within your SQL and must be at the end of the line.

See the following example. If you do not know how to create a table, then check out the article on how to create a table in MySQL.

-- fetch the records starting with LV
SELECT * FROM Products WHERE ProductName LIKE 'LV%'

See the following output.

 

#Multi-line Comments OR Syntax Using /* and */ symbols

See the following query.

/* Select all the columns of all the records in the Products table: */
SELECT * FROM Products

See the output.

 

If we want to ignore just a part of the statement, we can also use a /* */ comment.

See the following example which uses a comment to ignore part of the line.

SELECT ProductName, /*ProductCategory,*/ ProductPrice FROM Products

See the following output.

Finally, How To Add Comments in SQL Query Tutorial With Example is over.

The post How To Add Comments in SQL Query Tutorial With Example appeared first on AppDividend.

SQL LIKE Operator Tutorial With Example

SQL LIKE Operator Tutorial With Example is today’s topic. The LIKE operator is used in the WHERE clause to search for a specified pattern in a column. There are two wildcards often used in conjunction with the LIKE operator:

  1. % – Percent sign represents zero, one, or multiple characters.
  2. _ – Underscore represents a single character.
SQL LIKE Operator

The SQL LIKE is a logical operator that determines whether a character string matches a specified pattern. The pattern may include regular characters and wildcard characters. The SQL LIKE operator is used in the WHERE clause of SELECT, UPDATE, and DELETE statements to filter rows based on pattern matching.

Syntax

SELECT * FROM table_name WHERE column LIKE 'AAAA%'
or
SELECT * FROM table_name WHERE column LIKE '%AAAA%'
or
SELECT * FROM table_name WHERE column LIKE 'AAAA_'
or
SELECT * FROM table_name WHERE column LIKE '_AAAA'
or
SELECT * FROM table_name WHERE column LIKE '_AAAA_'

In the above syntax, the pattern appears after the LIKE operator.

#The pattern in SQL LIKE

The pattern is a sequence of characters to search for in the column or expression. It can contain the following valid wildcard characters:

  1. A percent wildcard (%): Any string of zero or more characters.
  2. An underscore (_) wildcard: Any single character.
  3. A [charlist] wildcard: Any single character within the specified set.
    • A [character-character] range: Any single character within the specified range.
    • A [^]: Any single character not within the list or the range.

The wildcard characters make the LIKE operator more flexible than the equal (=) and not equal (!=) string comparison operators.

#Escape character

The escape character instructs the LIKE operator to treat a wildcard character as a regular character. An escape character has no default value and must evaluate to only one character.
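A short sketch using the standard ESCAPE clause (the literal searched for is illustrative): this finds product names containing a literal percent sign, e.g. “50%”.

SELECT * FROM Products
 WHERE ProductName LIKE '%50!%%' ESCAPE '!';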

#Examples

Let’s say we have the following table.

Now, we will use the LIKE Operator to query the database. See the following query.

SELECT * FROM Products WHERE ProductName LIKE 'LV%';

See the following output.

#More Statements

  1. WHERE ProductName LIKE 'LV%' – It finds any values that start with LV.
  2. WHERE ProductName LIKE '%TIA%' – It finds any values that have TIA in any position.
  3. WHERE ProductName LIKE '___INI%' – It finds any values that have INI in the fourth, fifth, and sixth positions.
  4. WHERE ProductName LIKE 'L_%_%' – It finds any values that start with L and are at least 3 characters in length.
  5. WHERE ProductName LIKE '%s' – It finds any values that end with s.
  6. WHERE ProductName LIKE '_V%s' – It finds any values that have a V in the second position and end with an s.

Finally, SQL LIKE Operator Tutorial With Example is over.

The post SQL LIKE Operator Tutorial With Example appeared first on AppDividend.

MySQL DATE_SUB Function with Simple Examples

This tutorial explains the MySQL DATE_SUB function, which subtracts a slice of time (in days/hours, e.g., 1 day or 10 days) from the given date. We’ll describe the complete date arithmetic of this method with the help of simple examples.

  1. DATE_SUB() Syntax
  2. DATE_SUB() with -ve Interval
  3. DATE_SUB() for Invalid Date
  4. DATE_SUB() for Auto Adjustment

Let’s now go through each of the sections one by one.

MySQL DATE_SUB() Function

As stated initially, DATE_SUB() is a built-in MySQL function which subtracts the specified no. of days from a given date. So, let’s now see the details and check out
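A quick sketch of the behavior described (the dates are illustrative):

SELECT DATE_SUB('2019-07-10', INTERVAL 10 DAY);  -- returns '2019-06-30'
SELECT DATE_SUB('2019-07-10', INTERVAL -1 DAY);  -- a negative interval adds instead: '2019-07-11'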

The post MySQL DATE_SUB Function with Simple Examples appeared first on Learn Programming and Software Testing.

MySQL FIND_IN_SET Function with Simple Examples

This tutorial explains the MySQL FIND_IN_SET function, which finds the position of a string within a sequence of strings separated by commas (i.e., comma-separated, such as str1, str2, find_this_string, …), with examples.

  1. FIND_IN_SET() Syntax
  2. FIND_IN_SET Flow Diagram
  3. FIND_IN_SET() Examples
  4. FIND_IN_SET() Function Vs. IN operator

Let’s now go through each of the sections one by one.

MySQL FIND_IN_SET() Function

As stated initially, FIND_IN_SET() is a built-in MySQL function which returns the index of a string in a list of comma-delimited strings. So, let’s now see the details and check out how we can use it. Syntax Below is the
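A brief illustration of the behavior described (the values are illustrative):

SELECT FIND_IN_SET('b', 'a,b,c,d');  -- returns 2 (1-based position)
SELECT FIND_IN_SET('x', 'a,b,c,d');  -- returns 0 (not found)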

The post MySQL FIND_IN_SET Function with Simple Examples appeared first on Learn Programming and Software Testing.

How to Declare Variables in MySQL

This tutorial describes how to declare a variable (like user-defined, local, and system vars) in MySQL. We’ll tell you the complete syntax and provide simple examples for clarity. MySQL puts up the below three ways:

  1. Declare a user-defined variable
  2. Declare a local variable
  3. Declare a system variable

Let’s start by looking at all of them one by one.

Declare Variable in MySQL

There are primarily three types of variables in MySQL, and each has its specific way to provide a declaration.

1. Declare a User-defined Variable

In MySQL, we can use the SET statement to declare a
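A compact sketch of the three kinds (the names and values are illustrative):

-- 1. User-defined variable
SET @counter = 10;
SELECT @counter;

-- 2. Local variable (only valid inside a stored program)
-- DECLARE total INT DEFAULT 0;

-- 3. System variable (session scope)
SET SESSION sort_buffer_size = 262144;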

The post How to Declare Variables in MySQL appeared first on Learn Programming and Software Testing.

Where are the logs for a Tungsten Cluster?

The Question

Recently, a customer asked us:

Where are the logs for a Tungsten Cluster and which are the proper log files to monitor if I do a master role switch to another node?

The Answer: Part I

Both basic and advanced logs are available!

The basic logs are symbolic links into the log subdirectory for each layer of the cluster.

For example, this is the listing of the default log directory, /opt/continuent/service_logs:

connector-user.log -> /opt/continuent/tungsten/tungsten-connector/log/connector-user.log
manager-user.log -> /opt/continuent/tungsten/tungsten-manager/log/manager-user.log
mysqldump.log -> /opt/continuent/tungsten/tungsten-replicator/log/mysqldump.log
replicator-user.log -> /opt/continuent/tungsten/tungsten-replicator/log/replicator-user.log
xtrabackup.log -> /opt/continuent/tungsten/tungsten-replicator/log/xtrabackup.log

As you can see, each log file is a symlink to the user-level log for each of the layers, along with logs for backups.

You may also cd into each of the “real” log directories and look at the following, more detailed logs:

/opt/continuent/tungsten/tungsten-manager/log/tmsvc.log
/opt/continuent/tungsten/tungsten-connector/log/connector.log
/opt/continuent/tungsten/tungsten-replicator/log/trepsvc.log

The Answer: Part II

Focus on the Manager and Connector logs.

To monitor a master switch, the Manager log is best:
/opt/continuent/tungsten/tungsten-manager/log/tmsvc.log

and to a lesser degree the Connector log:
/opt/continuent/tungsten/tungsten-connector/log/connector.log

Summary

The Wrap-Up

In this blog post we discussed the Tungsten Cluster basic and advanced logs, their locations and which ones to use to monitor a master role switch to another node.

To learn about Continuent solutions in general, check out https://www.continuent.com/solutions

The Library

Please read the docs!

For more information about Tungsten clusters, please visit https://docs.continuent.com

Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us

Backing Up All MySQL Server Databases or Only the Required Ones

When having only several databases to manage, regular backup operations can be done quite easily and seamlessly, either with the help of a few simple scripts or by configuring an SQL Server agent that will perform the backup automatically. But sometimes the situation is more complicated. When, for instance, there are hundreds of databases, backing […]
