Planet MySQL

How to Manage Session using Node.js and Express

Session handling in any web application is very important and is a must-have feature; without it, we wouldn’t be able to track users and their activity.

In this article, I am going to teach you how to handle sessions in Node.js. We will use Express as a framework and various other modules, such as body-parser, to handle form data.


At the time of writing this article, the latest version of Express is 4.16.4.

What we are building

To demonstrate session handling in Node, I have developed a basic log-in and log-out system. A user can log in by providing their email, and that email will be used for further session tracking. Once the user logs out, the session will be destroyed and the user will be redirected to the home page.

Creating Node Project

Let’s create a new Node project. Create a new folder and switch to it using the terminal.

Run this command to create a new Node project.

npm init -y

This command will create a new package.json file. Let’s install the required dependency.

npm install --save express express-session body-parser

Once the dependencies are installed, we can proceed to code our app.

How to use Express Session?

Before heading to the actual code, I want to say a few words about the express-session module. To use this module, you must include express in your project. As with all packages, we first have to require it.

server.js

const express = require('express');
const session = require('express-session');
const app = express();

After this, we have to initialize the session, and we can do this using the following.

app.use(session({secret: 'ssshhhhh'}));

Here ‘secret‘ is used to sign the session ID cookie; we have to provide some secret for managing sessions in Express.
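
For reference, here is a slightly fuller initialization sketch — the resave/saveUninitialized flags and the cookie settings below are illustrative choices, not requirements of the module:

app.use(session({
    secret: 'ssshhhhh',                  // used to sign the session ID cookie
    resave: false,                       // don't rewrite unchanged sessions back to the store
    saveUninitialized: false,            // don't create a session until something is stored
    cookie: { maxAge: 60 * 60 * 1000 }   // expire the session cookie after one hour
}));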

Now, using the ‘request‘ variable, you can assign the session to any variable, just like we do in PHP using the $_SESSION variable. For example:

var sess;
app.get('/',function(req,res){
    sess=req.session;
    /*
    * Here we have assigned the 'session' to 'sess'.
    * Now we can create any number of session variables we want.
    * In PHP we do it as $_SESSION['var name'].
    * Here we do it like this.
    */
    sess.email; // equivalent to $_SESSION['email'] in PHP.
    sess.username; // equivalent to $_SESSION['username'] in PHP.
});

After creating session variables like sess.email, we can check whether they are set in other routes and track the session easily.
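
For instance, a small reusable guard can centralize that check in one place. This is just a sketch — the requireLogin name is mine, not part of the demo app:

function requireLogin(req, res, next) {
    if (req.session.email) {
        return next();      // session variable is set, let the request through
    }
    res.redirect('/');      // not logged in, send the user back to the home page
}

// usage: protect any route by adding the middleware
app.get('/admin', requireLogin, (req, res) => {
    res.end(`Hello ${req.session.email}`);
});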

Tracking the session in a global variable won’t work with multiple users. This is just for demonstration.

Project Structure

We are going to put all of the server-side code in the server.js file. The front-end code will be placed inside the views folder.

Here is our Server side code.

server.js

const express = require('express');
const session = require('express-session');
const bodyParser = require('body-parser');
const router = express.Router();
const app = express();

app.use(session({ secret: 'ssshhhhh', saveUninitialized: true, resave: true }));
app.use(bodyParser.json());      
app.use(bodyParser.urlencoded({extended: true}));
app.use(express.static(__dirname + '/views'));

var sess; // global session, NOT recommended

router.get('/',(req,res) => {
    sess = req.session;
    if(sess.email) {
        return res.redirect('/admin');
    }
    res.sendFile(__dirname + '/views/index.html');
});

router.post('/login',(req,res) => {
    sess = req.session;
    sess.email = req.body.email;
    res.end('done');
});

router.get('/admin',(req,res) => {
    sess = req.session;
    if(sess.email) {
        res.write(`<h1>Hello ${sess.email} </h1><br>`);
        res.end('<a href="/logout">Logout</a>');
    }
    else {
        res.write('<h1>Please login first.</h1>');
        res.end('<a href="/">Login</a>');
    }
});

router.get('/logout',(req,res) => {
    req.session.destroy((err) => {
        if(err) {
            return console.log(err);
        }
        res.redirect('/');
    });

});

app.use('/', router);

app.listen(process.env.PORT || 3000,() => {
    console.log(`App Started on PORT ${process.env.PORT || 3000}`);
});

In the code shown above, there are four routes. The first renders the home page, and the second is used for the login operation. We are not doing any authentication here for the sake of simplicity.

The third route serves the admin area, where the user can only go if he/she is logged in. The fourth and last route is for session destruction.

Each route checks whether the sess.email variable is set or not, and it can only be set by logging in through the front-end. Here is my HTML code, which resides in the views directory.

views/index.html

<html>
<head>
<title>Session Management in NodeJS using Node and Express</title>
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
    var email,pass;
    $("#submit").click(function(){
        email=$("#email").val();
        pass=$("#password").val();
        /*
        * Perform some validation here.
        */
        $.post("/login",{email:email,pass:pass},function(data){
            if(data==='done') {
                window.location.href="/admin";
            }
        });
    });
});
</script>
</head>
<body>
<input type="text" size="40" placeholder="Type your email" id="email"><br />
<input type="password" size="40" placeholder="Type your password" id="password"><br />
<input type="button" value="Submit" id="submit">
</body>
</html>

In the jQuery code, we are calling our ‘/login’ route and redirecting to ‘admin‘ if the log-in is successful. You can add validation to the fields as per your requirements; for demo purposes I have not added any.

The Bug Alert!

As I have mentioned earlier, using a global variable for the session won’t work for multiple users. You will receive the same session information for all of the users.

So how do we solve this? By using a session store.

We save every session in the store so that one session belongs to one user only. I have explained and built a session store using Redis in this article.

For quick reference, here is how we can extend the code shown above using Redis as a session store.

First, you need to install Redis on your computer. Click here to learn how to install Redis.

Then, install these dependencies in your project.

npm i -S redis connect-redis

Here is the codebase after upgrading it to support Redis.

server.js

/*
 * Manage Session in Node.js and ExpressJS
 * Author : Shahid Shaikh
 * Version : 0.0.2
*/
const express = require('express');
const session = require('express-session');
const bodyParser = require('body-parser');
const redis = require('redis');
const redisStore = require('connect-redis')(session);
const client  = redis.createClient();
const router = express.Router();
const app = express();

app.use(session({
    secret: 'ssshhhhh',
    // create new redis store.
    store: new redisStore({ host: 'localhost', port: 6379, client: client, ttl: 260 }),
    saveUninitialized: false,
    resave: false
}));

app.use(bodyParser.json());      
app.use(bodyParser.urlencoded({extended: true}));
app.use(express.static(__dirname + '/views'));

router.get('/',(req,res) => {
    let sess = req.session;
    if(sess.email) {
        return res.redirect('/admin');
    }
    res.sendFile(__dirname + '/views/index.html');
});

router.post('/login',(req,res) => {
    req.session.email = req.body.email;
    res.end('done');
});

router.get('/admin',(req,res) => {
    if(req.session.email) {
        res.write(`<h1>Hello ${req.session.email} </h1><br>`);
        res.end('<a href="/logout">Logout</a>');
    }
    else {
        res.write('<h1>Please login first.</h1>');
        res.end('<a href="/">Login</a>');
    }
});

router.get('/logout',(req,res) => {
    req.session.destroy((err) => {
        if(err) {
            return console.log(err);
        }
        res.redirect('/');
    });

});

app.use('/', router);

app.listen(process.env.PORT || 3000,() => {
    console.log(`App Started on PORT ${process.env.PORT || 3000}`);
});

If you notice in the code shown above, we have removed the global variable. We are using Redis to store our session instead. Try this with multiple users and you should see a unique session for each user.

I highly recommend following this article for more detailed information.

How to run example code

Download the code and extract the zip file. Open your command prompt or terminal and switch to the directory. Install the dependencies first using:

npm install

Then run the code using:

node server.js

Visit localhost:3000 to view the app.

Conclusion:

Like I mentioned, session handling is very important for any web application. Node.js allows us to create an HTTP server, and HTTP is a stateless protocol: it stores no information about previous visits. Express solves this problem very beautifully.

Further reading

Nodejs tutorials
Node.js MySQL Tutorial
Programming a Voice Controlled Drone Using Node and ARDrone

MySQL 8.0 GIS -- Inserting Data & Fun Functions

The last blog entry was very popular and there were lots of requests for some introductory information on the spatial data types. 

Well Known Text Or Binary
I am going to use the GEOMETRY data type over POINT, LINESTRING, or POLYGON, as it can store any of those three, while the other three can only contain data matching their name (so POINT can hold only point data, etc.). The values are stored in an internal geometry format, but it takes either WKT or WKB formatted data.

Those are the Well-Known Text (WKT) and Well-Known Binary (WKB) formats, respectively. I am hoping most of you are better with text than binary, so the following examples demonstrate how to insert geometry values into a table by converting WKT values to the internal geometry format.

So let us start with a simple table.

mysql> create table geom (type text, g geometry);
Query OK, 0 rows affected (0.04 sec)

We can use the ST_GeomFromText function to take some strings and convert into the internal format.

mysql> insert into geom values 
       ('point', st_geomfromtext('point(1 1)'));
Query OK, 1 row affected (0.01 sec)

mysql> insert into geom values 
       ('linestring', st_geomfromtext('linestring(0 0,1 1, 2 2)'));
Query OK, 1 row affected (0.01 sec)

There are type specific functions for POINT, LINESTRING, and POLYGON that we can also take advantage of for this work.

mysql> SET @g = 
     'POLYGON((0 0,10 0,10 10,0 10,0 0),(5 5,7 5,7 7,5 7, 5 5))';
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO geom 
       VALUES ('polygon',ST_PolygonFromText(@g));
Query OK, 1 row affected (0.00 sec)

If you do a SELECT * FROM geom; you will get the g column output in binary.  Thankfully we can use ST_AsText() to provide us with something more readable.


mysql> select type, st_astext(g) from geom;
+------------+----------------------------------------------------------+
| type       | st_astext(g)                                             |
+------------+----------------------------------------------------------+
| point      | POINT(1 1)                                               |
| linestring | LINESTRING(0 0,1 1,2 2)                                  |
| polygon    | POLYGON((0 0,10 0,10 10,0 10,0 0),(5 5,7 5,7 7,5 7,5 5)) |
+------------+----------------------------------------------------------+
3 rows in set (0.00 sec)

mysql>


Put the 'fun' in Function

There are functions that can be used with each of these types to provide information. For instance, there are the X and Y coordinates of a point.

mysql> select type, st_x(g), st_y(g) from geom where type='point';
+-------+---------+---------+
| type  | st_x(g) | st_y(g) |
+-------+---------+---------+
| point |       1 |       1 |
+-------+---------+---------+
1 row in set (0.00 sec)
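
As a small addition of my own (not part of the original examples), there are also measurement functions; with the classic 3-4-5 right triangle, ST_Distance returns the expected Cartesian distance:

mysql> select st_distance(st_geomfromtext('point(0 0)'),
       st_geomfromtext('point(3 4)')) as distance;
+----------+
| distance |
+----------+
|        5 |
+----------+
1 row in set (0.00 sec)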

We can even use our linestring data to get the minimal bounding rectangle for the given coordinates (basically, if we had to put an envelope around the points, the result is the polygon whose corner points enclose those x,y coordinates).

mysql> select st_astext(st_envelope(g)) 
       from geom where type='linestring';
+--------------------------------+
| st_astext(st_envelope(g))      |
+--------------------------------+
| POLYGON((0 0,2 0,2 2,0 2,0 0)) |
+--------------------------------+
1 row in set (0.00 sec)


And we can get the area of a polygon too.

mysql> select type, st_area((g)) from geom where type='polygon';
+---------+--------------+
| type    | st_area((g)) |
+---------+--------------+
| polygon |           96 |
+---------+--------------+
1 row in set (0.00 sec)

And find the mathematical center of that polygon.

mysql> select type, 
       st_astext(st_centroid(g)) 
       from geom where type='polygon';
+---------+--------------------------------------------+
| type    | st_astext(st_centroid(g))                  |
+---------+--------------------------------------------+
| polygon | POINT(4.958333333333333 4.958333333333333) |
+---------+--------------------------------------------+
1 row in set (0.00 sec)

Plus we can get the linestring data if we wanted to draw our polygon.

mysql> select type, 
       st_astext(st_exteriorring(g)) 
       from geom where type='polygon';
+---------+-------------------------------------+
| type    | st_astext(st_exteriorring(g))       |
+---------+-------------------------------------+
| polygon | LINESTRING(0 0,10 0,10 10,0 10,0 0) |
+---------+-------------------------------------+
1 row in set (0.00 sec)
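
One more of the fun functions — this example is mine, reusing the polygon above: ST_Contains tests whether one geometry lies inside another. POINT(3 3) is inside the outer ring, while POINT(6 6) falls in the interior ring (the hole), so it is not contained:

mysql> select st_contains(g, st_geomfromtext('point(3 3)')) as in_ring,
       st_contains(g, st_geomfromtext('point(6 6)')) as in_hole
       from geom where type='polygon';
+---------+---------+
| in_ring | in_hole |
+---------+---------+
|       1 |       0 |
+---------+---------+
1 row in set (0.00 sec)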



React Tutorial: Consume a JSON REST API with Fetch and Styling UI with Bootstrap 4

In this tutorial we'll learn how to build a React application that consumes a third-party REST API using the fetch() API. We'll also use Bootstrap 4 to style the UI. We'll consume a third-party API available from this link.

We'll also see some basics of React such as: the state object to hold the state of the app and the setState() method to mutate the state; the componentDidMount() life-cycle method for running code when the component is mounted in the DOM; how to embed JavaScript expressions in JSX using curly braces; and how to render lists of data using the map() method and JSX and conditionally render DOM elements using the logical && operator.

React is the most popular UI library for building user interfaces, built and used internally by Facebook. React has many features such as:

  • Declarative: React makes it easy to build interactive UIs by creating views for each state in your application, and lets React render just the right components when the data changes.
  • Component-Based: Build components that have their own state and compose them to build complex UIs.
  • Learn Once, Write Anywhere: React can be rendered on the server using Node and can be used to build native mobile apps using React Native.

Prerequisites

You will need the following prerequisites to successfully complete the steps in this tutorial: working experience of JavaScript, a basic understanding of REST APIs, and Node and NPM installed on your machine.

React is a client-side library, but you need Node.js to run the create-react-app utility that can be used to generate React projects and work with them locally. You can very easily install Node and NPM by getting the binaries for your system from the official website. A better way is to use NVM, or Node Version Manager, to easily install and manage multiple active Node versions. If you are ready, let's get started!

Installing create-react-app

We'll use the create-react-app utility to generate a React project with best practices and development scripts created by the official team behind React. Open a new terminal and run the following command to install the utility on your system:

$ npm install -g create-react-app

Note: Please note that you may need to add sudo before your command on Debian systems and macOS to be able to install npm packages globally. You can also just fix your npm permissions to avoid using sudo. If you installed Node and NPM using NVM, this will be handled automatically for you.

At the time of this writing, create-react-app v2.1.8 is installed in our system.

Creating a React Project

After installing create-react-app, let's use it to generate our React project. Head back to your terminal and run the following commands:

$ cd ~
$ npx create-react-app react-fetch-rest-api

We navigated to the home folder and issued the npx create-react-app command to create our project.

Note: You can obviously navigate to any folder you choose for your project. npx is a tool that allows you to run executables from the node_modules folder; you can find more details on the official website.

Wait for the installation process to finish. This may take a while!

Next, navigate to your project's root folder and run the development server using the following commands:

$ cd react-fetch-rest-api
$ npm start

Your local development server will be running from the http://localhost:3000 address, and your web browser will be opened automatically and navigated to your React application.

Since we use a live-reload dev server, you can leave the current terminal window open and start a new one for running the rest of the commands in this tutorial. After any changes, your server will be automatically restarted and your application will be live-reloaded in the browser.

Open the src/App.js file and let's remove the default boilerplate code that we are not using in our example. Simply change the content to the following:

import React, { Component } from 'react';

class App extends Component {
  render() {
    return null; // Your JSX code goes here.
  }
}

export default App;

Styling the UI with Bootstrap 4

We'll use Bootstrap 4 for styling the UI. Integrating Bootstrap 4 with React is quite easy. Open the public/index.html file and add the following code:

<head>
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
<!-- [...] -->

Now, let's display an example todo just to make sure Bootstrap was successfully added. Open src/App.js and replace the content with the following code:

import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div className="container">
        <div className="col-xs-12">
          <h1>My Todos</h1>
          <div className="card">
            <div className="card-body">
              <h5 className="card-title">Example Todo</h5>
              <h6 className="card-subtitle mb-2 text-muted">Completed</h6>
            </div>
          </div>
        </div>
      </div>
    );
  }
}

export default App;

In React we use className instead of class for adding a CSS class to DOM elements.

Fetching and Displaying the REST API

As we said earlier, we are going to use the browser fetch API to consume JSON data from the todos API endpoint. First, in the src/App.js file, add a state object to hold our todos once we fetch them:

import React, { Component } from 'react';

class App extends Component {
  state = {
    todos: []
  }
  // [...]
}

export default App;

We created a state variable called todos.

Next, add a componentDidMount() life-cycle method in our src/App.js file and add the code to fetch the JSON data inside it. This method is executed when the component is mounted in the DOM, so it's the right place to put our JSON fetching logic:

import React, { Component } from 'react';

class App extends Component {
  state = {
    todos: []
  }

  componentDidMount() {
    fetch('http://jsonplaceholder.typicode.com/todos')
      .then(res => res.json())
      .then((data) => {
        this.setState({ todos: data })
        console.log(this.state.todos)
      })
      .catch(console.log)
  }
  // [...]
}

export default App;

We simply send a GET request to the /todos endpoint. Once the returned Promise is resolved, we use the setState() method to assign the returned data to the todos state variable. If there is an error, we simply display it in the console.

If you open your browser console, you should see the fetched todos displayed as an array of objects. Let's render them in our UI. Update the render() method in the src/App.js file as follows:

render() {
  return (
    <div className="container">
      <div className="col-xs-12">
        <h1>My Todos</h1>
        {this.state.todos.map((todo) => (
          <div className="card">
            <div className="card-body">
              <h5 className="card-title">{todo.title}</h5>
              <h6 className="card-subtitle mb-2 text-muted">
                { todo.completed && <span>Completed</span> }
                { !todo.completed && <span>Pending</span> }
              </h6>
            </div>
          </div>
        ))}
      </div>
    </div>
  );
}

In React, you can build lists of elements and include them in JSX using curly braces {}. In the code, we loop through the state.todos array using the JavaScript map() method, and we return a Bootstrap 4 card element for each todo. We can also embed any expressions in JSX by wrapping them in curly braces. We used the logical && operator to conditionally include the <span>Completed</span> or <span>Pending</span> elements depending on the value of the completed boolean of the todo element. This works because in JavaScript, true && expression always evaluates to expression, and false && expression always evaluates to false. If the completed variable is true, the element right after && will appear in the output. If it is false, React will ignore and skip it. See: Conditional Rendering. This will render the todos in the state object.

Conclusion

That's the end of this tutorial. As a recap: we installed create-react-app and used it to create a React project. Next, we integrated Bootstrap 4 in our React application and used the fetch API to send a GET request to consume JSON data from a third-party REST API in the componentDidMount() life-cycle method. We also used the state object to hold our fetched JSON data and the setState() method to set the state. Finally, we have seen how to embed JS expressions in JSX and how to render lists of data and conditionally render DOM elements using the logical && operator.
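
As a final note — my addition, not part of the original tutorial — React expects a unique key prop on elements rendered from a list with map(); otherwise it logs a "missing key" warning and list updates can be inefficient. Since every todo from this API carries an id, the card can simply use it:

{this.state.todos.map((todo) => (
  <div className="card" key={todo.id}>
    {/* ... card body as above ... */}
  </div>
))}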

SQL Insert Query Tutorial | SQL INSERT INTO Statement Example

SQL Insert Query Tutorial | SQL Insert Into Statement Example is today’s topic. If you are not familiar with creating a table in SQL, then please check out my how to create a table in SQL tutorial. The INSERT INTO statement is used to add new values into the database. The INSERT INTO statement adds the new record to the table. INSERT INTO can contain the values for some or all of its columns. INSERT INTO can be combined with a SELECT to insert records.

SQL Insert Query Tutorial

The general syntax of SQL INSERT INTO Statement is following.

INSERT INTO table-name (column-names) VALUES (values)

Here, column-names (column1, column2, column3, …columnN) are the names of the columns of the table into which you want to insert data.

You may not need to specify the column names in the SQL query if you are adding values for all the columns of a table. But make sure to preserve the order of the columns. Let’s take an example.

Step 1: Create a SQL Table.

I am using macOS and an SQLite client. You can use phpMyAdmin or Oracle or other database software.

Now, type the following query to create a table.

CREATE TABLE Apps (
    AppID int,
    AppName varchar(255),
    CreatorName varchar(255),
    AppCategory varchar(255),
    AppPrice int
);

So, we have created a table called Apps which has five columns.

Step 2: Insert the values inside the table in SQL

Okay, next step is to use the SQL INSERT INTO query to add the rows.

INSERT INTO Apps (AppID, AppName, CreatorName, AppCategory, AppPrice)
VALUES (1, 'AppDividend', 'Krunal', 'Software', 50);

Now, run the query and you will see in the table that one row is added.

So, we have successfully inserted data into the Database.

Insert Multiple Values in Database

We can also insert the multiple values to the table. See the following query.

INSERT INTO Apps (AppID, AppName, CreatorName, AppCategory, AppPrice)
VALUES (2, 'Escrow', 'LVVM', 'Fashion', 60),
       (3, 'KGB', 'MJ', 'Music', 70),
       (4, 'Moscow', 'Mayor', 'Area', 80),
       (5, 'MoneyControl', 'Mukesh', 'Investment', 90),
       (6, 'Investing', 'Bill', 'Stocks', 100);

After running the above query, five more rows are added to the Apps table.

If you are adding values for all of the columns of a table, you do not need to specify the column names in a SQL query. However, make sure the order of the values is in the same order as the columns in a table. 
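
For example — the row values here are made up for illustration — the following insert omits the column list, so it must supply one value per column in the table's order:

INSERT INTO Apps VALUES (7, 'Notes', 'Alice', 'Productivity', 0);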

Populate one table using another table

You can populate a table through a SELECT statement over another table, provided that the other table has the set of fields required to fill the first table. See the following SQL query. Let’s try to add the data to the DummyApp table.

INSERT INTO DummyApp (AppID, AppName) SELECT AppID, AppName FROM Apps;

In the above query, we are inserting two column values inside the DummyApp table using Apps table.

The DummyApp table has only two columns which are AppID and AppName.

The Apps table also contains those two columns. That is why we can easily add the values from one table to the other. If you run the above query and you have set up the DummyApp table, then all the values from the Apps table will be copied to the DummyApp table.
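
A quick way to verify the copy — my own addition — is to compare row counts after the insert:

SELECT COUNT(*) FROM Apps;
SELECT COUNT(*) FROM DummyApp; -- should match the Apps count after the copy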

Insert The Data Only in Specified Columns

It is also possible to only insert data in the specific columns. See the following query.

INSERT INTO Apps (AppID, AppName) VALUES (10, 'Stavanger');

So, the above query only inserts one row and two column values.

All the other column values should be NULL for that particular row.

Here, you need to be very careful, because if some column does not allow NULL values, then the SQL engine throws an error and the data will not be inserted into the database.

Finally, SQL Insert Query Tutorial | SQL INSERT INTO Statement Example is over.

The post SQL Insert Query Tutorial | SQL INSERT INTO Statement Example appeared first on AppDividend.

Percona XtraDB Cluster Operator 0.3.0 Early Access Release Is Now Available

Percona announces the release of Percona XtraDB Cluster Operator 0.3.0 early access.

The Percona XtraDB Cluster Operator simplifies the deployment and management of Percona XtraDB Cluster in a Kubernetes or OpenShift environment. It extends the Kubernetes API with a new custom resource for deploying, configuring and managing the application through the whole life cycle.

You can install the Percona XtraDB Cluster Operator on Kubernetes or OpenShift. While the operator does not support all the Percona XtraDB Cluster features in this early access release, instructions on how to install and configure it are already available along with the operator source code, hosted in our Github repository.
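
For a rough idea of the workflow, here is a hypothetical install sketch; the manifest names follow the layout of the Github repository mentioned above and may differ between releases:

git clone https://github.com/percona/percona-xtradb-cluster-operator
cd percona-xtradb-cluster-operator
kubectl apply -f deploy/crd.yaml       # register the custom resource definition
kubectl apply -f deploy/rbac.yaml      # service account and roles
kubectl apply -f deploy/operator.yaml  # the operator deployment itself
kubectl apply -f deploy/cr.yaml        # request a Percona XtraDB Cluster instance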

The Percona XtraDB Cluster Operator is an early access release. Percona doesn’t recommend it for production environments.

Fixed Bugs
  • CLOUD-148: Pod Disruption Budget code caused the wrong configuration to be applied for ProxySQL and lacked support for multiple availability zones.
  • CLOUD-138: The restore-backup.sh script was exiting with an error because its code was not taking into account images version numbers.
  • CLOUD-118: The backup recovery job was unable to start if Persistent Volume for backup and Persistent Volume for Pod-0 were placed in different availability zones.

Percona XtraDB Cluster is an open source, cost-effective and robust clustering solution for businesses. It integrates Percona Server for MySQL with the Galera replication library to produce a highly-available and scalable MySQL® cluster complete with synchronous multi-master replication, zero data loss and automatic node provisioning using Percona XtraBackup.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

Terraform on OCI – Building MySQL On Compute – initial setups

I have written previous blog posts about Oracle Cloud OCI, and this series continues. My post titled IaaS Getting Started got us acquainted with important security-focused items like Compartments and network services like NAT and Internet Gateways. Then I posted about building MySQL on Compute with scripting, using a mix of OCI Web console navigation… Read More »

Simple STONITH with ProxySQL and Orchestrator

Distributed systems are hard – I just want to echo that. In MySQL, we have quite a number of options to run highly available systems. However, real fault tolerant systems are difficult to achieve.

Take for example a common use case of multi-DC replication where Orchestrator is responsible for managing the topology, while ProxySQL takes care of the routing/proxying to the correct server, as illustrated below. A rare case you might encounter is that the primary MySQL node01 on DC1 might have a blip of a couple of seconds. Because Orchestrator uses an adaptive health check – it not only checks the node itself but also consults its replicas – it can react really fast and promote the node in DC2.

Why is this problematic?

The problem occurs when node01 resolves its temporary issue. A race condition could occur within ProxySQL that could mark it back as read-write. You can increase an “offline” period within ProxySQL to make sure Orchestrator rediscovers the node first. Hopefully, it will set it to read-only immediately, but what we want is an extra layer of predictable behavior. This normally comes in the form of STONITH – by taking the other node out of action, we practically reduce the risk of conflict close to zero.

The solution

Orchestrator supports hooks to do this, but we can also do it easily with ProxySQL using its built-in scheduler. In this case, we create a script where Orchestrator is consulted frequently for any nodes recently marked as downtimed, and we also mark them as such in ProxySQL. The script proxy-oc-tool.sh can be found on Github.

What does this script do? In the case of our topology above:

  • If for any reason, connections to MySQL on node01 fail, Orchestrator will pick node02  as the new primary.
  • Since node01 is unreachable –  cannot modify read_only nor update replication – it will be marked as downtimed with lost-in-recovery as the reason.
  • If node01 comes back online, and ProxySQL sees it before the next Orchestrator check, it can rejoin the pool. Then it’s possible that you have two writeable nodes in the hostgroup.
  • To prevent the condition above, as soon as the node is marked with downtime from Orchestrator, the script proxy-oc-tool.sh will mark it OFFLINE_SOFT so it never rejoins the writer_hostgroup  in ProxySQL.
  • Once an operator fixes node01 i.e. reattaches as a replica and removes the downtimed mark, the script proxy-oc-tool.sh will mark it back ONLINE  automatically.
  • Additionally, if DC1 gets completely disconnected from DC2 and AWS, the script will not be able to reach Orchestrator’s raft-leader and will set all nodes to OFFLINE_SOFT preventing isolated writes on DC1.

Adding the script to ProxySQL is simple. First you download and set permissions. I placed the script in /usr/bin/ – but you can put it anywhere accessible by the ProxySQL process.

wget https://gist.githubusercontent.com/dotmanila/1a78ef67da86473c70c7c55d3f6fda89/raw/b671fed06686803e626c1541b69a2a9d20e6bce5/proxy-oc-tool.sh
chmod 0755 proxy-oc-tool.sh
mv proxy-oc-tool.sh /usr/bin/

Note, you will need to edit some variables in the script, i.e. ORCHESTRATOR_PATH.

Then load into the scheduler:

INSERT INTO scheduler (interval_ms, filename) VALUES (5000, '/usr/bin/proxy-oc-tool.sh');
LOAD SCHEDULER TO RUNTIME;
SAVE SCHEDULER TO DISK;

I’ve set the interval to five seconds since inside ProxySQL, a shunned node will need about 10 seconds before the next read-only check is done. This way, this script is still ahead of ProxySQL and is able to mark the dead node as OFFLINE_SOFT.
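
To confirm the scheduler entry is active and watch the script's effect, you can query the ProxySQL admin interface (a generic check of mine, not from the original post):

-- from the ProxySQL admin interface (port 6032 by default)
SELECT * FROM runtime_scheduler;
SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;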

Because this is the simple version, there are obvious additional improvements to be made in the script, like using scheduler args to specify ORCHESTRATOR_PATH and implementing error checking.

MySQL JSON Document Store

MySQL 8.0 provides another way to handle JSON documents, actually in a "Not only SQL" (NoSQL) approach... In other words, if you need/want to manage JSON documents (collections) in a non-relational manner, with CRUD (acronym for Create/Read/Update/Delete) operations then you can use MySQL 8.0! Did you know that?
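
As a quick illustration — my own sketch, not from the post, with a made-up products collection — MySQL Shell's X DevAPI exposes exactly those CRUD operations:

// mysqlsh --js, connected over X Protocol (default port 33060)
var coll = db.createCollection('products');         // Create a collection
coll.add({name: 'laptop', price: 1200}).execute();  // Create a document
coll.find('price > 1000').execute();                // Read
coll.modify('name = :n').set('price', 999).bind('n', 'laptop').execute(); // Update
coll.remove('name = :n').bind('n', 'laptop').execute();                   // Delete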

How does a relational database execute SQL statements and prepared statements

Introduction

In this article, we are going to see how a relational database executes SQL statements and prepared statements.

SQL statement lifecycle

The main database modules responsible for processing a SQL statement are: the Parser, the Optimizer, the Executor. A SQL statement execution looks like in the following diagram.

Parser

The Parser checks the SQL statement and ensures its validity. The statements are verified both syntactically (the statement keywords must be properly spelled and follow the SQL language guidelines) and semantically (the referenced tables and columns exist in the database). During... Read More
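
To make the distinction concrete, here is an illustrative sketch of mine using MySQL's server-side prepared statement syntax, with a hypothetical apps table:

-- the statement text is parsed and validated once...
PREPARE stmt FROM 'SELECT app_name FROM apps WHERE app_id = ?';
-- ...and can then be executed repeatedly with different parameters
SET @id = 1;
EXECUTE stmt USING @id;
SET @id = 2;
EXECUTE stmt USING @id;
-- release the statement when done
DEALLOCATE PREPARE stmt;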

The post How does a relational database execute SQL statements and prepared statements appeared first on Vlad Mihalcea.

dbdeployer cookbook - Advanced techniques

In the previous post about the dbdeployer recipes we saw the basics of using the cookbook command and the simpler tutorials that the recipes offer.

Here we will see some more advanced techniques, and more demanding examples.


We saw that the recipe for a single deployment would get a NOTFOUND when no versions were available, or the highest MySQL version when one was found.

$ dbdeployer cookbook show single | grep version=
version=$1
[ -z "$version" ] && version=8.0.16

But what if we want the latest Percona Server or MariaDB for this recipe? One solution would be to run the script with an argument, but we can ask dbdeployer to find the most recent version for a given flavor and use it in our recipe:

$ dbdeployer cookbook show single --flavor=percona | grep version=
version=$1
[ -z "$version" ] && version=ps8.0.15

$ dbdeployer cookbook show single --flavor=pxc | grep version=
version=$1
[ -z "$version" ] && version=pxc5.7.25

$ dbdeployer cookbook show single --flavor=mariadb | grep version=
version=$1
[ -z "$version" ] && version=ma10.4.3

This works for all the recipes that don’t require a given flavor. When one is indicated (see dbdeployer cookbook list) you can override it using --flavor, but do that at your own risk. Running the ndb recipe using pxc flavor won’t produce anything usable.


Replication between sandboxes

When I proposed dbdeployer support for NDB, the immediate reaction was that this was good to test cluster-to-cluster replication. Although I did plenty of such topologies in one of my previous jobs, I had limited experience replicating between single or composite sandboxes. Thus, I started thinking about how to do it. In the old MySQL-Sandbox, I had an option --slaveof that allowed a single sandbox to replicate from an existing one. I did not implement the same thing in dbdeployer, because that solution looked limited, and only useful in a few scenarios.

I wanted something more dynamic, and initially I thought of creating a grandiose scheme, involving custom templates and user-defined fillers. While I may end up doing that some day, I quickly realized that it was overkill for this purpose, and that the sandboxes had already all the information needed to replicate from and to every other sandbox. I just had to expose the data in such a way that it can be used to plug one sandbox to the other.

Now every sandbox has a script named replicate_from, and a companion script called metadata. Using a combination of the two (in fact, replicate_from on the would-be replica calls metadata from the donor) we can quickly define the replication command needed for most situations.


Replication between single sandboxes

Before we tackle the most complex one, let’s demonstrate that the system works with a simple case.

There is a recipe named replication_between_single that creates a file named, aptly, ./recipes/replication-between-single.sh.

If you run it, you will see something similar to the following:

$ ./recipes/replication-between-single.sh 5.7.25
+ dbdeployer deploy single 5.7.25 --master --gtid --sandbox-directory=msb_5_7_25_1 --port-as-server-id
Database installed in $HOME/sandboxes/msb_5_7_25_1
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ dbdeployer deploy single 5.7.25 --master --gtid --sandbox-directory=msb_5_7_25_2 --port-as-server-id
Database installed in $HOME/sandboxes/msb_5_7_25_2
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ dbdeployer sandboxes --full-info
.--------------.--------.---------.---------.--------.-------.--------.
| name         | type   | version | ports   | flavor | nodes | locked |
+--------------+--------+---------+---------+--------+-------+--------+
| msb_5_7_25_1 | single | 5.7.25  | [5725 ] | mysql  |     0 |        |
| msb_5_7_25_2 | single | 5.7.25  | [5726 ] | mysql  |     0 |        |
'--------------'--------'---------'---------'--------'-------'--------'
0
+ $HOME/sandboxes/msb_5_7_25_1/replicate_from msb_5_7_25_2
Connecting to $HOME/sandboxes/msb_5_7_25_2
--------------
CHANGE MASTER TO master_host="127.0.0.1",
master_port=5726,
master_user="rsandbox",
master_password="rsandbox"
, master_log_file="mysql-bin.000001", master_log_pos=4089
--------------

--------------
start slave
--------------

Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 4089
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 4089
Retrieved_Gtid_Set:
Executed_Gtid_Set: 00005725-0000-0000-0000-000000005725:1-16
Auto_Position: 0
0
# Inserting data in msb_5_7_25_2
+ $HOME/sandboxes/msb_5_7_25_2/use -e 'create table if not exists test.t1 (id int not null primary key, server_id int )'
+ $HOME/sandboxes/msb_5_7_25_2/use -e 'insert into test.t1 values (1, @@server_id)'
# Retrieving data from msb_5_7_25_1
+ $HOME/sandboxes/msb_5_7_25_1/use -e 'select *, @@port from test.t1'
+----+-----------+--------+
| id | server_id | @@port |
+----+-----------+--------+
|  1 |      5726 |   5725 |
+----+-----------+--------+

The script deploys two sandboxes of the chosen version, using different directory names (dbdeployer takes care of choosing a free port) and then starts replication between the two using $SANDBOX1/replicate_from $SANDBOX2. Then a quick test shows that the data created in a sandbox can be retrieved in the other.


Replication between group replication clusters

The method used to replicate between two group replications is similar to the one seen for single sandboxes. The script replicate_from on the group top directory delegates the replication task to its first node, which points to the second group.

$ ./recipes/replication-between-groups.sh 5.7.25
+ dbdeployer deploy replication 5.7.25 --topology=group --concurrent --port-as-server-id --sandbox-directory=group_5_7_25_1
[...]
+ dbdeployer deploy replication 5.7.25 --topology=group --concurrent --port-as-server-id --sandbox-directory=group_5_7_25_2
[...]
+ dbdeployer sandboxes --full-info
.----------------.---------------------.---------.----------------------------------------.--------.-------.--------.
| name           | type                | version | ports                                  | flavor | nodes | locked |
+----------------+---------------------+---------+----------------------------------------+--------+-------+--------+
| group_5_7_25_1 | group-multi-primary | 5.7.25  | [20226 20351 20227 20352 20228 20353 ] | mysql  |     3 |        |
| group_5_7_25_2 | group-multi-primary | 5.7.25  | [20229 20354 20230 20355 20231 20356 ] | mysql  |     3 |        |
'----------------'---------------------'---------'----------------------------------------'--------'-------'--------'
0
+ $HOME/sandboxes/group_5_7_25_1/replicate_from group_5_7_25_2
Connecting to $HOME/sandboxes/group_5_7_25_2/node1
--------------
CHANGE MASTER TO master_host="127.0.0.1",
master_port=20229,
master_user="rsandbox",
master_password="rsandbox"
, master_log_file="mysql-bin.000001", master_log_pos=1082
--------------

--------------
start slave
--------------

Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 1082
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 1082
Retrieved_Gtid_Set:
Executed_Gtid_Set: 00020225-bbbb-cccc-dddd-eeeeeeeeeeee:1-3
Auto_Position: 0
0
# Inserting data in group_5_7_25_2 node1
+ $HOME/sandboxes/group_5_7_25_2/n1 -e 'create table if not exists test.t1 (id int not null primary key, server_id int )'
+ $HOME/sandboxes/group_5_7_25_2/n1 -e 'insert into test.t1 values (1, @@server_id)'
# Retrieving data from one of group_5_7_25_1 nodes
# At this point, the data was replicated twice
+ $HOME/sandboxes/group_5_7_25_1/n2 -e 'select *, @@port from test.t1'
+----+-----------+--------+
| id | server_id | @@port |
+----+-----------+--------+
|  1 |     20229 |  20227 |
+----+-----------+--------+

The interesting thing about this recipe is that the sandboxes are created using the option --port-as-server-id. While it was used in the replication between single sandboxes out of an excess of caution, in this recipe, and in all the recipes involving compound sandboxes, it is a necessity, as the replication would fail if the primary and replica servers had the same server_id.

All the work is done by the replicate_from script, which knows how to check whether the target is a single sandbox or a composite one, and where to find the primary server.

Using a similar method, we can run more recipes along the same lines.


Replication between different things

I won’t reproduce the output of all recipes here. I will just mention what every recipe needs to prepare to ensure a positive outcome.

  • Replication between NDB clusters. Nothing special here, except making sure to use a MySQL Cluster tarball. If you don’t, dbdeployer will detect it and refuse the installation. For the rest, it’s like replication between groups.
  • Replication between master/slave. This is a bit trickier, because the replication data comes to a master, and if we want to propagate it to its slaves we need to activate log-slave-updates. The recipe shows how to do it.
  • Replication between group and master/slave. In addition to the trick mentioned in the previous recipe, we need to make sure that the master/slave deployment is using GTID.
  • Replication between master/slave and group. See the previous one.
  • Replication between group and single (and vice versa). We just need to make sure the single sandbox has GTID enabled.

Replication between different versions

This is a simple recipe that comes from a feature request. All you need to do is make sure that the version on the master is lower than the one on the slaves. The recipe script replication-multi-versions.sh looks for tarballs of 5.6, 5.7, and 8.0, but you can start it with any three versions you’d like. For example:

./recipes/replication-multi-versions.sh 5.7.23 5.7.24 5.7.25

The first version will be used as the master.


Circular replication

I didn’t want to do this, as I consider ring replication to be weak and difficult to handle. I stated that much in the feature request and in the list of dbdeployer features. But then I saw that with the latest enhancements it was so easy, that I had to at least make a recipe for it. And then you have it. recipes/circular-replication.sh does what it promises, but the burden of maintenance is still on the user’s shoulders. I suggest looking at it, and then forgetting it.


Upgrade from MySQL 5.5 to 8.0 (through 5.6 and 5.7)

This is one of the most advanced recipes. To enjoy it, you need to have expanded tarballs from 5.5, 5.6, 5.7, and 8.0.

Provided that you do, running this script will do the following:

  1. deploy MySQL 5.5
  2. Create a table upgrade_log and insert some data.
  3. deploy MySQL 5.6
  4. run mysql_upgrade (through dbdeployer)
  5. Add data to the log table
  6. deploy MySQL 5.7
  7. run mysql_upgrade again
  8. add data to the log table
  9. deploy MySQL 8.0
  10. run mysql_upgrade for the last time
  11. Show the data from the table

Here’s a full transcript of the operation. It’s interesting to see how the upgrade procedure has changed from older versions to current ones.


$ ./recipes/upgrade.sh

# ****************************************************************************
# Upgrading from 5.5.53 to 5.6.41
# ****************************************************************************
+ dbdeployer deploy single 5.5.53 --master
Database installed in $HOME/sandboxes/msb_5_5_53
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started
0
+ dbdeployer deploy single 5.6.41 --master
Database installed in $HOME/sandboxes/msb_5_6_41
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ $HOME/sandboxes/msb_5_5_53/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_5_53/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_5_53 msb_5_6_41
stop $HOME/sandboxes/msb_5_5_53
stop $HOME/sandboxes/msb_5_6_41
Data directory msb_5_5_53/data moved to msb_5_6_41/data
. sandbox server started
Looking for 'mysql' as: $HOME/opt/mysql/5.6.41/bin/mysql
Looking for 'mysqlcheck' as: $HOME/opt/mysql/5.6.41/bin/mysqlcheck
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
mysql.columns_priv OK
mysql.db OK
mysql.event OK
mysql.func OK
mysql.general_log OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.ndb_binlog_index OK
mysql.plugin OK
mysql.proc OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.servers OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Running 'mysql_fix_privilege_tables'...
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
test.upgrade_log OK
OK

The data directory from msb_5_6_41/data is preserved in msb_5_6_41/data-msb_5_6_41
The data directory from msb_5_5_53/data is now used in msb_5_6_41/data
msb_5_5_53 is not operational and can be deleted
+ dbdeployer delete msb_5_5_53
List of deployed sandboxes:
$HOME/sandboxes/msb_5_5_53
Running $HOME/sandboxes/msb_5_5_53/stop
Running rm -rf $HOME/sandboxes/msb_5_5_53
Directory $HOME/sandboxes/msb_5_5_53 deleted
+ $HOME/sandboxes/msb_5_6_41/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_5_6_41/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers       | urole    | ts                  |
+----+-----------+------------+----------+---------------------+
|  1 |      5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
|  2 |      5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
+----+-----------+------------+----------+---------------------+

# ****************************************************************************
# The upgraded database is now upgrading from 5.6.41 to 5.7.25
# ****************************************************************************
+ dbdeployer deploy single 5.7.25 --master
Database installed in $HOME/sandboxes/msb_5_7_25
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ $HOME/sandboxes/msb_5_6_41/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_6_41/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_6_41 msb_5_7_25
stop $HOME/sandboxes/msb_5_6_41
stop $HOME/sandboxes/msb_5_7_25
Data directory msb_5_6_41/data moved to msb_5_7_25/data
.. sandbox server started
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Checking system database.
mysql.columns_priv OK
mysql.db OK
mysql.engine_cost OK
mysql.event OK
mysql.func OK
mysql.general_log OK
mysql.gtid_executed OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.ndb_binlog_index OK
mysql.plugin OK
mysql.proc OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.server_cost OK
mysql.servers OK
mysql.slave_master_info OK
mysql.slave_relay_log_info OK
mysql.slave_worker_info OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Upgrading the sys schema.
Checking databases.
sys.sys_config OK
test.upgrade_log
error : Table rebuild required. Please do "ALTER TABLE `upgrade_log` FORCE" or dump/reload to fix it!

Repairing tables
`test`.`upgrade_log`
Running : ALTER TABLE `test`.`upgrade_log` FORCE
status : OK
Upgrade process completed successfully.
Checking if update is needed.

The data directory from msb_5_7_25/data is preserved in msb_5_7_25/data-msb_5_7_25
The data directory from msb_5_6_41/data is now used in msb_5_7_25/data
msb_5_6_41 is not operational and can be deleted
+ dbdeployer delete msb_5_6_41
List of deployed sandboxes:
$HOME/sandboxes/msb_5_6_41
Running $HOME/sandboxes/msb_5_6_41/stop
Running rm -rf $HOME/sandboxes/msb_5_6_41
Directory $HOME/sandboxes/msb_5_6_41 deleted
+ $HOME/sandboxes/msb_5_7_25/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_5_7_25/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers       | urole    | ts                  |
+----+-----------+------------+----------+---------------------+
|  1 |      5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
|  2 |      5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
|  3 |      5641 | 5.6.41-log | original | 2019-04-01 20:27:51 |
|  4 |      5725 | 5.7.25-log | upgraded | 2019-04-01 20:28:01 |
+----+-----------+------------+----------+---------------------+

# ****************************************************************************
# The further upgraded database is now upgrading from 5.7.25 to 8.0.15
# ****************************************************************************
+ dbdeployer deploy single 8.0.15 --master
Database installed in $HOME/sandboxes/msb_8_0_15
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started
0
+ $HOME/sandboxes/msb_5_7_25/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_7_25/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_7_25 msb_8_0_15
stop $HOME/sandboxes/msb_5_7_25
Attempting normal termination --- kill -15 10357
stop $HOME/sandboxes/msb_8_0_15
Data directory msb_5_7_25/data moved to msb_8_0_15/data
... sandbox server started
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Upgrading system table data.
Checking system database.
mysql.columns_priv OK
mysql.component OK
mysql.db OK
mysql.default_roles OK
mysql.engine_cost OK
mysql.func OK
mysql.general_log OK
mysql.global_grants OK
mysql.gtid_executed OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.ndb_binlog_index OK
mysql.password_history OK
mysql.plugin OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.role_edges OK
mysql.server_cost OK
mysql.servers OK
mysql.slave_master_info OK
mysql.slave_relay_log_info OK
mysql.slave_worker_info OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Found outdated sys schema version 1.5.1.
Upgrading the sys schema.
Checking databases.
sys.sys_config OK
test.upgrade_log OK
Upgrade process completed successfully.
Checking if update is needed.

The data directory from msb_8_0_15/data is preserved in msb_8_0_15/data-msb_8_0_15
The data directory from msb_5_7_25/data is now used in msb_8_0_15/data
msb_5_7_25 is not operational and can be deleted
+ dbdeployer delete msb_5_7_25
List of deployed sandboxes:
$HOME/sandboxes/msb_5_7_25
Running $HOME/sandboxes/msb_5_7_25/stop
Running rm -rf $HOME/sandboxes/msb_5_7_25
Directory $HOME/sandboxes/msb_5_7_25 deleted
+ $HOME/sandboxes/msb_8_0_15/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_8_0_15/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers       | urole    | ts                  |
+----+-----------+------------+----------+---------------------+
|  1 |      5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
|  2 |      5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
|  3 |      5641 | 5.6.41-log | original | 2019-04-01 20:27:51 |
|  4 |      5725 | 5.7.25-log | upgraded | 2019-04-01 20:28:01 |
|  5 |      5725 | 5.7.25-log | original | 2019-04-01 20:28:07 |
|  6 |      8015 | 8.0.15     | upgraded | 2019-04-01 20:28:20 |
+----+-----------+------------+----------+---------------------+

What else can we do?

The replication recipes seen so far use the same principles. The method used in these recipes doesn’t work for all-masters and fan-in replication, because mixing named channels and nameless ones is not allowed. Also, there are things that don’t respond to replication commands at all, like TiDB. But it should be easy to enhance the current scripts (or to add some more specialized ones) to include these exceptions as well. Given the recent wave of collaboration, I expect it will happen relatively soon.

SQL Create Table Statement Example | Create Table in SQL Tutorial

SQL Create Table Statement Example | Create Table in SQL Tutorial is today’s topic. A CREATE TABLE statement is used to create a new table in the database. SQL is the standard language for manipulating, storing, and retrieving data in databases. SQL is used in MySQL, SQL Server, MS Access, Oracle, Sybase, Informix, Postgres, and other database systems. SQL stands for Structured Query Language. SQL lets you access and manipulate databases. RDBMS stands for Relational Database Management System. RDBMS is the basis for SQL and for all modern database systems such as MS SQL Server, IBM DB2, Oracle, MySQL, and Microsoft Access.

SQL Create Table Statement Example

Working with SQL for data analysis and manipulation sometimes requires creating new tables. Do you want to store the output of your SQL queries? Do you need to pull new data sources (for example, CSV files) into your data analysis? Do you want to store your transformed data without deleting your original data sets? In all of those scenarios, you first have to know how to create tables in SQL.

Creating a basic table involves naming the table and defining its columns and each column’s data type.

The general syntax for creating a table in SQL is the following.

CREATE TABLE table_name (
    column1 datatype,
    column2 datatype,
    column3 datatype,
    ....
);

The column parameters indicate the names of the columns of a table.

The data type parameter specifies the type of data the column can hold (e.g., varchar, integer, date, boolean, etc.).

CREATE TABLE is a SQL keyword. You should always have it at the beginning of your SQL statement.

CREATE TABLE is the keyword telling the DBMS what you want to do. In this case, you want to create the new table. The unique name or identifier for the table follows the CREATE TABLE statement.

Then in brackets comes the list defining each column in the table and what sort of data type it is.

Let’s take the example of creating a table.

CREATE TABLE Apps (
    AppID int,
    AppName varchar(255),
    CreatorName varchar(255),
    AppCategory varchar(255),
    AppPrice int
);

Now, run the query. I am using Sequel Pro for running the SQL queries. I have already connected the client to the database. After running the query, the Apps table will be created in the database.

It has created AppID, AppName, CreatorName, AppCategory, and AppPrice columns.
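If you want to double-check the structure from the client, MySQL can describe the table for you; a small optional check (DESCRIBE and SHOW CREATE TABLE are standard MySQL statements):

DESCRIBE Apps;
-- or, to see the full DDL the server stored:
SHOW CREATE TABLE Apps;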

Create Table Using Another Table

A copy of an existing table can also be created using CREATE TABLE.

The new table gets the same column definitions. All columns or specific columns can be selected based on your requirement.

If you create a new table using the existing one, then a new table will be filled with all the existing values from the old table.

The syntax for creating a table using another table is following.

CREATE TABLE new_table_name AS
SELECT column1, column2, ...
FROM existing_table_name
WHERE ....;

Let’s see the following example.

CREATE TABLE DummyApp AS
SELECT AppID, AppName
FROM Apps;

So, we have created the DummyApp table from the Apps table.
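A related trick, in case you want only the column definitions without copying any rows, is a WHERE clause that never matches. This is a sketch using the same tables (EmptyApps is just an illustrative name):

CREATE TABLE EmptyApps AS
SELECT AppID, AppName
FROM Apps
WHERE 1 = 0; -- matches no rows, so only the structure is copied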

Create Table With Extra Parameters

After defining the data type of a column, you can add some extra parameters. These are optional arguments and mostly technical things, but I will highlight the three most important ones:

  1. NOT NULL: If you add this to a column, you can’t insert NULL values into it.
  2. UNIQUE: If you add this to a column, you can’t insert the same value twice. It is especially important when you store unique user IDs; in these cases, duplicate values are not allowed.
  3. PRIMARY KEY: Practically speaking, this is the combination of NOT NULL and UNIQUE, but it has some technical advantages as well. A primary key is a field in a table which uniquely identifies each row/record in the table. Primary keys must contain unique values, and a primary key column cannot have NULL values. A table can have only one primary key, which may consist of one or multiple fields. When multiple fields are used as a primary key, they are called a composite key (see the sketch after this list).
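To illustrate the composite key mentioned in point 3, here is a minimal sketch (the table and column names are made up for illustration):

CREATE TABLE exam_attempts (
    student_id INTEGER NOT NULL,
    exam_id INTEGER NOT NULL,
    score DECIMAL,
    PRIMARY KEY (student_id, exam_id) -- composite key: each (student, exam) pair must be unique
);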

Let’s see the following example by creating a new table with extra parameters.

CREATE TABLE test_results (
    name TEXT,
    student_id INTEGER PRIMARY KEY,
    birth_date DATE,
    test_result DECIMAL NOT NULL,
    grade TEXT NOT NULL,
    passed BOOLEAN NOT NULL
);

So, in the above code, we have defined student_id as a PRIMARY KEY, and the test_result, grade, and passed columns have the NOT NULL attribute. That means those columns do not accept NULL values when inserting rows into the table; if a NULL is found, the statement will throw an error.
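For example, trying to put a NULL into one of those NOT NULL columns fails (the row values here are made up; the error shown is what MySQL reports for a NOT NULL violation):

INSERT INTO test_results (name, student_id, birth_date, test_result, grade, passed)
VALUES ('John Doe', 1, '2000-01-01', 87.5, 'B', NULL);
-- ERROR 1048 (23000): Column 'passed' cannot be null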

The output of the CREATE TABLE statement is the following.

 

Finally, SQL Create Table Statement Example | Create Table in SQL Tutorial is over.

The post SQL Create Table Statement Example | Create Table in SQL Tutorial appeared first on AppDividend.

Authentication in MariaDB 10.4 — understanding the changes

MariaDB Server 10.4 came with a whole lot of Security related changes. Some of them are merely optimizations (like MDEV-15649), some improve existing features to be more robust (MDEV-15473, MDEV-7598) or convenient (MDEV-12835, MDEV-16266). Some are MySQL compatibility features, requested by our users (MDEV-7597, MDEV-13095). But the first thing any MariaDB Server user, whether an […]

The post Authentication in MariaDB 10.4 — understanding the changes appeared first on MariaDB.org.

Support for Admin Interface in MySQL server

For some time, there have been several requests to the MySQL dev team to add dedicated support for an admin to connect to the MySQL server. We received a contribution from Facebook, bug#90395, to add this functionality, which we then paired with more requirements, implemented in WL#12138 and delivered in MySQL Server 8.0.14.…


The 10 Things We Built on Twitch in March

This year feels like it's moving too fast! March is done, and it was an eventful month for us here at Scotch.

Twitch streams have been quite the learning experience for me: between video and audio gear, and learning how to read chat, code, and stream at the same time while keeping it interesting. Big learning month!

On these Twitch streams, we build out demos and mini-projects, and hang out while having fun.

Let's take a look at the 10 things we built on these Twitch streams in March. You really should check them out. We'll be streaming today if you want to join! https://www.twitch.tv/chrisoncode

1. Building a React Calendar

In this stream, a calendar was built from scratch using ReactJs and Styled Components. Each React component was written using Hooks, and a date range can be chosen and displayed when you hover on the calendar.

You have to click through to the demo to see how this one works.

https://codesandbox.io/s/yqoq6y6j3z?runonclick=1&view=preview

2. Building a Pokemon Battle Simulator using Vue and PokeAPI

Here, we built a fun battle simulator with Pokemons attacking each other, where you can watch their health bars drop. This was built with Vue and utilizes the Vue lifecycle methods to make the app tick. It got trickier than I expected but ended up being a fun one.

https://codesandbox.io/s/jnorz9wjzy?runonclick=1&view=preview

3. Building a Meme Generator in Vue

Chris built a Gif generator using the Vue CLI and the Giphy API. This mini-project also uses CSS grid. In this application, you can search for a certain Gif using the provided search bar, and you can also see trending Gifs on display.

https://codesandbox.io/s/k57o0j3wov?runonclick=1&view=preview

4. Build a Markdown Parser in React

In this stream, a markdown parser was built which converts markdown into formatted text as you type. This application was built with React, Styled Components, react-markdown and PrismJS. The markdown parser handles header text, normal text, links, code blocks, block quotes and more.

https://codesandbox.io/s/nwm83w9y1l?runonclick=1&view=preview

5. Using Tailwind to Build a Dashboard in React

Tailwind is a fun utility-first framework that gives you low-level classes to customize your site. I've heard everyone talking about and using Tailwind, so I thought we could try it out. This was about my 2nd time using Tailwind.

https://codesandbox.io/s/x55vj11rq?runonclick=1&view=preview

6. Building a TailwindCSS Cheatsheet

While I am really liking Tailwind, I find the docs a little hard to navigate. We made a cheatsheet that quickly searches for the right class you want!

We even deployed it to a live Netlify site! https://tailwind-cheatsheet.netlify.com/

https://codesandbox.io/s/yjlvn462wx?runonclick=1&view=preview

7. Build a Modal Component in React

In this stream, a modal component was built in React. The app utilizes React Hooks to handle state and lifecycle actions, and styled-components to handle styling.

https://codesandbox.io/s/yvz4royxp9?runonclick=1&view=preview

8. Build a Trivia App in Vue

This was probably my favorite stream. Building a trivia app automatically gets the chatroom involved and we had a lot of fun answering the random questions that showed up.

https://codesandbox.io/s/n9lz67mlrm?runonclick=1&view=preview

9. Build a Stripe-like Menu Carousel in Vue

This is one that you have to click through and see. The hover and CSS animations are what we focused on. Pulled from Stripe.com's website since they always have the best UI tricks.

https://codesandbox.io/s/1yoz2432m7?runonclick=1&view=preview

10. React Infinite Scroll Challenge

This stream is a solution to Scotch code challenge #16. In this stream, an image gallery masonry with infinite scroll was built with React, the Unsplash API, react-infinite-scroll-component and Bulma. CSS grid was also used to create the masonry effect on the gallery.

https://codesandbox.io/s/yvnr3qo109?runonclick=1&view=preview

Let's Keep Going!

We are stoked about these streams, we'll keep doing them, and you can join us weekly here. Got a fun application or mini-project to build out in these streams? Know any way we can make these streams better?

Let us know by mentioning Scotch on Twitter or letting Chris know. Happy keyboard slapping!

MySQL Error: Too many connections!

When your application gets the error "too many connections", the underlying problem might be caused by multiple things. For an in-depth understanding of how MySQL handles connections, read this blog.

In this blog we will investigate why these problems happen and how to solve them when running MySQL installed on Debian/Ubuntu using systemd (the default MySQL packages).

OS: Debian 9
MySQL Server version: 5.7.25 MySQL Community Server (GPL)
(will most likely be the same for MySQL 8.0 versions)

The goal is to have 10.000 working connections to MySQL!

The default value for max_connections is 151 connections, so the first step is to increase the max_connections variable to 10.000.
This is documented in the manuals here:
- https://dev.mysql.com/doc/refman/5.7/en/too-many-connections.html
- https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_max_connections

max_connections is a dynamic setting, so let's increase the value to 10.000 and see what happens.
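The variable can be raised on the fly with SET GLOBAL, but for the server to request it at startup (which is where the log lines below come from) it also goes into the configuration file. A sketch, assuming the stock Debian config layout (the exact path may vary):

mysql> SET GLOBAL max_connections = 10000;

# /etc/mysql/mysql.conf.d/mysqld.cnf (path is an assumption)
[mysqld]
max_connections = 10000

root@debian:~# systemctl restart mysql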

root@debian:~# mysql -uroot -proot -se "select @@max_connections"
@@max_connections
214
Hmmm, it looks like we only got 214 connections. Let's look at the error log:

2019-04-01T06:29:48.301871Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 50000)
2019-04-01T06:29:48.302000Z 0 [Warning] Changed limits: max_connections: 214 (requested 10000)
2019-04-01T06:29:48.302004Z 0 [Warning] Changed limits: table_open_cache: 400 (requested 2000)
Looks like we hit some resource limit.
Let's look in the MySQL manual for running MySQL under systemd; it can be found here.
It turns out we need to increase the number of allowed open files for MySQL. Locate the systemd configuration folder for MySQL and create the file /etc/systemd/system/mysql.service.d/override.conf (the file can be called anything ending with .conf). You can also create or modify the override file with the native systemctl command: systemctl edit mysql

Add LimitNOFILE=10000 in file override.conf like:

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=10000
After this we need to reload the systemd daemon and restart the MySQL service:

root@debian:~# systemctl daemon-reload
root@debian:~# systemctl restart mysql
MySQL is now saying we have 9190 connections:

root@debian:~# mysql -uroot -proot -se "select @@max_connections"
@@max_connections
9190
So, MySQL uses some of the open files for additional work, and we need to set the limit a bit higher to get 10.000 connections. Let's set it to 11.000, then reload the systemd daemon and restart the MySQL service.
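At this point override.conf carries the higher limit (the same file as before, only the number changed; this listing is inferred from the final version shown further down):

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=11000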

root@debian:~# mysql -uroot -proot -se "select @@max_connections"
mysql: [Warning] Using a password on the command line interface can be insecure.
@@max_connections
10000
Good, now we have 10.000 connections available according to MySQL.

Let's run our application and verify we can get 10.000 connections. I use this small perl script to open 10.000 connections.

Just below 5.000 connections I get a new error in the application: "Can't create a new thread (errno 11)".
Let's have a look at the MySQL error log:

root@debian:~# tail -1 /var/log/mysql/error.log
2019-04-01T06:50:35.657397Z 0 [ERROR] Can't create thread to handle new connection(errno= 11)
I found this new limit by running the command below while the perl script was running:
watch -n.5 "mysql -uroot -proot -se'show status like \"%threads%\"'"

Strange. Where is this new limitation just below 5.000 connections coming from?

Looking at the resource limits for my MySQL daemon, I should have close to 8000 processes:

root@debian:~# cat /proc/$( pgrep -o mysql )/limits
Max processes             7929                 7929                 processes
Max open files            11000                11000                files
Let's look at the status report for my MySQL service:

root@debian:~# systemctl status mysql | grep Tasks
    Tasks: 127 (limit: 4915)
There seems to be an additional limit on Tasks that caps me at 4915 connections.
Let's expand our override.conf configuration to cover 11.000 tasks as well.

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=11000
TasksMax=11000
(remember to reload systemd and restart MySQL service after each change in override.conf)

Now we get just under 8.000 connections and hit the same error "Can't create a new thread (errno 11)", but this time it's because of the limit on max processes:

root@debian:~# cat /proc/$( pgrep -o mysql )/limits
Max processes             7929                 7929                 processes
Let's increase this limit to 11.000 in our override.conf:

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=11000
LimitNPROC=11000
TasksMax=11000
After reloading systemd configuration and restarting MySQL service I can now get 10.000 connections and the perl script runs without any errors!

Summary:
There are different limits to overcome when raising max_connections in MySQL:
- The default max connections is 151.
- At 214 connections you are limited by max open files.
- At 4915 you are limited by Max Tasks (systemd).
- Just below 8000 you are limited by the max number of processes.

By adding an override file (as shown above) you can overcome all these limits.

Remember to:
Look at the error messages in the application and in the MySQL error logs.
Look at the output from: cat /proc/$( pgrep -o mysql )/limits
Look at the output from: systemctl status mysql
Test your application (or use my perl script) and monitor that it works!
Monitor how many connections you have: mysql -uroot -p -e'show status like "Threads_connected"'

MySQL Backup Best Practices

Today is World Backup Day, so I thought I would use the opportunity to discuss some best practices and general considerations regarding backing up MySQL instances. While I focus on MySQL, several of these tips apply to backups in general.

Backup your data

Before heading into the gory details, let’s first take a look at the best practices at a high level:

  • Make sure you can restore your backups:
    • Document and script the restore procedures. Do you know the steps required to restore a full backup – or a single table?
    • Keep copies of the backups off-site. Do you have a copy of your backup if the data center becomes unavailable for example due to a fire?
    • Validate your backups. Does your backup method work with the features you use? Are you writing to a disk which is failing?
  • Monitor the backups. Do you know when a backup failed? How long do the backups take?
  • Use a backup method appropriate for your system and your requirements.
  • Never stop considering your backup strategy. The World changes, and so do your backup requirements.

Information

No two systems are identical. Make sure you consider your specific requirements when you design your backup procedures.

The rest of this blog will discuss all of these things in more detail.

Make Sure You Can Restore Your Backups

It may seem obvious, but one of the more common issues I see is that backups exist, but the steps to restore them are not known. Or even worse, the backups cannot be restored at all as they are broken.

Advice

Whatever you do regarding backups, the single most important thing is that you verify you know how to restore the backups and that the restored instance is valid.

There are several things to consider: how to restore a backup, do you have access to the backup, and is the backup valid?

The Restore Procedure

If you do not know how to restore your backups, then the day you do need to restore one, a relatively standard operation can turn into a major crisis, with the manager breathing down your neck.

So, make sure you practice the steps to restore your backups for all of the scenarios you can think of, for example:

  • a plain full restore of the backup
  • a point-in-time recovery, that is: restore the backup itself and apply binary logs up to a given point in time
  • a partial restore, for example to restore a single table or schema from a full backup

There are more possible scenarios. Take some time to consider which are important for your databases, and regularly practice doing those kinds of restores.

When you practice a restore, document all the steps you take in detail and keep them in a place where they can easily be found again – for example in a knowledge base. Even better, script the restore; that both documents how the restore should be done and automates the steps, ensuring each restore is done the same way. If you need to restore a backup in the middle of a crisis, having all the steps scripted and documented not only helps you remember what to do, but also reduces the chance that something goes wrong.
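As a concrete illustration of the point-in-time scenario: after restoring the full backup, the binary logs are replayed up to just before the incident. A sketch where the file names, time, and credentials are placeholders:

# apply binary logs up to (but not including) the bad statement
mysqlbinlog --stop-datetime="2019-04-01 11:59:59" \
    mysql-bin.000001 mysql-bin.000002 | mysql -u root -p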

Related to this discussion is that you should copy the backups to remote storage.

Copy the Backups Off Site

In the previous section, it was discussed how you need to consider all your restore scenarios. That should include the case where the whole server or even whole data center is gone. What do you do in that case?

From XKCD comic 1718.

Other than the need to provision a new MySQL instance somewhere else, you also need to still have access to your backups. This means that a backup that is only stored locally (either on the same host or in the same data center) is of no use for this case.

Advice

Always copy your backups to remote storage. How remote depends on your needs.

When you decide where to store your backups, you need to consider your requirements. How long is it acceptable to wait to download the backup during a recovery, and what kind of disasters (power outage, fire, earthquake, meteor strike, etc.) must the backup be able to survive? You can choose to have backups available on the local host and/or data center, so they are quickly available, for example in case a user deletes the wrong data. Then have another storage location, either at the other end of the country or even on another continent, to protect against a major disaster.

Advice

Replication is also a great way to export your data to another location. Note though that unless the replication is delayed, the replica will not help you recover from bad queries.

Of course, even having the best written instructions in the World and copies of the backups on all continents does not help you if the backup is corrupted or broken.

Verify Your Backups

A backup is only as good as your ability to restore it and bring the restored instance online. This is the reason it is so important to test your restore procedures, as discussed above. Optimally, you should restore every single backup. In the real world that is not always realistic, but it is still important that you practice a restore from time to time.

Advice

The ultimate validation of your backups is to restore them, bring the restored instance online, and verify the data is correct.

In practice it may not be possible to restore every single backup in all the restore combinations. So, you will need to add some other checks. The exact checks you should do depend on your backups, but some possibilities are:

  • MySQL Enterprise Backup (MEB) has a validate command. This will verify the InnoDB checksums for each page. This checks whether the backup is truncated, corrupted, or damaged.
  • MySQL Enterprise Backup can store the result of the backup in the mysql.backup_history table (enabled by default). This includes the overall backup status.
  • Verify the backup is created and has a minimum size.
  • If you have a logical backup, grep for some strings you know should be in the backup, such as CREATE TABLE statements (see the sketch after this list).
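For the last two checks, a small shell sketch (the file name and the size threshold are arbitrary examples):

# fail loudly if the dump is missing, suspiciously small, or has no table definitions
test -s backup.sql || echo "backup.sql missing or empty"
test "$(stat -c%s backup.sql)" -gt 1048576 || echo "backup.sql smaller than 1 MB"
grep -q 'CREATE TABLE' backup.sql || echo "no CREATE TABLE statements in backup.sql"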

The validation of your backups is of course only useful if you realize when the validation fails, so you also need to monitor the backups.

Monitor the Backups

Monitoring is one of the most important tasks for a database administrator. That also includes monitoring the backups. Once you have verification of the backups in place, you need to ensure the validation status is monitored.

How you do this depends on the monitoring solution you use. In MySQL Enterprise Monitor (MEM) there is a built-in backup dashboard with information about your MySQL Enterprise Backup (MEB) backups; this information is based on the data logged by MySQL Enterprise Backup to the mysql.backup_history table and includes the type of backup, the backup status, how long the backup took, how long locks were held, etc. MySQL Enterprise Monitor also creates events when backups fail.

The MySQL Enterprise Monitor (MEM) dashboard for backups.

So far, all the advice has focused on what you should do with a backup after it has been created. What about creating the backups?

Creating Backups

When you decide how you want to create the backup, there are many considerations to take into account. This section will cover some of them.

Information

Remember that all systems are unique. What is the best choice for one system may not be the best for another.

First of all, you need to determine what you need from your backups and how much interruption of your production system is allowed when creating them. Some of the things to consider are:

  • How much data can you afford to lose in case of a catastrophic disaster?
  • How long is it acceptable for restoring the backup to take?
  • What data must be included in the backup?
  • Which other files (for example binary logs and configuration files) must be included?
  • Do you need to be able to do a point-in-time recovery?
  • Can the production system be taken offline or into read-only mode during the backup? If so, for how long?

Advice

Since MySQL 5.6, mysqlbinlog has been able to stream binary logs to disk giving you a near real-time backup of the binary logs that can be used for a point-in-time recovery. The binary log is enabled by default in MySQL 8.0.
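A sketch of that streaming setup (the host and user are placeholders; the options are standard mysqlbinlog ones):

mysqlbinlog --read-from-remote-server --host=source_host --user=backup_user -p \
    --raw --stop-never mysql-bin.000001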

Answering these questions helps you determine the backup method that is optimal for your system. Some of the backup methods available are:

  • Logical Backups:
    • mysqlpump: This is available in MySQL 5.7 and later and allows for parallel backups. In most cases other than for MySQL NDB Cluster, it is preferred over mysqldump.
    • mysqldump: This is the classical program to create logical backups in MySQL.
    • Native NDB Backups: This is a bit of a hybrid between a logical backup and a raw backup, specialized for the NDBCluster storage engine. It uses a native storage format but can be converted to CSV files.
  • Binary (Raw) Backups: these copy the actual data files, for example with MySQL Enterprise Backup (MEB) or file system snapshots.

Whichever method you choose, make sure you understand its limitations. As an example, file system snapshots can work great in many cases, but if MySQL uses more than one file system for the database files, it may not be possible to create a consistent snapshot (FLUSH TABLES WITH READ LOCK does not stop background writes for InnoDB, except for tables that have been explicitly listed).

Advice

Always do your testing with as much write activity as you can have in a worst-case scenario. Backing up an idle instance will not reveal limitations related to writes happening during the backup.

You also need to take the overhead of the backup method into consideration. At the very least it will impact MySQL by reading the data, and there will also be some locking involved, even if in some cases it is very limited. In all cases, creating the backup at the quietest time of the day can help reduce the impact. Another option is to use a replica for the backups, but even then the overhead must be considered, as the replica needs to be able to keep up, or catch up before the next backup.

Advice

If you create the backup from a dedicated replica, you create the backup without impacting the users. In this case, you should also monitor the replica and ensure it can keep up with the production source, so you always create up-to-date backups.
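A quick way to keep an eye on that is the replica's own status output (the 5.7-style command; the interesting field is Seconds_Behind_Master, which should stay near zero):

mysql> SHOW SLAVE STATUS\G
...
        Seconds_Behind_Master: 0
...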

Now you have considered how to create the backups, validated them, copied them to secure off-site locations, and tested all the possible restore scenarios. So you are all set and can put backups on auto-pilot? Not so fast.

Backups Are a Never Ending Process

The World is not a static place, and neither are your MySQL instances. The configuration changes, applications add new features, requirements change, the amount of data grows, new MySQL instances are installed on new hardware, in different cloud shapes, or with a different cloud provider, there are updates to MySQL and the backup program, and so on.

This means that the process of working with the backup and restore processes never ends. You need to regularly evaluate whether your backup strategy still works and fulfills all requirements. Look at the bright side: you keep learning and the experience you have gathered since the last evaluation may help you implement an even better backup solution.

Happy World Backup Day.

On References to MariaDB and MariaDB Bugs (MDEVs) in MySQL Bug Reports

Recently I noted that some of my comments to public MySQL bug reports got hidden by somebody from Oracle with privileges to do so. I was not able to find out who did that and when, as this information is not communicated to bug subscribers (this may change if my feature requests, Bug #94807 - "Subscriber should be notified when comment is made private", is eventually implemented).

When it happened for the first time, I thought it was probably non-intentional. When it happened for a second time, I complained with a tweet that got a few likes and zero comments. Recently this happened again, and yet another tweet did not get much attention, but at least I got a comment via Bug #94797 that my comment there (where I checked the test case on a MariaDB version I had at hand to find out it's not affected, something I often do for bugs mentioned in my blog posts here) was hidden as irrelevant and "an attempt to advertise MariaDB".

Snow hides everything, good and bad: dog shit, holes in the road, and autumn flowers... Do we really want information provided in comments to public MySQL bugs to get hidden just because someone once decided it's "bad"? I really wonder if any of my readers think that I advertise MariaDB with my public posts or public comments anywhere, or specifically in MySQL bug reports?

I'd also like to share here, where no one besides me can hide or delete comments (I hope), what was hidden in the case that caused me to tweet about the censorship I have to deal with. In Bug #94610 - "Server stalls because ALTER TABLE on partitioned table holds dict mutex", which ended up as "Not a Bug" (not even a duplicate of the verified Bug #83435 - "ALTER TABLE is very slow when using PARTITIONED table" that it referred to and extended, with a global mutex usage highlighted and the impact explained), I had added the following comment:
"[12 Mar 7:30] Valeriy Kravchuk
Not only it stalls, but if it stalls for long enough time it will crash :)

Useful related reading is here: https://jira.mariadb.org/browse/MDEV-15641" The comment was hidden very soon. Now, if you check that link, you'll see confirmed, unresolved MariaDB bug report. I mostly had this comment to the MDEV-15641 in mind, were my colleague and well known InnoDB developer Marko Mäkelä stated:
"The row_log_table_apply() is actually invoked while holding both dict_sys->mutex and dict_operation_lock. If there is a lot of log to apply, this may actually cause InnoDB to crash."I may be mistaking in linking these two bug reports together, but isn't highlighting the possibility of crash due to long semaphore wait important to understand the impact of the bug report and triage it properly? What wrong MySQL users and bug report readers may see if they follow the link to MariaDB bug I considered relevant? What was advertised by this comment that is harmful or useless for MySQL Community?

I was even more surprised by these recent actions on my comments because in the past I had never noticed a similar approach. Check the following bug reports, for example (I searched for those with "MDEV" and "Kravchuk" in them to get these):
  • Bug #80919 - "MySQL Crashes when Droping Indexes - Long semaphore wait". In this bug report (real bug fixed in 5.7.22) I've added a comment that refers to MDEV-14637. The comment still remains public and, IMHO, is still useful. Providing this link helped to get proper attention to the bug, so it was re-opened and got comments from Oracle engineers finally. Was it an attempt to advertise MariaDB? How this case is different from my comment in Bug #94610 quoted above? 
  • Bug #84185 - "Not all "Statements writing to a table with an auto-increment..." are unsafe". I reported this "upstream" MySQL bug based on MDEV-10170 - "Misleading "Statements writing to a table with an auto-increment column after selecting from another table are unsafe" on DELETE ... SELECT", previously found by my colleague Hartmut Holzgraefe. I've also added a link to the "upstream" MySQL bug report to that MDEV. Does anybody in the MySQL or MariaDB user communities think that such cross-references are useless, harmful, or may be considered an "attempt to advertise a competitor" if one of the vendors fixes the bug first?
  • Bug #48392 - "mysql_upgrade improperly escapes passwords with single quotes". I verified this bug in 2009 while working for MySQL at Sun, and it still remains "Verified" (I have not re-checked whether it's still repeatable with current MySQL versions). Then in 2013 a community user added a comment referring to the MariaDB bug, MDEV-4664 - "mysql_upgrade crashes if root's password contains an apostrophe/single quotation mark", which was fixed later, in 2015. This comment still remains public and is useful!
So, have my comments that mention MDEVs or MariaDB in general become so irrelevant and MariaDB-advertising recently, compared to the previous ones? What exact community standards or rules do they break? Is it now forbidden for any user of the MySQL bugs database to mention MariaDB or bugs in it, or to use MariaDB in tests to make a point and share the results publicly in the MySQL bugs database? Or is the problem with me personally doing this?

I'd be happy to read explanations or opinions from MySQL community users and my former Oracle colleagues in comments to this blog post.

dbdeployer cookbook - usability by example

When I designed dbdeployer, I wanted to eliminate most of the issues that the old MySQL-Sandbox had:

  • dependencies during installation
  • mistaken tarballs
  • clarity of syntax
  • features (un)awareness.


Dependencies during installation did go away right from the start, as the dbdeployer executable is ready to be used without additional components. The only dependency is to have a host that can run MySQL. There is little dbdeployer can do about detecting whether or not your system can run MySQL: it depends on which version and flavor of MySQL you are running. It should not be a big deal, as I assume that anyone in need of dbdeployer already has the necessary knowledge about MySQL prerequisites. This is not always the case, but the issue goes beyond dbdeployer’s scope.


Mistaken tarballs are a common problem for users who have never seen a binaries tarball. Here dbdeployer can help, up to a point, by guiding the user. It recognizes most cases where you are trying to use a source tarball or a tarball for the wrong operating system. It still does not recognize when you try to run a sandbox for 64-bit Linux out of a 32-bit tarball, but to be honest I haven’t tried to solve this problem yet. There are still cases where users are at great risk of picking the wrong tarball (Percona Server’s download page is a minefield, and the one for MariaDB is no picnic either), but I feel that I have given dbdeployer users a big help on this count.


Clarity of syntax is probably the biggest issue with the previous tool. It’s a consequence of the tool being developed over several years, slowly adapting to changing circumstances. I made dbdeployer clearer from the beginning, when I decided to avoid piling up many actions as the consequence of the same command. In dbdeployer, you need to unpack the tarball explicitly before running a deployment, and this gives dbdeployer users the visibility on the architecture that eluded many MySQL-Sandbox users. The architecture of dbdeployer is such that adding new features, commands, and options is easy and fits within an easily detectable paradigm. Thus, the operations are easier to spot and use.


Features awareness is still a problem. There is a lengthy description of everything dbdeployer can do, but, as often happens with even the best tools, users don’t read manuals.

The biggest obstacle to reading the manual is that the dbdeployer executable is installed without any need to take the README file along. If you need it, you have to go online and read it; and given that dbdeployer is built to be used mostly without an internet connection, there are cases when you want to know how to do something but can’t get the manual right away.

There is the tool’s integrated help (dbdeployer [command] -h), which gives you a lot of information, but it tells you how to do something that you already know exists, not what you can do in general.

To help on this count, I added a collection of samples (the cookbook) that was initially released in a directory of the GitHub project, but then the sample scripts suffered from the same invisibility that plagues the README file. There was one more problem: when you tried using the generic cookbook scripts (now removed), you had to use the same environment as I did when defining them, or they would fail.

The current solution is to include cookbook files right within the tool, using templates (same as for regular sandbox scripts) with the double advantage that the scripts are available anywhere the dbdeployer executable is, and the scripts are adapted to the environment, since dbdeployer knows how to search for available binaries and can suggest the best parameters for the scripts.


Introducing dbdeployer cookbook

The command dbdeployer cookbook (with aliases recipes or samples) has three subcommands:

  • list displays a list of available samples.
  • show displays the contents of a recipe.
  • create (with alias make) will build the recipe script.

We should try the list first:

$ dbdeployer cookbook list
.----------------------------------.-------------------------------------.--------------------------------------------------------------------.--------.
| recipe                           | script name                         | description                                                        | needed |
|                                  |                                     |                                                                    | flavor |
+----------------------------------+-------------------------------------+--------------------------------------------------------------------+--------+
| all-masters                      | all-masters-deployment.sh           | Creation of an all-masters replication sandbox                     | mysql  |
| delete                           | delete-sandboxes.sh                 | Delete all deployed sandboxes                                      |        |
| fan-in                           | fan-in-deployment.sh                | Creation of a fan-in (many masters, one slave) replication sandbox | mysql  |
| group-multi                      | group-multi-primary-deployment.sh   | Creation of a multi-primary group replication sandbox              | mysql  |
| group-single                     | group-single-primary-deployment.sh  | Creation of a single-primary group replication sandbox             | mysql  |
| master-slave                     | master-slave-deployment.sh          | Creation of a master/slave replication sandbox                     |        |
| ndb                              | ndb-deployment.sh                   | Shows deployment with ndb                                          | ndb    |
| prerequisites                    | prerequisites.sh                    | Shows dbdeployer prerequisites and how to make them                |        |
| pxc                              | pxc-deployment.sh                   | Shows deployment with pxc                                          | pxc    |
| remote                           | remote.sh                           | Shows how to get a remote MySQL tarball                            |        |
| replication-restart              | repl-operations-restart.sh          | Show how to restart sandboxes with custom options                  |        |
| replication-operations           | repl-operations.sh                  | Show how to run operations in a replication sandbox                |        |
| replication_between_groups       | replication-between-groups.sh       | Shows how to run replication between two group replications        | mysql  |
| replication_between_master_slave | replication-between-master-slave.sh | Shows how to run replication between two master/slave replications |        |
| replication_between_ndb          | replication-between-ndb.sh          | Shows how to run replication between two NDB clusters              | ndb    |
| show                             | show-sandboxes.sh                   | Show deployed sandboxes                                            |        |
| single                           | single-deployment.sh                | Creation of a single sandbox                                       |        |
| single-reinstall                 | single-reinstall.sh                 | Re-installs a single sandbox                                       |        |
| tidb                             | tidb-deployment.sh                  | Shows deployment and some operations with TiDB                     | tidb   |
| upgrade                          | upgrade.sh                          | Shows a complete upgrade example from 5.5 to 8.0                   | mysql  |
'----------------------------------'-------------------------------------'--------------------------------------------------------------------'--------'

The recipe that seems to be the simplest one is single. We can try to see what is in there:


$ dbdeployer cookbook show single
#!/bin/bash

[...]
# Generated by dbdeployer 1.26.0 using template single on Fri Mar 29 12:27:53 UTC 2019
cd $(dirname $0)
source cookbook_include.sh

version=$1
[ -z "$version" ] && version=NOTFOUND_mysql
check_version $version

if [ -n "$(dbdeployer sandboxes | grep 'single\s*'$version)" ]
then
    echo "single version $version is already installed"
else
    header "Deploying a single sandbox for version $version"
    run dbdeployer deploy single $version
fi

What looks odd is the line that says NOTFOUND. If we try creating that script and then running it, it won’t work, and rightfully so.


There is a recipe named prerequisites that could probably help us.

$ dbdeployer cookbook create prerequisites
recipes/prerequisites.sh created

So, now, we have a starting point. Let’s run it:


$ ./recipes/prerequisites.sh

# ****************************************************************************
# Creating Sandbox binary directory ($HOME/opt/mysql)
# ****************************************************************************

## HOW TO GET binaries for dbdeployer
# FOR REGULAR MYSQL
# run the commands:

1. dbdeployer remote list
2. dbdeployer remote get mysql-5.7.25
3. dbdeployer unpack mysql-5.7.25.tar.xz

4. dbdeployer versions

# FOR MySQL forks, MySQL Cluster, PXC:

# 1. Get the binaries from the maker download pages
# 2. run the command
dbdeployer unpack FlavorName-X.X.XX-OS.tar.gz --prefix=FlavorName
3. dbdeployer versions

The first thing we see is that the sandbox binary directory was created, and then we see a series of steps to fill it in.

Let’s try:

$ dbdeployer remote list
Files available in https://raw.githubusercontent.com/datacharmer/mysql-docker-minimal/master/dbdata/available.json
5.5 -> [mysql-5.5.61 mysql-5.5.62]
5.6 -> [mysql-5.6.41 mysql-5.6.43]
5.7 -> [mysql-5.7.24 mysql-5.7.25]
8.0 -> [mysql-8.0.13 mysql-8.0.15]
4.1 -> [mysql-4.1.22]
5.0 -> [mysql-5.0.15 mysql-5.0.96]
5.1 -> [mysql-5.1.72]

$ dbdeployer remote get mysql-5.7.25
File /home/msandbox/mysql-5.7.25.tar.xz downloaded

$ dbdeployer unpack mysql-5.7.25.tar.xz
Unpacking tarball mysql-5.7.25.tar.xz to $HOME/opt/mysql/5.7.25
[...]
Renaming directory $HOME/opt/mysql/mysql-5.7.25 to $HOME/opt/mysql/5.7.25

$ dbdeployer versions
Basedir: /home/msandbox/opt/mysql
5.7.25

If we repeat the show command now, we get a different result:


$ dbdeployer cookbook show single
#!/bin/bash
[...]
# Generated by dbdeployer 1.26.0 using template single on Fri Mar 29 12:37:26 UTC 2019
cd $(dirname $0)
source cookbook_include.sh

version=$1
[ -z "$version" ] && version=5.7.25
check_version $version

if [ -n "$(dbdeployer sandboxes | grep 'single\s*'$version)" ]
then
    echo "single version $version is already installed"
else
    header "Deploying a single sandbox for version $version"
    run dbdeployer deploy single $version
fi

There! Instead of the NOTFOUND we saw before, it now shows the version that we just downloaded. If we repeat the same procedure (remote list, remote get, unpack) for MySQL 8.0.15, we would see 8.0.15 as the recommended version.
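Following the same commands as above, that repetition would look something like this:

$ dbdeployer remote get mysql-8.0.15
$ dbdeployer unpack mysql-8.0.15.tar.xz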


Now we can create the single recipe. Or even better, since we want to try several of them, we can create all of them.


$ dbdeployer cookbook create all
recipes/replication-between-master-slave.sh created
recipes/single-reinstall.sh created
recipes/fan-in-deployment.sh created
recipes/group-multi-primary-deployment.sh created
recipes/repl-operations.sh created
recipes/tidb-deployment.sh created
recipes/remote.sh created
recipes/upgrade.sh created
recipes/ndb-deployment.sh created
recipes/cookbook_include.sh created
recipes/master-slave-deployment.sh created
recipes/prerequisites.sh created
recipes/replication-between-groups.sh created
recipes/replication-between-ndb.sh created
recipes/pxc-deployment.sh created
recipes/single-deployment.sh created
recipes/show-sandboxes.sh created
recipes/delete-sandboxes.sh created
recipes/all-masters-deployment.sh created
recipes/group-single-primary-deployment.sh created
recipes/repl-operations-restart.sh created

Now it’s time to try one:


msandbox@505969e46289:~$ ./recipes/single-deployment.sh

# ****************************************************************************
# Deploying a single sandbox for version 5.7.25
# ****************************************************************************
+ dbdeployer deploy single 5.7.25
Creating directory /home/msandbox/sandboxes
Database installed in $HOME/sandboxes/msb_5_7_25
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0

Looking at the list, we see a single-reinstall recipe. If we run it, we will get a mini tutorial on how to use a single sandbox:


msandbox@505969e46289:~$ ./recipes/show-sandboxes.sh
+ dbdeployer sandboxes --full-info
.------------.--------.---------.---------.--------.-------.--------.
| name       | type   | version | ports   | flavor | nodes | locked |
+------------+--------+---------+---------+--------+-------+--------+
| msb_5_7_25 | single | 5.7.25  | [5725 ] | mysql  | 0     |        |
'------------'--------'---------'---------'--------'-------'--------'
0
msandbox@505969e46289:~$ ./recipes/single-reinstall.sh

# ****************************************************************************
# Deploying the same sandbox again, with different parameters
# We need to use --force, as we are overwriting an existing sandbox
# Incidentally, the new deployment will run a query before and after the grants
# ****************************************************************************
+ dbdeployer deploy single 5.7.25 '--pre-grants-sql=select host, user from mysql.user' '--post-grants-sql=select host, user from mysql.user' --force
Overwriting directory $HOME/sandboxes/msb_5_7_25
stop $HOME/sandboxes/msb_5_7_25
Database installed in $HOME/sandboxes/msb_5_7_25
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
+-----------+---------------+
| host | user |
+-----------+---------------+
| localhost | mysql.session |
| localhost | mysql.sys |
| localhost | root |
+-----------+---------------+
+-----------+---------------+
| host | user |
+-----------+---------------+
| 127.% | msandbox |
| 127.% | msandbox_ro |
| 127.% | msandbox_rw |
| 127.% | rsandbox |
| localhost | msandbox |
| localhost | msandbox_ro |
| localhost | msandbox_rw |
| localhost | mysql.session |
| localhost | mysql.sys |
| localhost | root |
+-----------+---------------+

# ****************************************************************************
# Deploying the same sandbox with a different directory.
# No --force is necessary, as dbdeployer will choose a different port
# ****************************************************************************
+ dbdeployer deploy single 5.7.25 --sandbox-directory=msb_5_7_25_new
Database installed in $HOME/sandboxes/msb_5_7_25_new
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0

+ dbdeployer sandboxes --header
name                         type     version   ports
---------------------------- -------- --------- ---------
msb_5_7_25                 : single   5.7.25    [5725 ]
msb_5_7_25_new             : single   5.7.25    [5726 ]
0

# ****************************************************************************
# Removing the second sandbox
# ****************************************************************************
+ dbdeployer delete msb_5_7_25_new
List of deployed sandboxes:
$HOME/sandboxes/msb_5_7_25_new
Running $HOME/sandboxes/msb_5_7_25_new/stop
stop $HOME/sandboxes/msb_5_7_25_new
Running rm -rf $HOME/sandboxes/msb_5_7_25_new
Directory $HOME/sandboxes/msb_5_7_25_new deleted
0

This script teaches us the basics of starting and restarting a sandbox, with useful twists such as running an SQL command before granting privileges.


There is a similar tutorial for replication operations, but we’ll have a look at a slightly different one.


$ ./recipes/master-slave-deployment.sh
+ dbdeployer deploy replication 5.7.25 --concurrent
$HOME/sandboxes/rsandbox_5_7_25/initialize_slaves
initializing slave 1
initializing slave 2
Replication directory installed in $HOME/sandboxes/rsandbox_5_7_25
run 'dbdeployer usage multiple' for basic instructions'

$ ./recipes/repl-operations.sh

# ****************************************************************************
# Running a simple command with the master in the sandbox.
# Notice the usage of the '-e', as if we were using the 'mysql' client
# ****************************************************************************
+ $HOME/sandboxes/rsandbox_5_7_25/m -e 'SHOW MASTER STATUS'
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |     4089 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+

# ****************************************************************************
# Creating a table in the master
# ****************************************************************************
+ $HOME/sandboxes/rsandbox_5_7_25/m -e 'DROP TABLE IF EXISTS test.t1'
+ $HOME/sandboxes/rsandbox_5_7_25/m -e 'CREATE TABLE test.t1(id int not null primary key)'

# ****************************************************************************
# Inserting 3 lines into the new table
# ****************************************************************************
+ $HOME/sandboxes/rsandbox_5_7_25/m -e 'INSERT INTO test.t1 VALUES(1)'
+ $HOME/sandboxes/rsandbox_5_7_25/m -e 'INSERT INTO test.t1 VALUES(2)'
+ $HOME/sandboxes/rsandbox_5_7_25/m -e 'INSERT INTO test.t1 VALUES(3)'

# ****************************************************************************
# Getting the table contents from one slave
# ****************************************************************************
+ $HOME/sandboxes/rsandbox_5_7_25/s1 -e 'SELECT * FROM test.t1'
+----+
| id |
+----+
|  1 |
|  2 |
|  3 |
+----+

# ****************************************************************************
# Getting the table count from all nodes (NOTE: no '-e' is needed)
# ****************************************************************************
+ $HOME/sandboxes/rsandbox_5_7_25/use_all 'SELECT COUNT(*) FROM test.t1'
# master
COUNT(*)
3
# server: 1
COUNT(*)
3
# server: 2
COUNT(*)
3

# ****************************************************************************
# Checking the status of all slaves
# ****************************************************************************
+ $HOME/sandboxes/rsandbox_5_7_25/check_slaves
master
port 19226 - server_id 100
File: mysql-bin.000001
Position: 5213
Executed_Gtid_Set:
slave1
port 19227 - server_id 200
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 5213
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 5213
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
slave2
port 19228 - server_id 300
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 5213
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 5213
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
0

# ****************************************************************************
# Running a multiple query in all slaves
# ****************************************************************************
+ $HOME/sandboxes/rsandbox_5_7_25/use_all_slaves 'STOP SLAVE; SET GLOBAL slave_parallel_workers=3; START SLAVE;show processlist '
# server: 1
Id User Host db Command Time State Info
11 msandbox localhost NULL Query 0 starting show processlist
12 system user NULL Connect 0 Checking master version NULL
13 system user NULL Connect 0 System lock NULL
14 system user NULL Connect 0 Waiting for an event from Coordinator NULL
15 system user NULL Connect 0 Waiting for an event from Coordinator NULL
16 system user NULL Connect 0 Waiting for an event from Coordinator NULL
# server: 2
Id User Host db Command Time State Info
10 msandbox localhost NULL Query 0 starting show processlist
11 system user NULL Connect 0 Checking master version NULL
12 system user NULL Connect 0 System lock NULL
13 system user NULL Connect 0 Waiting for an event from Coordinator NULL
14 system user NULL Connect 0 Waiting for an event from Coordinator NULL
15 system user NULL Connect 0 Waiting for an event from Coordinator NULL

By studying the commands mentioned in these samples, you will become proficient in dbdeployer components, allowing you to use it for advanced testing operations.

Replicating data between two MySQL Group Replication sets using “regular” asynchronous replication with Global Transaction Identifiers (GTID’s)

MySQL introduced Group Replication (GR) in version 5.7, and GR is part of the InnoDB Cluster high-availability solution. InnoDB Cluster consists of Group Replication, MySQL Shell and MySQL Router.

I am not going to explain InnoDB Cluster or Group Replication in this post. So, if you aren’t familiar with either one, I have some previous posts in which I have explained how to work with both. See:

MySQL 8.0 Group Replication – Three-server installation

MySQL 8.0 InnoDB Cluster – Creating a sandbox and testing MySQL Shell, Router and Group Replication

Adding a replicated MySQL database instance using a Group Replication server as the source

A customer wanted to know how to replicate data between two separate three-node Group Replication groups. They have two data centers in different states, and they want to use each GR group for a different application. And, they wanted to replicate the data between the two (for various reasons).

For Group Replication (GR), you must have a minimum of three nodes in the group, and you can have a maximum of nine nodes. The GR group can be either in single-primary or multiple-primary mode. A five-node Group Replication Group looks like this:

For this (fictional) example, I will be using two three-node Group Replication groups. One group is in Atlanta, Georgia, and the other is in Tampa, Florida. Besides the standard group replication requirements, there are a couple of items you need to take into consideration if you decide to try this. First, this will probably not work very well if you have a very write-heavy application. Group Replication does support multi-threading, but the replicated databases may not always be up-to-date with the source database. Also, please note that you will need a very good network connection between the two groups. And, if one GR group completely fails, there is a good chance some of the data might not have had time to replicate over to the other group. Just remember – your replication performance may vary. For this post, I am using six virtual machines on a single server.

The topology so far…

To replicate the data between the two groups, you only need to use “regular” MySQL asynchronous replication with Global Transaction Identifiers (GTIDs).

Note: To learn more about replicating with GTIDs, I wrote two posts explaining the “how to” – see part one and part two. GTIDs are required when using Group Replication.
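For reference, GTID mode has to be enabled on every server involved; since Group Replication already requires it, lines equivalent to these should already be in each server’s my.cnf:

[mysqld]
gtid_mode = ON
enforce_gtid_consistency = ON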

For the Atlanta group, I have three instances of MySQL (version 8.0.15) with IP addresses 192.168.1.151, 192.168.1.152 and 192.168.1.153. For the Tampa group, I have three instances of MySQL (version 8.0.15) with IP addresses 192.168.1.161, 192.168.1.162 and 192.168.1.163. The groups look like this:

With our two three-node groups, I will use the primary-write node in the first group to be the source (master) database for the primary-write node (replica/slave) in the second group – and vice-versa.

In the Atlanta group, I will use the primary-write node with the IP address of 192.168.1.151 (MEMBER_HOST MacVM151.local) to be the source (master) database for the primary-write node in the Tampa group (which will be the replica/slave with IP address 192.168.1.161).

In the Tampa group, I will use the primary-write node with the IP address of 192.168.1.161 (MEMBER_HOST MacVM161.local) to be the source (master) database for the primary-write node in the Atlanta group (which will be the replica/slave with IP address 192.168.1.151).

The other nodes in both groups are set to SUPER READ-ONLY, so they can’t be used as replicas (slaves), as a replica (slave) needs to be able to perform writes. The topology will look something like this – with the arrows showing the way the data is replicated (or “flows”):

If that is too confusing, here is a simplified layout: (ATL = Atlanta, TPA = Tampa)

The members of each group

In the Atlanta Group, I have the following members in the Group Replication group:

mysql> SELECT * FROM performance_schema.replication_group_members order by member_host;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| group_replication_applier | c727957e-4cb6-11e9-abd5-f80a484a9c32 | MacVM151.local |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | 247898e0-4cb7-11e9-97a9-12f28adcadd1 | MacVM152.local |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 49e388f6-4cb7-11e9-8ccf-a355c1c8a6e8 | MacVM153.local |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

In this group, I want the first node in the list (MEMBER_HOST MacVM151.local) to be the primary, but for some reason, the second node (MacVM152.local) is the primary. To change the PRIMARY to be the first server in the list, I can issue this command:

SELECT group_replication_set_as_primary('member_uuid'); (where member_uuid is equal to the MEMBER_ID of the first node, 'c727957e-4cb6-11e9-abd5-f80a484a9c32', from the above output.)

mysql> SELECT group_replication_set_as_primary('c727957e-4cb6-11e9-abd5-f80a484a9c32');
+--------------------------------------------------------------------------+
| group_replication_set_as_primary('c727957e-4cb6-11e9-abd5-f80a484a9c32') |
+--------------------------------------------------------------------------+
| Primary server switched to: c727957e-4cb6-11e9-abd5-f80a484a9c32         |
+--------------------------------------------------------------------------+
1 row in set (0.02 sec)

I can verify this change by checking the members again (notice the MEMBER_ROLE column):

mysql> SELECT * FROM performance_schema.replication_group_members order by member_host;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| group_replication_applier | c727957e-4cb6-11e9-abd5-f80a484a9c32 | MacVM151.local |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 247898e0-4cb7-11e9-97a9-12f28adcadd1 | MacVM152.local |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | 49e388f6-4cb7-11e9-8ccf-a355c1c8a6e8 | MacVM153.local |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

The Tampa Group is ready to go – as 192.168.1.161 is the PRIMARY server:

Tampa Group:

mysql> SELECT * FROM performance_schema.replication_group_members order by member_host;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| group_replication_applier | fdc89b12-50d7-11e9-bfa8-012cdcc95c70 | MacVM161.local |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 1ef93b16-50d8-11e9-b7da-7e47ebc51826 | MacVM162.local |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | 46012962-50d8-11e9-8dc0-de7edddaaccd | MacVM163.local |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

(Note: Yes, I could have simply used the second node, with its IP of 192.168.1.152, as the PRIMARY, but I wanted to use the first node in each group.)

Checking the state of each group

I am starting with a clean install of MySQL on all six instances. You can still set up this topology if your groups already contain data, but both groups will need to begin with the same set of data. When you start replication, it won't automatically export/backup the data and restore/import it to the other group. You will have to back up the data from one group and restore it to the other, so the new replicated server has a starting point at a particular GTID. (I explain how to do this in this post; search for "What if my Group Replication (GR) group already has data?".)
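As a rough sketch of that backup-and-restore step (this is my illustration, assuming InnoDB tables and the mysqldump client; the dump file name is just an example), you could dump the primary of the group that has the data, with the GTID information included, and load it on the other group's primary before that group starts taking writes:

# On the primary-write server of the group that has the data:
mysqldump --all-databases --triggers --routines --events \
  --single-transaction --set-gtid-purged=ON > group_backup.sql

# On the other group's primary-write server:
mysql> RESET MASTER;  -- clears gtid_executed so the dump's GTID_PURGED can be applied
# mysql < group_backup.sql

The --set-gtid-purged=ON option writes a SET @@GLOBAL.GTID_PURGED statement into the dump, which gives the restored server its GTID starting point.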

Looking at the Executed_Gtid_Set from a SHOW MASTER STATUS\G on each group, I can see that I have executed seven transactions on the Atlanta group and three on the Tampa group (these transactions were from creating and modifying the group replication views):

Atlanta Group

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 2217
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7
1 row in set (0.00 sec)

Tampa Group

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 1515
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-3
1 row in set (0.00 sec)

But, if you want to inspect the transactions for each group (they are stored in the binary log mysql-bin.000001 on each server) to verify that none of them changed any data, you can use the mysqlbinlog tool:

# mysqlbinlog mysql-bin.000001
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
....
SET TIMESTAMP=1553731529/*!*/;
COMMIT
....

I checked both binary logs, and all of the transactions had to do with changing the state of the group replication view. All of these transactions were SET TIMESTAMP transactions (“SET TIMESTAMP=1553731529/*!*/;”). I am now ready to start replication.
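(A quick, informal way to double-check, rather than reading the whole decoded log by eye, is to filter the mysqlbinlog output for data-changing statements. Note that row-based events would need the --verbose option to be decoded into readable pseudo-SQL:)

# mysqlbinlog --verbose mysql-bin.000001 | grep -iE 'insert|update|delete'

If this returns nothing, the binary log contains no obvious data changes.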

Let’s get started

As shown in the above topology images, I am going to set up the Atlanta node (MacVM151.local, IP address 192.168.1.151) as a source (master) for the Tampa node (MacVM161.local, IP address 192.168.1.161). Since I already have a replication user created for Group Replication, I can use the same user for this "regular" asynchronous replication (a sketch for creating such a user follows below). If you want more details on asynchronous replication using GTIDs, see this post.
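In case you don't already have a replication user, here is a minimal sketch for creating one. The username and password match the ones used in the CHANGE MASTER TO statements below; depending on your authentication plugin and SSL setup, you may need additional options:

mysql> CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'R3plic4tion!';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'%';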

Since I already have a replication channel open for Group Replication, I will need to specify a new channel. I will name it "atl_tpa_replication" (for "Atlanta-to-Tampa replication"), meaning the Atlanta server will be the source (master) and the Tampa server will be the replica (slave). In replication, it is the job of the replica (slave) to contact the source (master) and retrieve all of the transactions that have not yet been applied to its own database. I will open a MySQL prompt on the Tampa instance with the IP address of 192.168.1.161 and issue this CHANGE MASTER TO statement:

On Tampa

CHANGE MASTER TO
  MASTER_HOST = '192.168.1.151',
  MASTER_PORT = 3306,
  MASTER_USER = 'rpl_user',
  MASTER_PASSWORD = 'R3plic4tion!',
  MASTER_AUTO_POSITION = 1
  FOR CHANNEL 'atl_tpa_replication';

mysql> CHANGE MASTER TO
    -> MASTER_HOST = '192.168.1.151',
    -> MASTER_PORT = 3306,
    -> MASTER_USER = 'rpl_user',
    -> MASTER_PASSWORD = 'R3plic4tion!',
    -> MASTER_AUTO_POSITION = 1
    -> FOR CHANNEL 'atl_tpa_replication';
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> show warnings\G
*************************** 1. row ***************************
  Level: Note
   Code: 1759
Message: Sending passwords in plain text without SSL/TLS is extremely insecure.
*************************** 2. row ***************************
  Level: Note
   Code: 1760
Message: Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2 rows in set (0.00 sec)

I can now start the slave for the “atl_tpa_replication” replication channel, and take a look at the “slave status”:

mysql> start slave for channel 'atl_tpa_replication';
Query OK, 0 rows affected (0.01 sec)

mysql> show slave status for channel 'ATL_TPA_REPLICATION'\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.151
                  Master_User: rpl_user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 893
               Relay_Log_File: MacVM161-relay-bin-atl_tpa_replication.000004
                Relay_Log_Pos: 1099
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 893
              Relay_Log_Space: 1919
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 151
                  Master_UUID: c727957e-4cb6-11e9-abd5-f80a484a9c32
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7
            Executed_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-3,
8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7
                Auto_Position: 1
         Replicate_Rewrite_DB: 
                 Channel_Name: atl_tpa_replication
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0
1 row in set (0.00 sec)

On the Atlanta group, seven transactions had been applied (GTIDs 8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7), which I can confirm with a SHOW MASTER STATUS\G on the Atlanta primary-write server (192.168.1.151):

On Atlanta

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 2568
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7
1 row in set (0.00 sec)

I can see which GTID’s I have retrieved (Retrieved_Gtid_Set) and executed (Executed_Gtid_Set) from the SHOW SLAVE STATUS\G command, and these seven GTID’s (8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7) from the server are included:

mysql> show slave status for channel 'atl_tpa_replication'\G
...
           Retrieved_Gtid_Set: 8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7
            Executed_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-3,
8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7
...

Since I executed the SHOW SLAVE STATUS\G on the Tampa server, I can also see that the existing Tampa GTIDs (160f4761-c55c-422f-8684-d086f6a1db0e:1-3) fall under the Executed_Gtid_Set section.
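If you just want a server's GTID state without the full slave status output, you can also query it directly, and even test whether one GTID set is contained in another:

mysql> SELECT @@GLOBAL.gtid_executed;
mysql> SELECT GTID_SUBSET('8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7', @@GLOBAL.gtid_executed);

GTID_SUBSET returns 1 if every GTID in the first set has been executed on this server.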

Now I can turn on replication in the other direction, from Tampa to Atlanta, where the Tampa primary-write server is the source (master) and the Atlanta primary-write server is the replica (slave). After I run the CHANGE MASTER TO command, I will turn on the replica (slave) with the START SLAVE command, and then check on its status with SHOW SLAVE STATUS\G.

NOTE: I have changed the CHANNEL name to be tpa_atl_replication (Tampa to Atlanta replication)

On Atlanta

CHANGE MASTER TO
  MASTER_HOST = '192.168.1.161',
  MASTER_PORT = 3306,
  MASTER_USER = 'rpl_user',
  MASTER_PASSWORD = 'R3plic4tion!',
  MASTER_AUTO_POSITION = 1
  FOR CHANNEL 'tpa_atl_replication';

mysql> CHANGE MASTER TO
    -> MASTER_HOST = '192.168.1.161',
    -> MASTER_PORT = 3306,
    -> MASTER_USER = 'rpl_user',
    -> MASTER_PASSWORD = 'R3plic4tion!',
    -> MASTER_AUTO_POSITION = 1
    -> FOR CHANNEL 'tpa_atl_replication';
Query OK, 0 rows affected, 2 warnings (0.03 sec)

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 2568
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7
1 row in set (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.161
                  Master_User: rpl_user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 3581
               Relay_Log_File: ic1-relay_log-tpa_atl_replication.000002
                Relay_Log_Pos: 1413
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 3581
              Relay_Log_Space: 1631
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 161
                  Master_UUID: fdc89b12-50d7-11e9-bfa8-012cdcc95c70
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-3
            Executed_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-3,
8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7
                Auto_Position: 1
         Replicate_Rewrite_DB: 
                 Channel_Name: tpa_atl_replication
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0
1 row in set (0.00 sec)

From the above, I can now see the three GTIDs from the Tampa server (160f4761-c55c-422f-8684-d086f6a1db0e:1-3) have been replicated over to the Atlanta server, and the seven GTIDs (8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7) from Atlanta also appear in the Executed_Gtid_Set:

Retrieved_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-3
 Executed_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-3,
8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7

Finally, since I am running Group Replication, the transactions which were replicated between the primary-write servers will also be replicated to the other two servers in each group. I can execute SHOW SLAVE STATUS\G on a secondary server (IP 192.168.1.152) and see all of the GTIDs which have been replicated between the primary servers:

On secondary server (in Atlanta) with IP of 192.168.1.152:

mysql> show slave status\G
...
           Retrieved_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-3
            Executed_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-3,
8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7
...

Replicate new transactions

Since all of these servers were new installs of MySQL, each instance has only these four databases:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

I will now create a database on each primary, with the GR group's location name as the name of the database. These databases will then replicate over to the other group. You will also see the executed GTIDs on the source (master) increase from seven (8e2f4761-c55c-422f-8684-d086f6a1db0e:1-7) to eight (8e2f4761-c55c-422f-8684-d086f6a1db0e:1-8).

On Atlanta:

mysql> create database Atlanta;
Query OK, 1 row affected (0.01 sec)

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000003
         Position: 2097
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-3,
8e2f4761-c55c-422f-8684-d086f6a1db0e:1-8
1 row in set (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| Atlanta            |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

When I go to any of the group replication instances in Tampa, I can see the Atlanta database has already been replicated over to the group, and each node has executed this transaction. From node three – 192.168.1.163:

On Tampa

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| Atlanta            |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

And the same happens when I create a database named Tampa – it will get replicated over to the Atlanta Group Replication group:

On Tampa:

mysql> create database Tampa;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| Atlanta            |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| Tampa              |
+--------------------+
6 rows in set (0.00 sec)

On the Tampa Group Replication group, the GTIDs increased from three (160f4761-c55c-422f-8684-d086f6a1db0e:1-3) to four (160f4761-c55c-422f-8684-d086f6a1db0e:1-4). The other GTIDs (8e2f4761-c55c-422f-8684-d086f6a1db0e:1-8) are from the Atlanta Group Replication group. I can confirm this with a SHOW MASTER STATUS\G statement:

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 3957
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-4,
8e2f4761-c55c-422f-8684-d086f6a1db0e:1-8
1 row in set (0.00 sec)

Also, I can check to see if the “Tampa” database (which was GTID 160f4761-c55c-422f-8684-d086f6a1db0e:4) has been replicated over to the Atlanta group by running a SHOW SLAVE STATUS\G on any of the Atlanta nodes. I will use the second node (IP address 192.168.1.152):

On Atlanta

mysql> show slave status\G
...
           Retrieved_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:4
            Executed_Gtid_Set: 160f4761-c55c-422f-8684-d086f6a1db0e:1-4,
8e2f4761-c55c-422f-8684-d086f6a1db0e:1-8
...
1 row in set (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| Atlanta            |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| Tampa              |
+--------------------+
6 rows in set (0.00 sec)

I now have two different MySQL Group Replication groups replicating data between each other.
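To keep an eye on both replication directions at once, you can also check the channels in the performance_schema tables (a quick sketch; run it on either primary-write server):

mysql> SELECT CHANNEL_NAME, SERVICE_STATE
    -> FROM performance_schema.replication_connection_status;

This shows the group_replication_applier channel alongside the asynchronous channel, and SERVICE_STATE should be ON for a healthy connection.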

WARNING: This should only be done if you can ensure that neither group will modify the other group's data. Since replication is not instant, you could change a row on one group at the same time someone else changes the same row on the other group, and this conflict would break the replication between the two groups. Or, you could modify data that was just modified on the other group, which means you would be working with stale data.
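One possible safeguard (my illustration, not part of the original setup) is to give each group its own schema and filter the asynchronous channel so a group's own schema can never be overwritten from the other side. For example, on the Tampa replica:

mysql> STOP SLAVE FOR CHANNEL 'atl_tpa_replication';
mysql> CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB = (Tampa) FOR CHANNEL 'atl_tpa_replication';
mysql> START SLAVE FOR CHANNEL 'atl_tpa_replication';

Keep in mind that filters only reduce the risk; they do not resolve true write conflicts on shared data.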

Can I change the primary-write server in this scenario?

Since I have replication channels on both of the primary-write nodes, what happens if I want to change the primary node to another server? For example, 192.168.1.151 is the PRIMARY for the Atlanta Group Replication, but I want 192.168.1.152 to be the PRIMARY.

On Atlanta, here are the members of the group replication, with 192.168.1.151 (MacVM151.local) as the PRIMARY: (see the MEMBER_ROLE column)

mysql> SELECT * FROM performance_schema.replication_group_members order by member_host;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| group_replication_applier | c727957e-4cb6-11e9-abd5-f80a484a9c32 | MacVM151.local |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 247898e0-4cb7-11e9-97a9-12f28adcadd1 | MacVM152.local |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | 49e388f6-4cb7-11e9-8ccf-a355c1c8a6e8 | MacVM153.local |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

If I try to change the second node (MEMBER_ID 247898e0-4cb7-11e9-97a9-12f28adcadd1) to be the PRIMARY server, I will get an error:

From the current PRIMARY node (the first node in the group, 192.168.1.151), I will try to make the second node the PRIMARY:

mysql> SELECT group_replication_set_as_primary('247898e0-4cb7-11e9-97a9-12f28adcadd1');
ERROR 13223 (HY000): The function 'group_replication_set_as_primary' failed. There is a slave channel running in the group's current primary member.

In order to change the PRIMARY to the second node, I will need to stop the asynchronous replication on the first node (192.168.1.151) by issuing the STOP SLAVE command. This stops the replication flowing from 192.168.1.161 (Tampa) to 192.168.1.151 (Atlanta).

I can then change the PRIMARY to be the second node (MEMBER_ID 247898e0-4cb7-11e9-97a9-12f28adcadd1 and IP address of 192.168.1.152). Once the second node is the new PRIMARY, then I can start replication on the second node by running the CHANGE MASTER TO command on 192.168.1.152.

On 192.168.1.151 (Atlanta Group): (Note: You can change the PRIMARY from any node in the Group Replication group)

mysql> stop slave;
Query OK, 0 rows affected (0.01 sec)

mysql> SELECT group_replication_set_as_primary('247898e0-4cb7-11e9-97a9-12f28adcadd1');
+---------------------------------------------------------------------------+
| group_replication_set_as_primary('247898e0-4cb7-11e9-97a9-12f28adcadd1')  |
+---------------------------------------------------------------------------+
| Primary server switched to: 247898e0-4cb7-11e9-97a9-12f28adcadd1          |
+---------------------------------------------------------------------------+
1 row in set (0.00 sec)

I can see the second node is the new PRIMARY: (see the MEMBER_ROLE column)

mysql> SELECT * FROM performance_schema.replication_group_members order by member_host;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| group_replication_applier | c727957e-4cb6-11e9-abd5-f80a484a9c32 | MacVM151.local |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
| group_replication_applier | 247898e0-4cb7-11e9-97a9-12f28adcadd1 | MacVM152.local |        3306 | ONLINE       | PRIMARY     | 8.0.15         |
| group_replication_applier | 49e388f6-4cb7-11e9-8ccf-a355c1c8a6e8 | MacVM153.local |        3306 | ONLINE       | SECONDARY   | 8.0.15         |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

Now I can start replication on 192.168.1.152, with 192.168.1.161 as the source (master) by using the CHANGE MASTER TO command.

CHANGE MASTER TO
  MASTER_HOST = '192.168.1.161',
  MASTER_PORT = 3306,
  MASTER_USER = 'rpl_user',
  MASTER_PASSWORD = 'R3plic4tion!',
  MASTER_AUTO_POSITION = 1
  FOR CHANNEL 'TPA_ATL_REPLICATION';

mysql> CHANGE MASTER TO
    -> MASTER_HOST = '192.168.1.161',
    -> MASTER_PORT = 3306,
    -> MASTER_USER = 'rpl_user',
    -> MASTER_PASSWORD = 'R3plic4tion!',
    -> MASTER_AUTO_POSITION = 1
    -> FOR CHANNEL 'TPA_ATL_REPLICATION';
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

And, the SHOW SLAVE STATUS confirms the SQL and IO threads are running:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.161
                  Master_User: rpl_user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 3957
               Relay_Log_File: ic1-relay_log-tpa_atl_replication.000002
                Relay_Log_Pos: 400
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...

Note: With Group Replication, when the PRIMARY goes down, the group will automatically elect a new PRIMARY server, and the group will keep on processing transactions. But if you are doing "regular" asynchronous replication from the PRIMARY and it goes down, there isn't a way to automatically change the source (master), and replication will be broken. You will need to point the replica (slave) at another server in the Group Replication group. And, when the old PRIMARY comes back online, you will need to issue a STOP SLAVE command if the replication channel is still active on that server; you don't want two replicas (slaves) attached at the same time.
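As a sketch of that manual repointing (a hypothetical failure scenario; the hosts and channel name follow the examples above), if the Atlanta PRIMARY at 192.168.1.151 went down, you could repoint the Tampa replica at a surviving Atlanta member, such as 192.168.1.152:

mysql> STOP SLAVE FOR CHANNEL 'atl_tpa_replication';
mysql> CHANGE MASTER TO
    -> MASTER_HOST = '192.168.1.152',
    -> MASTER_PORT = 3306,
    -> MASTER_USER = 'rpl_user',
    -> MASTER_PASSWORD = 'R3plic4tion!',
    -> MASTER_AUTO_POSITION = 1
    -> FOR CHANNEL 'atl_tpa_replication';
mysql> START SLAVE FOR CHANNEL 'atl_tpa_replication';

Because MASTER_AUTO_POSITION = 1 uses GTID auto-positioning, the replica will pick up exactly where it left off, regardless of which group member it connects to.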


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn. Tony is the author of Twenty Forty-Four: The League of Patriots 
Visit http://2044thebook.com for more information. Tony is the editor/illustrator for NASA Graphics Standards Manual Remastered Edition 
Visit https://amzn.to/2oPFLI0 for more information.
