Planet MySQL

MySQL to MongoDB - An Admin Cheat Sheet

Most software applications nowadays rely on dynamic data storage that the application can reference later. That data lives in a database, and databases fall into two broad categories: relational and non-relational (NoSQL) DBMS.

Your choice between the two depends on your data structure, the amount of data involved, and your database performance and scalability requirements.

Relational DBMS store data in tables as rows and are queried with Structured Query Language (SQL), making them a good choice for applications involving many transactions. They include MySQL, SQLite, and PostgreSQL.

On the other hand, NoSQL DBMS such as MongoDB are document-oriented: data is stored as documents inside collections. This model accommodates large and loosely structured data sets more easily, which gives it a further advantage in scalability.

In this blog we are assuming you already have working knowledge of either MongoDB or MySQL and would like to see how the two correlate in terms of querying and database structure.

Below is a cheat sheet to help you map MySQL querying onto MongoDB.

MySQL to MongoDB Cheat Sheet - Terms

  • Table → Collection: The storage container for data that tends to be similar across the contained objects.
  • Row → Document: The single object entity in the table for MySQL, or in the collection in the case of MongoDB.
  • Column → Field: Every stored item has properties defined by values and data types. In MongoDB, documents in the same collection may have different fields from each other. In MySQL, every row must be defined with the same columns.
  • Primary key → Primary key: Every stored object is identified by a unique field value. In MongoDB the _id field is set automatically, whereas in MySQL you define your own primary key, which increments as you create new rows.
  • Table joins → Embedding and linking documents: Connecting data associated with an object in one table/collection to data in another table/collection.
  • where → $match: Selecting data that matches criteria.
  • group → $group: Grouping data according to some criteria.
  • drop → $unset: Removing a column/field from a row/document.
  • set → $set: Setting the value of an existing column/field to a new value.

Schema Statements

In MySQL, the database and tables are created explicitly, either through an admin tool such as phpMyAdmin or within a script, e.g.:

Creating a Database

CREATE DATABASE database_name

Creating a table

CREATE TABLE users (
    id MEDIUMINT NOT NULL AUTO_INCREMENT,
    UserId VARCHAR(30),
    Age INT,
    Gender CHAR(1),
    Name VARCHAR(222),
    PRIMARY KEY (id)
)

In MongoDB, the database can be created implicitly or explicitly. Implicitly, the database and collection are created during the first document insert, and an automatic _id field is added to that document.

db.users.insert( { UserId: "user1", Age: 55, Name: "Berry Hellington", Gender: "F", } )

You can also create the collection explicitly by running this command in the Mongo Shell:

db.createCollection("users")

In MySQL, you have to specify the columns of the table you are creating, as well as some validation rules such as, in this example, the data type and length allowed for a specific column. In MongoDB, you are not required to define either the fields each document should hold or any validation rules for those fields.

However, for data integrity and consistency in MongoDB you can set validation rules using the JSON schema validator.

Dropping a table

MySQL: DROP TABLE users
MongoDB: db.users.drop()

These are the statements for deleting a table in MySQL and a collection in the case of MongoDB.

Adding a new column called join_date

ALTER TABLE users ADD join_date DATETIME

Removing the join_date column if already defined

ALTER TABLE users DROP COLUMN join_date

Adding a new field called join_date

db.users.updateMany({}, {$set: {join_date: new Date()}})

This will update all documents in the collection to have the join date as the current date.

Removing the join_date field if already defined

db.users.updateMany({}, {$unset: {join_date: ""}})

This will remove the join_date field from all the collection documents.

Altering the structure of the schema by either adding or dropping a column/field.

Since the MongoDB architecture does not strictly enforce a document structure, documents may have fields that differ from each other.

Creating an index with the UserId column ascending and Age descending

CREATE INDEX idx_UserId_asc_Age_desc ON users(UserId ASC, Age DESC)

Creating an index involving the UserId and Age fields.

db.users.ensureIndex( { UserId: 1, Age: -1 } )

Indices are generally created to facilitate the querying process.

MySQL: INSERT INTO users(UserId, Age, Gender) VALUES ("user1", 25, "M")
MongoDB: db.users.insert( { UserId: "bcd001", Age: 25, Gender: "M", Name: "Berry Hellington" } )

Inserting new records.

MySQL: DELETE FROM users WHERE Age = 25
MongoDB: db.users.deleteMany( { Age: 25 } )

Deleting records from the table/collection whose age is equal to 25.

MySQL: DELETE FROM users
MongoDB: db.users.deleteMany({})

Deleting all records from the table/collection.

MySQL: SELECT * FROM users
MongoDB: db.users.find()

Returns all records from the users table/collection with all columns/fields.

MySQL: SELECT id, Age, Gender FROM users
MongoDB: db.users.find( { }, { Age: 1, Gender: 1 } )

Returns all records from the users table/collection with Age, Gender and primary key columns/fields.

MySQL: SELECT Age, Gender FROM users
MongoDB: db.users.find( { }, { Age: 1, Gender: 1, _id: 0 } )

Returns all records from the users table/collection with Age and Gender columns/fields. The primary key is omitted.

MySQL: SELECT * FROM users WHERE Gender = "M"
MongoDB: db.users.find({ Gender: "M" })

Returns all records from the users table/collection whose Gender value is set to M.

MySQL: SELECT Gender FROM users WHERE Age = 25
MongoDB: db.users.find({ Age: 25 }, { _id: 0, Gender: 1 })

Returns all records from the users table/collection with only the Gender value but whose Age value is equal to 25.

MySQL: SELECT * FROM users WHERE Age = 25 AND Gender = 'F'
MongoDB: db.users.find({ Age: 25, Gender: "F" })

Returns all records from the users table/collection whose Gender value is set to F and Age is 25.

MySQL: SELECT * FROM users WHERE Age != 25
MongoDB: db.users.find({ Age: {$ne: 25} })

Returns all records from the users table/collection whose Age value is not equal to 25.

MySQL: SELECT * FROM users WHERE Age = 25 OR Gender = 'F'
MongoDB: db.users.find({ $or: [ { Age: 25 }, { Gender: "F" } ] })

Returns all records from the users table/collection whose Gender value is set to F or Age is 25.

MySQL: SELECT * FROM users WHERE Age > 25
MongoDB: db.users.find({ Age: {$gt: 25} })

Returns all records from the users table/collection whose Age value is greater than 25.

MySQL: SELECT * FROM users WHERE Age <= 25
MongoDB: db.users.find({ Age: {$lte: 25} })

Returns all records from the users table/collection whose Age value is less than or equal to 25.

MySQL: SELECT Name FROM users WHERE Name LIKE "%He%"
MongoDB: db.users.find( { Name: /He/ } )

Returns all records from the users table/collection whose Name value contains the letters "He".

MySQL: SELECT * FROM users WHERE Gender = 'F' ORDER BY id ASC
MongoDB: db.users.find( { Gender: "F" } ).sort( { $natural: 1 } )

Returns all records from the users table/collection whose Gender value is set to F and sorts this result in the ascending order of the id column in case of MySQL and time inserted in the case of MongoDB.

MySQL: SELECT * FROM users WHERE Gender = 'F' ORDER BY id DESC
MongoDB: db.users.find( { Gender: "F" } ).sort( { $natural: -1 } )

Returns all records from the users table/collection whose Gender value is set to F and sorts this result in the descending order of the id column in case of MySQL and time inserted in the case of MongoDB.

MySQL: SELECT COUNT(*) FROM users
MongoDB: db.users.count()

or

db.users.find().count()

Counts all records in the users table/collection.

MySQL: SELECT COUNT(Name) FROM users
MongoDB: db.users.count({Name: { $exists: true }})

or

db.users.find({Name: { $exists: true }}).count()

Counts all records in the users table/collection who happen to have a value for the Name property.

MySQL: SELECT * FROM users LIMIT 1
MongoDB: db.users.findOne()

or

db.users.find().limit(1)

Returns the first record in the users table/collection.

MySQL: SELECT * FROM users WHERE Gender = 'F' LIMIT 1
MongoDB: db.users.find( { Gender: "F" } ).limit(1)

Returns the first record in the users table/collection that happens to have Gender value equal to F.

MySQL: SELECT * FROM users LIMIT 5 OFFSET 10
MongoDB: db.users.find().limit(5).skip(10)

Returns five records from the users table/collection after skipping the first ten records.

MySQL: UPDATE users SET Age = 26 WHERE Age > 25
MongoDB: db.users.updateMany( { Age: { $gt: 25 } }, { $set: { Age: 26 } } )

This sets the Age of all records in the users table/collection whose Age is greater than 25 to 26.

MySQL: UPDATE users SET Age = Age + 1
MongoDB: db.users.updateMany( {} , { $inc: { Age: 1 } } )

This increases the age of all records in the users table/collection by 1.

MySQL: UPDATE users SET Age = Age - 1 WHERE id = 1
MongoDB: db.users.updateOne( { _id: 1 }, { $inc: { Age: -1 } } )

This decrements the Age of the record whose id (MySQL) or _id (MongoDB) equals 1.

To manage MySQL and/or MongoDB centrally and from a single point, visit: https://severalnines.com/product/clustercontrol.


How to Install Cachet Status Page System on Ubuntu 18.04 LTS

Cachet is a beautiful and powerful open source status page system written in PHP that allows you to better communicate downtime and system outages to your customers, teams, and shareholders. In this tutorial, we will install Cachet status page system by utilizing PHP, Nginx, MySQL, and Composer on Ubuntu 18.04 LTS.

Extending WordPress Dockerfile to use MySQL 5.7 (or 8.0)

Oracle’s website shows End of life for MySQL 5.5 as of Jan 20th of 2019, so hurry up and upgrade!

I am working on building some demos for Cloud SQL and one of the requirements I had was to run MySQL 5.7 and WordPress as my sample application. The demo consisted of migrating from a single VM environment with WordPress and MySQL running alongside each other. The narrative: the site got popular and the database became the bottleneck because of all the resources shared between it and the application. The proposed solution? A minimal-downtime migration to Cloud SQL, moving the data layer to a dedicated server.

I am going to be doing this demo a lot of times, so I needed some way to automate it. I thought of doing it through Docker. I am not Docker proficient, so to begin with I asked Anthony for help to get me to what I wanted, but there are so many nuances! Maybe someone will find a better solution than this one, but I decided to share what I got.

Let’s examine the two scenarios I faced. All examples assume Debian/Ubuntu.

I don’t run Docker, just have a VM and want to have MySQL 5.7

In this case it's straightforward: you need to use the official MySQL APT repository, available at https://dev.mysql.com/downloads/repo/apt/.

At the time of writing the most recent version is mysql-apt-config_0.8.12-1_all.deb; keep an eye on this before continuing, because the version may have changed by the time you follow this tutorial.

On line 2 you can change mysql-5.7 to mysql-8.0; if no version is specified in the command, version 8.0 will be installed.

I run Docker and want to have 5.7 or 8.0 installed on it

It's a bit similar to the previous situation: you still need to go to the APT repository page to know which file to download, and add this to your Dockerfile:

Notice that you can also change the version of MySQL here. Don't forget to pass DB_ROOT_PASSWORD when doing your docker build, using the --build-arg argument. More details here.

It works!

These are the workarounds to avoid using MySQL 5.5. After that I was finally able to automate my demo. Feel free to share better examples of what I did; as I said, I don't have proficiency in the subject.

Upcoming Webinar Wed 2/6: Percona Software News and Roadmap Update

Join Percona CEO Peter Zaitsev as he presents Percona Software News and Roadmap Update on Wednesday, February 6, 2019, at 11:00 AM PST (UTC-8) / 2:00 PM EST (UTC-5).

Register Now

Come and listen to Percona CEO Peter Zaitsev discuss what’s new in Percona open source software. Topics include Percona Server for MySQL and MongoDB, Percona XtraBackup, Percona Toolkit, Percona XtraDB Cluster and Percona Monitoring and Management.

During this webinar, Peter will talk about newly released features in Percona software. He will also show a few quick demos and share with you highlights from the Percona open source software roadmap.

Peter will also talk about new developments in Percona commercial services and finish with a Q&A.

Register today to join Peter for his Percona Software News and Roadmap Update.

The Left-Prefix Index Rule

There's an important heuristic in database indexing that I call the left-prefix rule. It helps you understand whether the database will be able to make the best use of a multi-column index to speed up a query. It's one of a small handful of really important things to understand about indexes!


The left-prefix rule can be confusing, but once you grok it, it's simple and easy to use. It applies to MySQL, PostgreSQL, MongoDB, and many other databases. In this post I'll explain what the left-prefix rule means and how it works.

How Multi-Column Indexes Work

Most database indexes are a sorted copy of some data from the table they index. Suppose you have a database of people and you want to find them by name. If you create an index on the first_name attribute, the database makes a sorted, searchable copy of everyone's first name, with pointers back to the original records.

Indexes can have more than a single column. Multi-column indexes are also called compound indexes or composite indexes. The underlying implementation is sorted by all of the columns, in order. Suppose you index (last_name, first_name): now the database creates a copy of both of those columns and sorts them first by the last name, then if there are any people with the same last name, it sorts them by first name.

Here's an example. Notice how there are two entries for Bach, and which one comes first.
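
A minimal sketch in SQL of what such a compound index contains (the table, column sizes and names here are illustrative, not the original post's exact example):

CREATE TABLE people (
  id INT AUTO_INCREMENT PRIMARY KEY,
  last_name VARCHAR(100),
  first_name VARCHAR(100)
);

CREATE INDEX idx_last_first ON people (last_name, first_name);

-- The index entries are kept sorted by last_name, then by first_name:
--   (Bach, Carl Philipp Emanuel)
--   (Bach, Johann Sebastian)
--   (Berlin, Irving)
--   ...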

The Left-Prefix Rule

The left-prefix rule says that a query can search an index efficiently if it provides search terms (such as values in a WHERE clause) that match the leading columns of the index, in left-to-right order.

In our example above, if we're looking for people named Bach, we can search the index efficiently and find all of the Bachs. If we're looking for {Bach, Carl Philipp Emanuel}, we can find that record efficiently too.

But if we're looking for people named Irving, we're going to have to scan the whole index, because there could be Irvings anywhere in the index. It's not sorted in a way that keeps all the first names together, unless you're looking within a single last name. So another way to state the left-prefix rule is that searches that don't constrain the leading index columns aren't very efficient. The index might not even be usable for such searches.
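
A quick sketch of these cases against the illustrative index above:

-- Constrains the leading column: the index narrows the search efficiently
SELECT * FROM people WHERE last_name = 'Bach';

-- Constrains both leading columns, in left-to-right order: also efficient
SELECT * FROM people WHERE last_name = 'Bach' AND first_name = 'Carl Philipp Emanuel';

-- Constrains only first_name, skipping the leading column:
-- the index cannot narrow this search, so it is scanned (or not used at all)
SELECT * FROM people WHERE first_name = 'Irving';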

Inequalities, Ranges, and the Left-Prefix Rule

There's more to the left-prefix rule. The usable prefix of the index, in most databases and under most circumstances, is up to and including the first inequality or range search condition.

Suppose we have a four-column index on columns (a, b, c, d). If we search for records where a=X, and b=Y, and c>Q, and d=R, then the usable prefix is only the first three columns. That's because the database will be able to progressively narrow the search until it gets to the fourth column (a SQL sketch follows the list):

  • Narrow the search to a=X. Good.
  • Narrow the search to b=Y. Okay.
  • Narrow the search to c>Q. That's a set (or range) of potentially many different values of c, so the narrowing stops after this.
  • Look for d=R. This is a scan. Within each matching value of c, the database must scan to find rows with d=R.
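
Here is a sketch of that situation in SQL, using placeholder values for X, Y, Q and R (the table and values are illustrative):

CREATE TABLE t (a INT, b INT, c INT, d INT);
CREATE INDEX idx_abcd ON t (a, b, c, d);

-- Usable prefix: (a, b, c). The index narrows on a = 1, b = 2 and the range c > 10,
-- but rows matching d = 4 must be found by scanning within that range.
SELECT * FROM t WHERE a = 1 AND b = 2 AND c > 10 AND d = 4;
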
Gaps in the Prefix

There's one more case to consider: what if you provide search terms for some but not all of the columns in the index? What if you search for a=X, b=Y, and d=R?

The usable prefix of the index, that's constrained/narrowed by your criteria, is limited to the contiguous prefix for which you've provided search terms. In this case, that's just (a, b). That's your usable left-prefix for constraining the search. A "hole" or gap in the index, meaning that the index has a column without a search term to apply to it, ends the prefix.
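
Continuing the same illustrative index, with a gap at c:

-- No search term for c: the usable, contiguous prefix is only (a, b),
-- and rows matching d = 4 must again be found by scanning.
SELECT * FROM t WHERE a = 1 AND b = 2 AND d = 4;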

Concluding Thoughts

Now you know the three important tenets of the left-prefix rule!

  1. A query can search an index efficiently if it constrains the leading columns of the index, in left-to-right order...
  2. Up to and including the first inequality or range search condition...
  3. Without any gaps in the column prefix.

Next time you're examining the EXPLAIN plan of a query that uses an index, and you see that it doesn't use all the columns of a multi-column (compound/composite) index, check whether the query satisfies the left-prefix rule. If it doesn't, that might explain why the index's full power isn't being used to speed up the query as much as it could!

Photo by Iñaki del Olmo on Unsplash

Updating triggers online with MySQL and MariaDB

This is what can happen if triggers are updated while applications are running

Updating triggers online means that we are able to do this in production, while our applications are running, without causing a disruption.

To achieve this, we need to run CREATE TRIGGER and DROP TRIGGER inside a transaction, but this is not supported by MySQL and MariaDB. If we try to do so, the current transaction is implicitly committed. MariaDB has a variable called in_transaction that allows us to demonstrate this clearly:

MariaDB [test]> START TRANSACTION;
Query OK, 0 rows affected (0.000 sec)

MariaDB [test]> SELECT @@in_transaction;
+------------------+
| @@in_transaction |
+------------------+
|                1 |
+------------------+
1 row in set (0.000 sec)

MariaDB [test]> DROP TRIGGER bi_t;
Query OK, 0 rows affected (0.001 sec)

MariaDB [test]> SELECT @@in_transaction;
+------------------+
| @@in_transaction |
+------------------+
|                0 |
+------------------+
1 row in set (0.000 sec)

The problem is actually more generic: MySQL and MariaDB don’t support transactional DDL. But let’s focus on triggers. I’ll explain what problems this causes, and then I will propose a solution.

The problems

The problems are:

  • Phantom trigger – it’s about every single updated trigger;
  • Inconsistent triggers – it’s about a table’s triggers as a whole;
  • Atomicity – it’s about failures.
Phantom triggers

If I want to replace one trigger with another, as mentioned, I will need to run DROP TRIGGER followed by CREATE TRIGGER. This implies that, in the (hopefully brief) interval between these expressions, the trigger will simply not exist.

If I run these statements in a transaction, this is not a problem. In fact, for other users, both statements will take effect together when I COMMIT the transaction. But again, this cannot happen in MySQL and MariaDB. On a heavily used server, we should definitely expect some users to write to the table without activating a trigger.

However, MariaDB has a solution for this problem: CREATE OR REPLACE TRIGGER. In this way the existing trigger (if any) will be destroyed and the new one will be created, but this operation is atomic. No other user can see the table without a trigger.
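
A minimal sketch of the difference, reusing the bi_t trigger name from the earlier example (the table and trigger body are illustrative):

-- Plain DROP + CREATE: between these two statements the table briefly has no trigger
DROP TRIGGER IF EXISTS bi_t;
CREATE TRIGGER bi_t BEFORE INSERT ON t
FOR EACH ROW SET NEW.created_at = NOW();

-- MariaDB only: the replacement is atomic, so there is no window without a trigger
CREATE OR REPLACE TRIGGER bi_t BEFORE INSERT ON t
FOR EACH ROW SET NEW.created_at = NOW();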

Inconsistent triggers

Even with the first problem solved, in most cases we have a second problem: we have multiple triggers associated with a table, and we always want them to be consistent with each other.

This is better explained with an example. Suppose that we have BEFORE INSERT and BEFORE UPDATE triggers. These triggers perform some data validation, and return an error if some data is not valid. If you want to know how to do this, take a look at an article I wrote some years ago on my personal blog: Validating data using a TRIGGER.

Anyway, the problem here is that, if we change the validation rules in one trigger, we want to change them in the other trigger as well. At no point in time do we want the triggers to be inconsistent.
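
As a rough sketch of such a pair (the users table, the Age column and the rule itself are illustrative, not the author's actual triggers), run from the mysql client with a custom delimiter:

DELIMITER //

CREATE TRIGGER bi_users BEFORE INSERT ON users
FOR EACH ROW
BEGIN
  -- Validation rule: reject invalid data with an error
  IF NEW.Age < 0 THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Age cannot be negative';
  END IF;
END //

CREATE TRIGGER bu_users BEFORE UPDATE ON users
FOR EACH ROW
BEGIN
  -- The same rule must also live here; changing one without the other
  -- leaves the two triggers inconsistent
  IF NEW.Age < 0 THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Age cannot be negative';
  END IF;
END //

DELIMITER ;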

Atomicity

Our triggers operations should completely succeed or completely fail. If, for example, we lose our connection in the middle of these changes, all the changes we’ve done should be undone, to avoid inconsistencies.

The solution

If you used MySQL for centuries like me, you may remember the LOCK TABLES statement. And probably you remember it with horror. It was developed in ancient times as an alternative to transactions, when MyISAM (or probably its ancestor, ISAM) was the default storage engine and InnoDB simply didn't exist. Basically, since you could not use a real transaction, you locked a whole table with a read lock or a write lock. Then you did some stuff, and then you ran UNLOCK TABLES. Operations executed inside the LOCK block were not atomic, but at least other users could not interfere with them. Nowadays, I only see that command in legacy code – but yes, I still see it.

Why does LOCK TABLES help in this case? Because the lock survives statements like DROP TRIGGER and CREATE TRIGGER.

So basically what you would do is the following (a SQL sketch follows the list):

  • LOCK TABLE something WRITE;
  • the command is queued until all existing metadata locks (transactions or DDL in progress) are released; this would happen even with a plain DROP TRIGGER;
  • you DROP old triggers;
  • you CREATE new triggers;
  • UNLOCK TABLES.
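
Putting the bullets together, a minimal sketch in SQL (table and trigger names are illustrative; the trigger bodies stand in for whatever logic you actually need):

LOCK TABLES users WRITE;

-- Drop the old triggers; the write lock keeps other sessions out
DROP TRIGGER IF EXISTS bi_users;
DROP TRIGGER IF EXISTS bu_users;

-- Recreate the triggers with the new, consistent logic
CREATE TRIGGER bi_users BEFORE INSERT ON users
FOR EACH ROW SET NEW.Name = TRIM(NEW.Name);

CREATE TRIGGER bu_users BEFORE UPDATE ON users
FOR EACH ROW SET NEW.Name = TRIM(NEW.Name);

UNLOCK TABLES;
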
Conclusions

What are the drawbacks of this solution?

  • If the connection is lost at some point, no rollback happens. In other words, the Atomicity problem mentioned above is not solved in any way.
  • If a long transaction is running, our LOCK TABLE statement will be queued for a long time, and it will cause several other statements (including SELECTs) to be queued. But this also happens with a simple DROP TRIGGER.
  • For a brief moment, the table will be completely write-locked.

What are the pros?

  • No disruption.
  • Isolation between concurrent sessions.
  • Once the lock is acquired, the operations on triggers will be fast.

Other methods are possible, but they are more complicated and not necessarily less risky. For example, if you use row-based replication, you could update the triggers on a slave, promote it to master, and then update the triggers on the master. But then you have a risk of conflicts during the failover.

All in all, I believe that this method is good enough for most situations.

Toodle pip,
Federico Razzoli

Photo credit: Stefano Trucco


The post Updating triggers online with MySQL and MariaDB appeared first on Federico Razzoli.

What is LOAD DATA and how does it work

I would like to highlight a lesser-known MySQL SQL command, analyze how it works and understand some of the decisions that need to be made when using it.

Let’s start with the basics: The Reference Manual.

It does have a lot to say about LOAD DATA.…

Announcing MySQL Server 8.0.15

MySQL Server 8.0.15, a new version of the popular Open Source Database Management System, has been released. This release is recommended for use on production systems. For an overview of what’s new, please see http://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html For information on installing the release on new servers, please see the MySQL installation documentation at http://dev.mysql.com/doc/refman/8.0/en/installing.html This server release […]

Building an Online Retail Dashboard in React

In this tutorial, we'll be focusing on creating a dashboard for an Online Retail Store. Online retail stores generate revenue through other e-commerce giants in addition to their own websites. This dashboard focuses on showcasing the monthly performance of an online store across three e-commerce aggregators - Amazon, Ebay and Etsy. Further, we have used a combination of charts, maps, and KPI elements to showcase how different sources of revenue are performing, and which geo locations most orders are coming in from. Hope you have as much fun building this dashboard as I did!

Before we go ahead setting up, I wanted to show you a preview of the Online Retail Dashboard you'll be able to create once you've gone through this tutorial. Here's a live link to the dashboard in action.

Setting up the Environment

Including Dependencies We'll be using the following dependencies for this project. To create the dashboard, you'll need to set up these dependencies as explained below:

  1. React, Bootstrap
  2. FusionCharts Core Package, its React Component and FusionMaps package

Including React We’ll be using Facebook’s React boilerplate today, which will set-up React along with all the utilities that we need. So, in your terminal, go ahead and enter:

$ npx create-react-app ecommerce-dashboard

To know more about create-react-app, please refer to this link. Now, we will be adding a few more components that we will need for this tutorial, as explained below:

Including Bootstrap We will be using Bootstrap to create the layout and user-interface for our app. So, in the terminal go ahead and enter:

$ npm install --save bootstrap

Including FusionCharts core package, its React component and FusionMaps package We will be using FusionCharts to render charts in our dashboard app. You can go ahead and check out more about it here.

There are multiple ways to install FusionCharts, for general instructions you can check out this documentation page.

FusionCharts Direct Download You can directly download JS files from FusionCharts website using this link and include them in your application using <script> tag in /public/index.html of application.

Using NPM (we will be using this!) So, in the terminal, navigate to our working directory i.e. ecommerce-dashboard and enter:

$ npm install --save fusioncharts

To render maps, we will also need the FusionMaps definition files. There are multiple ways to install them; for more details you can check out this developer documentation page.

Direct Download You can directly download JS files for all the maps from FusionCharts website using this link and include the respective map file which you want to render using <script> tag in /public/index.html of application.

Using NPM (we will be using this!) So, in the terminal, navigate to our working directory i.e. ecommerce-dashboard and enter:

$ npm install --save fusionmaps

FusionCharts also provides a lightweight and simple-to-use component for React that can be used to add JS charts in react app without any hassle. We will be using it in our app. Let’s install it using the command below:

$ npm install --save react-fusioncharts

You can learn more about FusionCharts react component using this link. Now that we have included all the dependencies, we will go ahead and set-up Google Sheets API.

Google Sheets API Setup We’ll start by creating a new project for our application on developer console to fetch data from Google Sheets. For this article, I’m naming it ecommerce-dashboard.

You can create a new project through this link.

Once the project is created, you’ll be redirected to Google Developer API dashboard. Now, we will enable Google Sheets API for our app. For this in the APIs box, click “Go to APIs overview”. Once you click “Enable APIs and Services” you’ll be presented with the API Library and we’ll go ahead and search for “Google Sheets API”.

Once you find it, click “Enable” and after it is processed you should be seeing the page below.

In the sidebar, head over to “Credentials” and click the “Create credentials” button and select “API Key”. Click the “Restrict Key” and set a name for it (I’ve named it as EcommerceDashboardAPIKey).

Save the key generated, as we’ll need it to pull data from our Google Sheet later.

Under “API Restrictions” select the “Google Sheets API” and save. Now, we are good to go for our next step where we’ll connect Google Sheets API and fetch the data.

Connecting Google Sheets and Fetching data

Now, let's head to the Google Sheet that we will be using for this application. Here's a screenshot of how it looks. I've collected random data for 13 months focusing on KPIs for an online retail business.

Now, we will make the sheet public so that anyone can see it. For this in the File menu, click “Share”. Once you click “Get shareable link” and after it’s processed, the sheet will be shared for “Anyone with link can view” access by default.

Since we want to make the sheet public, head over to “Anyone with link can view” and click the “More” option in the drop-down. Select “On - Public on the web” option and save.

You can access the sheet I’ll be using from this link. We’ll keep a note of the spreadsheet ID (this can be found in the URL for Google Sheets, for me its 1sCgmzBIq2K9jUckLuYSWbDq4CuNUfdtuE6a5xI3I5Hw).

We will be using batchGet method for our dashboard app. It returns one or more ranges of values from a spreadsheet. You can learn more about it here.

Let’s open up our working directory (ecommerce-dashboard for me) in a code editor and create a new file with name config.js. Input your API key and spreadsheet ID in it.

export default { apiKey: 'YOUR-API-KEY', spreadsheetId: '1sCgmzBIq2K9jUckLuYSWbDq4CuNUfdtuE6a5xI3I5Hw' }

Now let’s head over to App.js file. We’ll be adding everything directly to App.js, which came with the boilerplate. This is not ideally the best architecture, rather just a display of the concepts.

Now let’s look at the steps below to show how I’ll connect our dashboard app to Google Sheets API and fetch data from it:

  1. Import config.js we created using the code below and declare a variable with request URL for Google Sheets API.
import config from './config'; const url = `https://sheets.googleapis.com/v4/spreadsheets/${ config.spreadsheetId }/values:batchGet?ranges=Sheet1&majorDimension=ROWS&key=${ config.apiKey }`;
  2. Now, we'll set an empty array in this.state as shown in the code below:
constructor() { super(); this.state = { items:[] }; }
  3. Fetch the JSON data from React's lifecycle componentDidMount method:
componentDidMount() { fetch(url).then(response => response.json()).then(data => { let batchRowValues = data.valueRanges[0].values; const rows = []; for (let i = 1; i < batchRowValues.length; i++) { let rowObject = {}; for (let j = 0; j < batchRowValues[i].length; j++) { rowObject[batchRowValues[0][j]] = batchRowValues[i][j]; } rows.push(rowObject); } this.setState({ items: rows }); }); }

Awesome! Now that we have established a connection with our Google Sheets, we will start building the layout for our dashboard.

Note: You can verify the connection by logging the items variable inside state.

Building the Dashboard Layout

We’ll be using Bootstrap to build the layout for our application. We have already installed it, now let’s import it in our application. Head over to index.js and import the Bootstrap’s CSS file:

import "bootstrap/dist/css/bootstrap.css";

Now let’s divide our application’s layout into 4 parts:

  1. Navigation Section
  2. KPI Section
  3. KPI and Mini Charts Section
  4. Charts Section

We will also over-write some of the default styles provided by Bootstrap using our own CSS that will be present in the file style.css which is included in index.js file.

Creating the Navigation Section To create this, we will consume nav component provided by Bootstrap. You can check it out here. Below is the HTML snippet for the same:

<Nav className="navbar navbar-expand-lg fixed-top is-white is-dark-text"> <div className="navbar-brand h1 mb-0 text-large font-medium"> Online Retail Dashboard </div> <div className="navbar-nav ml-auto"> <div className="user-detail-section"> <span className="pr-2">Hi, Sean</span> <span className="img-container"> {/* add image uri here */} <img src="" className="rounded-circle" alt="user" /> </span> </div> </div> </Nav>

Since we have two navigation sections in our application we will repeat the process above and customize the second navigation section using CSS.

Now, that our navigation section is ready, we’ll create a container to house the next three sections of our application. Here’s the HTML snippet for the same:

<div className="container-fluid"> <!-- kpi section --> <!-- kpi + mini charts section --> <!-- charts section --> </div>

You can learn more about Bootstrap containers here.

Creating the KPI Section To create this, we’ll use rows, columns and cards provided by Bootstrap to create the layout for our KPI section as shown in the image above. Below is the HTML snippet for the same:

<div className="row"> <div className="col-lg-3 col-sm-6"> <div className="card"> <div className="card-heading"> <div> Total Revenue </div> </div> <div className="card-value"> <span>$</span> </div> </div> </div> </div>

The above snippet will create one KPI card (for “Total Revenue”). Similarly, we will create cards for all 4 KPIs that we want to showcase. You can learn more about rows, columns and cards from Bootstrap’s documentation using this link.

Creating the KPI and Mini-Charts Section To create this, we’ll again use rows, columns and cards provided by Bootstrap as we did in the previous step to create the layout for our KPI section as shown in the image above. Below is the HTML snippet for the same:

<div className="row"> <div className="col-md-4 col-lg-3"> <!-- kpi layout as in previous step --> </div> <div className="col-md-8 col-lg-9"> <div className="card"> <div className="row"> <!-- row to include all mini-charts --> <div className="col-sm-4"> <div className="chart-container"> <!-- chart will come here --> </div> </div> </div> </div> </div> </div>

The above snippet will add one KPI to the left and one mini-chart section inside the card to the right. Similarly, we will add other two mini-charts layout inside the card as shown in the dashboard snapshot at the beginning of the article.

Creating the Mini-Charts Section To create this, we’ll again use rows, columns and cards provided by Bootstrap as we did in previous steps to create the layout for our charts as shown in the dashboard snapshot at the beginning of the article. Below is the HTML snippet for the same:

<div className="row"> <div className="col-md-6"> <div className="card"> <div className="chart-div"></div> <!-- chart will come here --> </div> </div> </div>

The above snippet will add one card. We can repeat the above step to create another card. If you’ve followed the above steps till now you should have a similar layout as in the dashboard snapshot at the beginning of the article. If not - don’t worry I’ll be adding the link to Github repo for this dashboard at the end of this tutorial.

Creating KPI’s

Now that our layout is ready, we will define functionality for elements and feed data to them from Google Sheets. For this, we will define a function called getData in our component which will take the month as an argument to de-structure google sheets data present in the app’s state.

Now, we’ll loop through the data to calculate values as needed for KPIs. Below is the code to create the KPI for “Revenue from Amazon”.

getData = arg => { // google sheet data const arr = this.state.items; const arrLen = arr.length; // kpi's // amazon revenue let amRevenue = 0; for (let i = 0; i < arrLen; i++) { if (arg === arr[i]["month"]) { if (arr[i]["source"] === "AM") { amRevenue += parseInt(arr[i].revenue); } } } // setting state this.setState({ amRevenue: amRevenue }); };

Similarly, we will define variables for other KPIs and assign a value to them upon looping through the data using the above code snippet.

Once the value for KPI is calculated we will set its value in app’s state to consume it in the layout as needed. Below is an example to consume the value from the state.

<div className="row"> <div className="col-lg-3 col-sm-6"> <div className="card"> <div className="card-heading"> <div> Revenue from Amazon </div> </div> <div className="card-value"> <span>$</span> {this.state.amRevenue} </div> </div> </div> </div> Creating Charts

Now that our KPI’s are ready, we will loop through data and prepare JSON arrays for our charts and use FusionCharts and its React component to render the charts.

For this, we will be using getData function that we created in the previous step.

Now, we will create a file called chart-theme.js to create a theme for charts that we will be using. This theme will have cosmetics options for all our charts so that we don’t have to define them each time we create one. Here’s how it looks like:

window.FusionCharts.register("theme", { name: "ecommerce", theme: { base: { chart: { bgAlpha: "0", canvasBgAlpha: "0", baseFont: "basefont-regular", baseFontSize: "14", baseFontBold: "0", chartBottomMargin: "0", chartTopMargin: "0", chartLeftMargin: "0", chartRightMargin: "0", canvasBottomMargin: "0", canvasTopMargin: "0", canvasRightMargin: "0", canvasLeftMargin: "0", showBorder: "0", showCanvasBorder: "0", baseFontColor: "#ffffff", captionFontBold: "0", captionFontSize: "14", subCaptionFontSize: "14", tooltipColor: "#ffffff", tooltipBgColor: "#000000", tooltipBgAlpha: "60", tooltipPadding: "5", toolTipBorderAlpha: "10", toolTipBorderRadius: "3", showValues: "0", legendBgAlpha: "0", legendBorderAlpha: "0", legendBorderThickness: "0" } }, bar2d: { chart: { placeValuesInside: "0", usePlotGradientColor: "0", showAlternateVGridColor: "0", chartLeftMargin: "5", canvasLeftMargin: "5", divLineAlpha: "10", divLineColor: "#ffffff", captionFontColor: "#8091ab", paletteColors: "#1D91C0", valuePadding: "5", yAxisName: "Orders", yAxisNameAlpha: "50", yAxisNameFontSize: "12", yAxisNamePadding: "20", valueFontBold: "0", valueFontSize: "12", plotToolText: "<div>$label<br><b>$value orders</b>", captionAlignment: "left", captionPadding: "20" } }, doughnut2d: { chart: { captionFontSize: "14", captionFontColor: "#8091ab", showLabels: "0", showValues: "0", use3DLighting: "0", showPlotBorder: "0", defaultCenterLabel: "75%", pieRadius: "45", doughnutRadius: "33", showTooltip: "0", enableRotation: "0", enableSlicing: "0", startingAngle: "90" } }, geo: { chart: { captionFontSize: "14", captionFontColor: "#8091ab", legendScaleLineThickness: "0", legendaxisborderalpha: "0", legendShadow: "0", plotFillAlpha: "85", showBorder: "1", borderColor: "#ffffff", borderThickness: "0.3", nullEntityColor: "#17202e", nullEntityAlpha: "50", entityFillHoverColor: "#17202e", captionAlignment: "left", entityToolText: "<div>$lname<br><b>$value orders</b>", chartLeftMargin: "40" } } } });

You can learn more about FusionCharts themes from this documentation page.

Now let's begin with creating mini-charts. We will be using a Doughnut Chart (doughnut2d) for this. You can learn more about this chart here.

getData = arg => { // google sheets data const arr = this.state.items; const arrLen = arr.length; let purchaseRate = 0; for (let i = 0; i < arrLen; i++) { if (arg === arr[i]["month"]) { purchaseRate += parseInt(arr[i].purchase_rate / 3); } } // setting state this.setState({ purchaseRate: purchaseRate }); };

Now that our mini-chart’s value is set in the state we will initiate chart instance via FusionCharts’ React component and form JSON array to render the chart. Below is code for the same:

<div className="chart-container full-height"> <ReactFC {...{ type: "doughnut2d", width: "100%", height: "100%", dataFormat: "json", dataSource: { chart: { caption: "Purchase Rate", theme: "ecommerce", defaultCenterLabel: `${this.state.purchaseRate}%`, paletteColors: "#3B70C4, #000000" }, data: [ { label: "active", value: `${this.state.purchaseRate}` }, { label: "inactive", alpha: 5, value: `${100 - this.state.purchaseRate}` } ] } }} /> </div>

This will create a mini-chart for “Purchase Rate”. Similarly, we can follow the above steps to create the other two mini-charts.

Now, let’s move to our charts section.

For this, we will define an empty array inside getData function and push data to it, after looping through Google Sheets data. Below is code snippet for the same:

getData = arg => { // google sheets data const arr = this.state.items; const arrLen = arr.length; // order trend by brand let ordersTrendStore = []; for (let i = 0; i < arrLen; i++) { if (arg === arr[i]["month"]) { if (arr[i]["source"] === "AM") { ordersTrendStore.push({ label: "Amazon", value: arr[i].orders, displayValue: `${arr[i].orders} orders` }); } else if (arr[i]["source"] === "EB") { ordersTrendStore.push({ label: "Ebay", value: arr[i].orders, displayValue: `${arr[i].orders} orders` }); } else if (arr[i]["source"] === "ET") { ordersTrendStore.push({ label: "Etsy", value: arr[i].orders, displayValue: `${arr[i].orders} orders` }); } } } // setting state this.setState({ ordersTrendStore: ordersTrendStore }); };

Now that our chart’s data array is ready, we will initiate the charts instance via FusionCharts’ React component and form JSON array to render the chart.

<div className="chart-container"> <ReactFC {...{ type: "bar2d", width: "100%", height: "100%", dataFormat: "json", dataSource: { chart: { theme: "ecommerce", caption: "Orders Trend", subCaption: "By Stores" }, data: this.state.ordersTrendStore } }} /> </div>

This will create Bar Chart (bar2d) chart in our application. You can learn more about this chart here.

Now that our first chart is ready, we will create our last chart which is a USA region map. For this, we have to import respective map definition file from FusionMaps package that we installed earlier. Below is code for the same:

import Maps from "fusioncharts/fusioncharts.maps"; import USARegion from "fusionmaps/maps/es/fusioncharts.usaregion";

To render the map, we will again define an empty array inside getData function and push data to it corresponding to respective ID which can be taken from this map specification sheet, after looping through Google Sheets data. Below is code snippet for the same:

getData = arg => { // google sheets data const arr = this.state.items; const arrLen = arr.length; // order trend by region let ordersTrendRegion = []; let orderesTrendnw = 0; let orderesTrendsw = 0; let orderesTrendc = 0; let orderesTrendne = 0; let orderesTrendse = 0; for (let i = 0; i < arrLen; i++) { orderesTrendnw += parseInt(arr[i].orders_nw); orderesTrendsw += parseInt(arr[i].orders_sw); orderesTrendc += parseInt(arr[i].orders_c); orderesTrendne += parseInt(arr[i].orders_ne); orderesTrendse += parseInt(arr[i].orders_se); } ordersTrendRegion.push({ id: "01", value: orderesTrendne }, { id: "02", value: orderesTrendnw },{ id: "03", value: orderesTrendse }, { id: "04", value: orderesTrendsw }, { id: "05", value: orderesTrendc }); // setting state this.setState({ ordersTrendRegion: ordersTrendRegion }); };

Now that our map’s data array is ready, we will initiate the charts instance via FusionCharts’ React component and form JSON array to render the map.

<div className="chart-container"> <ReactFC {...{ type: "usaregion", width: "100%", height: "100%", dataFormat: "json", dataSource: { chart: { theme: "ecommerce", caption: "Orders Trend", subCaption: "By Region" }, colorrange: { code: "#F64F4B", minvalue: "0", gradient: "1", color: [ { minValue: "10", maxvalue: "15", code: "#EDF8B1" }, { minvalue: "15", maxvalue: "20", code: "#18D380" } ] }, data: this.state.ordersTrendRegion } }} /> </div>

This will render our map. You can know more about maps and how to use them here.

We will now call getData function with Jan 2019 as an argument from componentDidMount method so that our dashboard loads with Jan 2019 data present in Google Sheet by default. If you’ve followed the above steps till now you should have a functional dashboard as in the image below:

I hope this tutorial will help you create this dashboard using Google Sheets! Now you can work your magic on adding more UI elements, charts, KPIs and more features.

I have added some styling and functionality myself and also used Styled Components for ease. You can know more about Styled Components here. And, you can check out the final live dashboard here.

For any references, you can check out the source code from this Github repository. If you have any questions/feedback, comment below or yell at me on Twitter!

Monitoring your 5.7 InnoDB Cluster status

Recently we had a customer who came to us for help with monitoring their InnoDB cluster setup. They run a 5.7 InnoDB cluster and suffered from a network outage that split up their cluster leaving it with no writeable primaries. As part of their incident followup, they asked us for methods to monitor this cluster.

I had tested before with InnoDB clusters (in both 8.0 and 5.7 variants) and I was confident that we could parse the cluster node "role" (read-write aka primary vs. read-only aka secondary) from the performance_schema tables. As it turned out, this feature is not in 5.7 but only in 8.0. However, the docs on this are wrong for 5.7, as they suggest that the performance_schema.replication_group_members table would show the PRIMARY and SECONDARY role of each cluster node. I have submitted a bug report to Oracle to request this MEMBER_ROLE information to be back-ported from 8.0 (or to at least have the docs updated).

Initial monitoring attempts

Our initial monitoring approach is to check the current number of nodes in the cluster. To have a writeable cluster we need at least two nodes, preferably three, for fault-tolerance.

I started exploring the options in the current situation. My test cluster looked like this at the start:

MySQL node1:3306 ssl JS > cluster.status();
{
    "clusterName": "innodb_cluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "node1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "node1:3306": {
                "address": "node1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "node2:3306": {
                "address": "node2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "node3:3306": {
                "address": "node3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "node1:3306"
}

Which looks like this in performance_schema:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 3811bcf1-1e85-11e9-bcae-5254008481d5 | node2       |        3306 | ONLINE       |
| group_replication_applier | 381400ec-1e85-11e9-b322-5254008481d5 | node1       |        3306 | ONLINE       |
| group_replication_applier | 39380c64-1e85-11e9-b2d2-5254008481d5 | node3       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)

The client mentioned that their cluster issues were triggered by a network issue so I simulated a network issue by blocking all traffic on my eth1 adapter (the adapter over which the cluster communicates) using this iptables command:

iptables -A INPUT -p tcp -i eth1 -j DROP

My error log shows the node splitting off from the cluster:

# tail -3 /var/log/mysqld.log
2019-01-23T09:36:11.223618Z 0 [Warning] Plugin group_replication reported: 'Member with address node3:3306 has become unreachable.'
2019-01-23T09:36:12.226226Z 0 [Warning] Plugin group_replication reported: 'Members removed from the group: node3:3306'
2019-01-23T09:36:12.226309Z 0 [Note] Plugin group_replication reported: 'Group membership changed to node2:3306, node1:3306 on view 15481913107442210:14.'

 

And in performance_schema this looks like this:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 3811bcf1-1e85-11e9-bcae-5254008481d5 | node2       |        3306 | ONLINE       |
| group_replication_applier | 381400ec-1e85-11e9-b322-5254008481d5 | node1       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

So what you could do is run a query like this to get the number of online nodes:

mysql> SELECT COUNT(*) FROM performance_schema.replication_group_members WHERE MEMBER_STATE = 'ONLINE';
+----------+
| COUNT(*) |
+----------+
|        2 |
+----------+
1 row in set (0.00 sec)

Here, you need to make sure that the member count is greater than one, as a single node would be a partitioned node without a quorum and thus not writeable by design. In my cluster, a value of 2 could be a WARNING level alert: the cluster is still operational but no longer tolerates node failures. A value of 3 or higher would be considered a "good" value. If your cluster is larger than three nodes, the appropriate values can be found here.
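
As a sketch, the same threshold logic can be folded into one monitoring query (the status labels are just an example, not part of any standard):

SELECT COUNT(*) AS online_members,
       CASE
         WHEN COUNT(*) >= 3 THEN 'OK'
         WHEN COUNT(*) = 2  THEN 'WARNING'
         ELSE 'CRITICAL'
       END AS check_result
FROM performance_schema.replication_group_members
WHERE MEMBER_STATE = 'ONLINE';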

Alternative approach

What we really want to monitor is that there is at least one PRIMARY node in the cluster. InnoDB Cluster is designed to allow for multiple primaries; however, this is not the default configuration. Therefore we would check for “at least one” and not “just one” primary.

In this approach, we could attempt to find the current primary and write a check to see if this node is “ONLINE”. This could be done like this:

mysql> SELECT COUNT(*) FROM performance_schema.replication_group_members WHERE member_id = (SELECT variable_value FROM performance_schema.global_status WHERE variable_name = 'group_replication_primary_member') AND MEMBER_STATE = 'ONLINE';
+----------+
| COUNT(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)

Note: I have not tested how this would look on a multi-primary cluster. So I would check for value 1 at this point.

Let’s go JSON

The cluster is designed to always promote a new primary as long as there is a quorum in the cluster. So what you really want to monitor is the cluster status from the JSON output in the `cluster.status()` command. I’ve been testing with the `mysqlsh` tool to see if you can script that out. I’ve been using MySQL shell 8.0 (even on a 5.7 cluster this is the recommended version to run, so my suggestion would be to upgrade the mysql-shell package).

To script this out you can use the command `mysqlsh root@127.0.0.1 -- cluster status`, which provides JSON output that you can easily parse using the command-line `jq` tool:

With three active nodes:

# mysqlsh root@127.0.0.1 -- cluster status | jq .defaultReplicaSet.status
"OK"

With two active nodes:

# mysqlsh root@127.0.0.1 -- cluster status | jq .defaultReplicaSet.status
"OK_NO_TOLERANCE"

When you are in a NO_QUORUM state, there are some messages that prevent the output from being valid JSON:

# mysqlsh root@127.0.0.1 -- cluster status
WARNING: Cluster has no quorum and cannot process write transactions: 2 out of 3 members of the InnoDB cluster are unreachable from the member we're connected to, which is not sufficient for a quorum to be reached.
WARNING: You are connected to an instance in state 'Read Only'
Write operations on the InnoDB cluster will not be allowed.
...

However, we can grep these messages out to become valid JSON:

# mysqlsh root@127.0.0.1 -- cluster status | grep -v WARNING | grep -v 'Write operations on the InnoDB cluster will not be allowed.' | jq .defaultReplicaSet.status
"NO_QUORUM"

Conclusion

These three options can be implemented in almost any monitoring solution and will help you to provide some insight into the status of your InnoDB cluster.

Tungsten Clustering 5.3.6 and Tungsten Replicator 5.3.6 Released

Continuent is pleased to announce that Tungsten Clustering 5.3.6 and Tungsten Replicator 5.3.6 are now available!

Our 5.3.6 release fixes a number of bugs and has been released to improve stability.

Highlights common to both products:

  • Now, instead of searching for a master with appropriate role (i.e. matching the slave preferred role) until timeout is reached, the replicator will loop twice before accepting connection to any host no matter what its role is. (CT-712)
  • Changing the state machine so that RESTORING is not a substate of OFFLINE:NORMAL, but instead of OFFLINE so that a transition from OFFLINE:NORMAL:RESTORING to ONLINE is not possible any longer. Now it will not be possible to transition from
    OFFLINE:RESTORING to ONLINE (CT-797)
  • When a heartbeat is inserted, it will now use the JVM timezone instead of the hardcoded UTC. (CT-803)
  • Now skipping files that are not valid backup properties files (i.e. 0-byte store* files or with invalid dates). (CT-820)

Highlights in the clustering product:

  • Don’t try to backup a witness server. (CT-669)
  • Create a Nagios script to check policy. (CT-675)
  • Fixing the output of tpm diag to include the simple form of cctrl ls for each dataservice. (CT-681)
  • Validation check MySQLMyISAMCheck is faulty and has been removed. (CT-756)
  • When installing from an RPM, the installation would automatically restart the connector during the installation. This behavior can now be controlled by setting the parameter no-connectors within the ini configuration. This will prevent tpm from restarting the connectors during the automated update processing. (CT-792)
  • Make an INI-based role change during upgrade an error rather than a warning, since this is not best practice. (CT-801)
  • The Connector has been modified to get the driver and JDBC URL of the datasource from the Connector-specific configuration, overriding the information normally distributed to it by the manager. This prevents the Connector from using incorrect settings, or empty values. (CT-802)
  • Obfuscating clear text Manager passwords found in tpm diag. (CT-822)
  • Fixed the script template to avoid failures with signed / unsigned timestamps difference that occurs under edge conditions. (CT-824)
  • Prevent cctrl from hanging by putting the replicator online asynchronously. If the master is offline it will go to the SYNCHRONIZING state. As the master comes online the slaves will come online also. (CT-825)

Release notes:

https://docs.continuent.com/tungsten-clustering-5.3/release-notes-5-3-6.html

https://docs.continuent.com/tungsten-replicator-5.3/release-notes-5-3-6.html

 

MySQL Connector/Node.js 8.0.15 has been released

Dear MySQL users,

MySQL Connector/Node.js is a new Node.js driver for use with the X
DevAPI. This release, v8.0.15, is a maintenance release of the
MySQL Connector/Node.js 8.0 series.

The X DevAPI enables application developers to write code that combines
the strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing
traditional SQL.

MySQL Connector/Node.js can be downloaded through npm (see
https://www.npmjs.com/package/@mysql/xdevapi for details) or from
https://dev.mysql.com/downloads/connector/nodejs/.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Node.js, and
its usage, see http://dev.mysql.com/doc/dev/connector-nodejs/.

Please note that the X DevAPI requires at least MySQL Server version
8.0 or higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

Changes in MySQL Connector/Node.js 8.0.15 (2019-02-01, General Availability)

This release contains no functional changes and is published
to align version number with the MySQL Server 8.0.15 release.

On Behalf of Oracle/MySQL Release Engineering Team,
Hery Ramilison

MySQL Connector/C++ 8.0.15 has been released

Dear MySQL users,

MySQL Connector/C++ 8.0.15 is a new release version of the MySQL
Connector/C++ 8.0 series.

Connector/C++ 8.0 can be used to access MySQL implementing Document
Store or in a traditional way, using SQL queries. It allows writing
both C++ and plain C applications using X DevAPI and X DevAPI for C.
It also supports the legacy API of Connector/C++ 1.1 based on JDBC4.

To learn more about how to write applications using X DevAPI, see
“X DevAPI User Guide” at

https://dev.mysql.com/doc/x-devapi-userguide/en/

See also “X DevAPI Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/devapi_ref.html

and “X DevAPI for C Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/xapi_ref.html

For generic information on using Connector/C++ 8.0, see

https://dev.mysql.com/doc/dev/connector-cpp/

For general documentation about how to get started using MySQL
as a document store, see

http://dev.mysql.com/doc/refman/8.0/en/document-store.html

To download MySQL Connector/C++ 8.0.15, see the “Generally Available (GA)
Releases” tab at

https://dev.mysql.com/downloads/connector/cpp/

==================================================

Changes in MySQL Connector/C++ 8.0.15 (2019-02-01, General Availability)

This release contains no functional changes and is published
to align version number with the MySQL Server 8.0.15 release.

Enjoy and thanks for the support!

On Behalf of Oracle/MySQL Release Engineering Team,
Piotr Obrzut

MySQL Connector/Python 8.0.15 has been released

Dear MySQL users,

MySQL Connector/Python 8.0.15 is the latest GA release version of the
MySQL Connector Python 8.0 series. The X DevAPI enables application
developers to write code that combines the strengths of the relational
and document models using a modern, NoSQL-like syntax that does not
assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Python, and its
usage, see http://dev.mysql.com/doc/dev/connector-python.

Please note that the X DevAPI requires MySQL Server version 8.0 or
higher with the X Plugin enabled. For general documentation about how
to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

To download MySQL Connector/Python 8.0.15, see the “Generally Available
(GA) Releases” tab at http://dev.mysql.com/downloads/connector/python/

Enjoy!

Changes in MySQL Connector/Python 8.0.15 (2019-02-01, General Availability)

Bugs Fixed

* The default value of the allow_local_infile option changed from True
to False. (Bug #29260128)

Enjoy and thanks for the support!

On behalf of the MySQL Release Team,
Nawaz Nazeer Ahamed

MySQL Connector/J 8.0.15 has been released

Dear MySQL users,

MySQL Connector/J Version 8.0.15 is the GA release of the 8.0
branch of MySQL Connector/J. It is suitable for use with MySQL Server
versions 8.0, 5.7, 5.6, and 5.5. It supports the Java Database
Connectivity (JDBC) 4.2 API, and implements the X DevAPI.

This release includes the following new features and changes, also
described in more detail on

https://dev.mysql.com/doc/relnotes/connector-j/8.0/en/news-8-0-15.html

As always, we recommend that you check the “CHANGES” file in the
download archive to be aware of changes in behavior that might affect
your application.

To download MySQL Connector/J 8.0.15 GA, see the “Generally Available
(GA) Releases” tab at http://dev.mysql.com/downloads/connector/j/

Enjoy!

Changes in MySQL Connector/J 8.0.15 (2019-02-01, General Availability)

Functionality Added or Changed

* The default value of the connection property
allowLoadLocalInfile has been changed to false.
Applications that use the LOAD DATA LOCAL INFILE
(http://dev.mysql.com/doc/refman/8.0/en/load-data.html)
statement on MySQL Server need to set this property to
true explicitly; a short example of the statement follows
below. (Bug #29261254)
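
For context, this is the kind of statement the property gates. The table and file names below are hypothetical, and the server side must also have its local_infile system variable enabled for the load to succeed:

-- Hypothetical file and table, for illustration only.
LOAD DATA LOCAL INFILE '/tmp/users.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';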

Enjoy and thanks for the support!

On Behalf of MySQL/ORACLE RE Team
Gipson Pulla

MySQL Workbench 8.0.15 has been released

Dear MySQL users,

The MySQL developer tools team announces 8.0.15 as the general
availability (GA) release of MySQL Workbench 8.0.

For the full list of changes in this revision, visit
http://dev.mysql.com/doc/relnotes/workbench/en/changes-8-0.html

For discussion, join the MySQL Workbench Forums:
http://forums.mysql.com/index.php?152

The release is now available in source and binary form for a number of
platforms from our download pages at:

http://dev.mysql.com/downloads/tools/workbench/

Enjoy!

Covering Indexes in MySQL, PostgreSQL, and MongoDB

A covering index is a fantastic query performance optimization. An index covers a query when the index has all the data needed to execute the query, so the server can retrieve the query’s data without reading any rows or documents. Covered queries are usually SELECT queries, but in addition to reads, indexes can cover portions of what a write query needs to do its work.

In this article I’ll explain what a covering index is in a bit more detail; the technical implementation and caveats in MySQL, PostgreSQL, and MongoDB; and how to check that a query is using a covering index.

What’s a Covering Index?

A covering index, or index-covered query, doesn’t refer just to an index. Instead, it’s the combination of a query and an index: the index covers the query. An index that covers one query might not cover another query, so “covering” doesn’t solely describe the index, it describes the index and query together.

To “cover” a query, an index must contain all of the data mentioned in the query. That is, the index has the data in the columns the query returns, as well as the columns the query uses for WHERE clauses, grouping, or any other part of the query.
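
To make that concrete, here is a minimal sketch in MySQL-flavored SQL. The orders table, its columns, and the index name are hypothetical and exist only to illustrate the idea:

-- Hypothetical table used only for illustration.
CREATE TABLE orders (
  id          INT UNSIGNED   NOT NULL AUTO_INCREMENT,
  customer_id INT UNSIGNED   NOT NULL,
  status      VARCHAR(20)    NOT NULL,
  total       DECIMAL(10,2)  NOT NULL,
  created_at  DATETIME       NOT NULL,
  PRIMARY KEY (id)
);

-- The query below filters on customer_id and status and returns only total,
-- so an index on (customer_id, status, total) holds every column it touches.
CREATE INDEX idx_customer_status_total
    ON orders (customer_id, status, total);

SELECT total
FROM orders
WHERE customer_id = 42
  AND status = 'shipped';

Drop total from that index and the same query is no longer covered: the server would have to look up each matching row just to fetch total.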

Covering indexes make queries run faster! The server accesses less data, and potentially in a more optimized way, because of how indexes work. The most common implementation of indexing is basically a sorted copy of the original data, with a quick-lookup data structure to navigate it and pointers back to the original rows or documents. It’s a lot like the index in a physical book: keywords in sorted order, followed by the page numbers where you can find each keyword.

Covering indexes optimize query speed because following an index entry back to the underlying row or document is the slow part, just like finding page numbers in the book’s index and then flipping to the right page to find the related text. When the index covers the query, the server doesn’t have to go elsewhere to find more data. The index has everything that’s needed. And a lot of times it has it in the order that’s needed, too, which helps avoid random jumping around from place to place to find data.

How do Databases Support Covering Indexes?

Not all databases offer covering-index support, and those that do usually have some caveats and small details to be aware of, lest the covering index be disqualified from optimizing the query. Fortunately, there’s support for covering indexes in MySQL, PostgreSQL, and MongoDB. Here’s an overview of each of those.

MySQL

  • Supported since: early versions
  • Terminology: covering index, index-covered query
  • Caveats: works best with the most common storage engines, like InnoDB and MyISAM. Some storage engines like HEAP don’t support covering indexes. With InnoDB, sometimes data that’s been modified by a transaction will be ineligible for a covering index optimization.
  • Extras: with the InnoDB storage engine, the primary key columns are always invisibly included in secondary indexes and can be used for index-covered queries (see the sketch after this list).
  • Documentation for more details: https://dev.mysql.com/doc/refman/8.0/en/explain-output.html#jointype_index
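
As a quick illustration of that InnoDB behavior, here is a sketch with a hypothetical table; note that id can be returned from the secondary index even though it is not named in the index definition:

-- With InnoDB, a secondary index implicitly carries the primary key columns,
-- so idx_email effectively contains (email, id).
CREATE TABLE users_example (
  id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  email VARCHAR(255) NOT NULL,
  name  VARCHAR(100) NOT NULL,
  KEY idx_email (email)
) ENGINE=InnoDB;

-- This query touches only email and id, so it can be answered from idx_email alone.
SELECT id FROM users_example WHERE email = 'someone@example.com';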

PostgreSQL

  • Supported since: 9.2, with more robust support in version 9.6 and newer
  • Terminology: index-only scan
  • Caveats: works best with B-Tree indexes, but sometimes works with other index types. GIN indexes don’t work. Some rows’ visibility might not be set in the visibility map if they’ve been modified recently, so access to the original rows might still be required. The planner doesn’t always know how to use index-only scans in all theoretically possible cases.
  • Extras: can work with expression indexes too, but the rules are complicated. Since version 11, indexes can INCLUDE extra payload columns that aren’t indexed but are useful for allowing index-only scans (see the sketch after this list).
  • Documentation for more details: https://www.postgresql.org/docs/current/indexes-index-only-scans.html
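
Here is a minimal sketch of the INCLUDE clause, reusing the hypothetical orders table from the earlier example (PostgreSQL 11 or newer):

-- total is not part of the search key, but carrying it in the index as a
-- payload column allows an index-only scan for this query.
CREATE INDEX idx_orders_customer_incl
    ON orders (customer_id) INCLUDE (total);

SELECT total FROM orders WHERE customer_id = 42;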

MongoDB

  • Supported since: very old versions, with improvements over time
  • Terminology: covered query
  • Caveats: Doesn’t work with array fields or geospatial indexes. Doesn’t work when querying sharded collections through mongos unless the index contains the shard key.
  • Extras: in version 3.6, covering can work with fields within embedded documents.
  • Documentation for more details: https://docs.mongodb.com/manual/core/query-optimization/#covered-query

How To Use EXPLAIN To Tell If A Query Uses A Covering Index

The best way to figure out whether a query benefits from a covered index optimization is to explain the query and examine its execution plan. The output of explaining a query is pretty complicated and different for each database, so I’ll discuss this only at a pretty high level here, with links to documentation.

  • MySQL puts a note in the Extra field of the EXPLAIN output. In older-style tabular EXPLAIN, it’s “Using index”. There are a few other notes that can go in Extra that begin with “Using index” but have different meanings; covering indexes are notated by “Using index” without any further details. In newer-style JSON format, there’s a property named using_index which is true or false (a short example follows this list).
  • PostgreSQL indicates the access type prominently in the EXPLAIN output. It’ll appear something like “Index Only Scan using tbl1_pkey on tbl1”.
  • MongoDB is a bit trickier. In earlier versions of MongoDB, there’s an indexOnly true/false property in the JSON document that you get as a result of running a query with explain. In more recent versions, quoting the manual, “When an index covers a query, the explain result has an IXSCAN stage that is not a descendant of a FETCH stage, and in the executionStats, the totalDocsExamined is 0.”
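
For the MySQL case, here is a quick sketch against the hypothetical orders table and index from earlier:

-- In tabular EXPLAIN, a covered query shows "Using index" in the Extra column.
EXPLAIN
SELECT total
FROM orders
WHERE customer_id = 42
  AND status = 'shipped';

-- The JSON format exposes the same fact as a using_index property.
EXPLAIN FORMAT=JSON
SELECT total
FROM orders
WHERE customer_id = 42
  AND status = 'shipped';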

Concluding Thoughts

Covering indexes are such a powerful optimization that if a query is a heavy hitter in your server’s overall workload, creating a new index is often well worth the extra cost and disk space. You can use a workload profile to figure this out. For example, in this screenshot from the VividCortex profiler, you can see the first two SELECT queries are a huge proportion of the overall workload, making them potentially good candidates for optimizing with a covering index.

Using an index to cover a query can often speed it up by orders of magnitude. It's not always that dramatic an optimization, but I've personally seen those types of results many times. Of course your mileage may vary, etc.

Now you know what a covering index (or covered query) is, why it's faster than a query that's not covered by an index, and how to check whether a query is covered by an index. Next time you're looking at a query that's slower than you want, check whether an index could potentially cover it. Happy query optimization!

Photo by James Sutton on Unsplash

MySQL Connector/NET 8.0.15 has been released

Dear MySQL users,

MySQL Connector/NET 8.0.15 is the third version to support
Entity Framework Core 2.1 and the fifth general availability release
of MySQL Connector/NET to add support for the new X DevAPI, which
enables application developers to write code that combines the
strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/index.html. For more
information about how the X DevAPI is implemented in Connector/NET, see
http://dev.mysql.com/doc/dev/connector-net.

NuGet packages provide functionality at a project level. To get the
full set of features available in Connector/NET, such as availability
in the GAC, integration with Visual Studio’s Entity Framework Designer,
and integration with MySQL for Visual Studio, installation through
the MySQL Installer or the stand-alone MSI is required.

Please note that the X DevAPI requires MySQL Server version 8.0 or
higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

To download MySQL Connector/NET 8.0.15, see
http://dev.mysql.com/downloads/connector/net/

Installation instructions can be found at
https://dev.mysql.com/doc/connector-net/en/connector-net-installation.html

Changes in MySQL Connector/NET 8.0.15 (2019-02-01)
Bugs Fixed

* The client library has been modified to initialize the
MySqlBulkLoader class with the local-infile capability
disabled by default (see Using the BulkLoader Class
(http://dev.mysql.com/doc/connector-net/en/connector-net-programming-bulk-loader.html)).
(Bug #29259767)

On Behalf of MySQL Release Engineering Team,
Surabhi Bhat

IPv6 support in Group Replication

In 8.0.14, we added to Group Replication (GR) the ability to use IPv6 in all of its network-related configuration parameters. This means that you can now take advantage of this technology and “rock” those billion addresses using MySQL Group Replication.

What’s New

When you operate a Group Replication group, several variables take network addresses, mainly the following (an illustrative IPv6 configuration sketch follows the list):

  • group_replication_local_address
  • group_replication_ip_whitelist
  • group_replication_group_seeds
  • group_replication_force_members
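
As an illustrative sketch only (the addresses, ports, and subnets below are made up), IPv6 endpoints are written with the address in square brackets when a port is attached:

-- Example values set on a member before starting Group Replication.
SET GLOBAL group_replication_local_address = '[2001:db8::10]:33061';
SET GLOBAL group_replication_group_seeds   = '[2001:db8::10]:33061,[2001:db8::11]:33061,[2001:db8::12]:33061';
SET GLOBAL group_replication_ip_whitelist  = '2001:db8::/64,::1/128';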
