Planet MySQL

Dynimize Quickstart Tutorial

Duration: 10 min

Level: Beginner

 

This tutorial will show you how to install Dynimize. We'll then use it to optimize a live MySQL process running the Sysbench OLTP benchmark, obtaining a 46% increase in performance.

 

Part 1:  Quickstart

 

Here's a quick overview of how to use Dynimize.

To install Dynimize, run the following commands.

wget https://dynimize.com/install -O install
wget https://dynimizecloud.com/install.sha256 -O install.sha256
sha256sum -c install.sha256; if [ $? -eq 0 ]; then sudo bash ./install -d; fi

 

Use your access token to start a subscription license for your host.

$ sudo dyni -license=start -token=<your-access-token>

 

To start Dynimize, run

$ sudo dyni -start

 

To check the status of Dynimize and any optimization targets, run:

$ sudo dyni -status

 

Once a target process has been fully optimized, the above command will report:

Dynimize is running
mysqld, pid: 21091, dynimized

 

It can take anywhere from 30 to 300 seconds of CPU-intensive activity from the target process before Dynimize has completed profiling and optimizing that process.

You can decide which applications to target by editing the list of supported target executables in the [exeList] section of /etc/dyni.conf, as illustrated below.
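As an illustration only (the exact file syntax is an assumption here, so check the Dynimize documentation for the authoritative format), an [exeList] section that targets MySQL might simply list the executable name:

[exeList]
mysqld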

 

To stop Dynimize, run:

$ sudo dyni -stop

 

To stop the subscription license for this host, run:

$ sudo dyni -license=stop -token=<your-access-token>

 

Part 2:   Benchmarking with Dynimize

 

In the remainder of this tutorial we'll demonstrate how to benchmark MySQL using the Sysbench OLTP benchmark, and then improve on the benchmark results by installing and running Dynimize alongside MySQL. We try to keep things as simple as possible, and so by no means is this meant to demonstrate the most realistic or optimal way to benchmark MySQL with Sysbench, or accurately measure the improvements Dynimize can achieve. This tutorial can be completed with any version of MySQL, MariaDB, or Percona Server, and it is assumed that one of these is already installed.

Note that the benchmark is run using a CPU-bound setup, with --oltp-read-only=on so that the I/O subsystem is not involved, and with the workload sized to fit in RAM. This example was run on a cloud-based VPS from OVH with 2 vCPUs (2.4 GHz Haswell), 8 GB of RAM, and a 40 GB SSD. You can recreate this exact VM on OVH's public cloud by selecting the VPS-SSD-3 server option in their Beauharnois data center. We used Ubuntu 16.04.2 LTS with MySQL 5.7.18 and Sysbench 0.4.12. Sysbench 0.4 is used in this tutorial because it can easily be installed from most Linux repositories. Note that Dynimize speedups can be more pronounced when running on KVM hypervisors, as is the case here.
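One simple way to keep such a run CPU bound is to make the InnoDB buffer pool larger than the working set. As an illustration only (the value below is an assumption for an 8 GB machine, not necessarily the setting used in the original run), you could add the following to my.cnf and restart MySQL:

[mysqld]
innodb_buffer_pool_size = 4G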

 

1. Install Sysbench

apt-get based install:

$ sudo apt-get install sysbench

 

OR

yum based install:

$ sudo yum install sysbench

 

2. Run sysbench

In the following commands, make sure to replace user and password with something appropriate. Here we use the database name testDB.

Let's start off with a fresh MySQL process:

$ sudo service mysql restart

 

Load a table for the Sysbench OLTP workload. Make sure to replace user and password as appropriate:

$ mysql -u user --password=password -e "DROP DATABASE testDB;"
$ mysql -u user --password=password -e "CREATE DATABASE testDB;"
$ sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=testDB --mysql-user=user --mysql-password='password' prepare

 

Perform a warm-up run:

$ sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=testDB --mysql-user=user --mysql-password='password' --max-time=120 --oltp-read-only=on --max-requests=0 --num-threads=8 run

 

Perform a measurement run (command is the same as warm-up):

$ sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=testDB --mysql-user=user --mysql-password='password' --max-time=120 --oltp-read-only=on --max-requests=0 --num-threads=8 run

 

These runs will produce results in the following format:

OLTP test statistics:
queries performed:
read: 1203062
write: 0
other: 171866
total: 1374928
transactions: 85933 (1432.12 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 1203062 (20049.73 per sec.)
other operations: 171866 (2864.25 per sec.)

Test execution summary:
total time: 60.0039s
total number of events: 85933
total time taken by event execution: 479.6911
per-request statistics:
min: 1.19ms
avg: 5.58ms
max: 41.07ms
approx. 95 percentile: 7.85ms

Threads fairness:
events (avg/stddev): 10741.6250/233.97
execution time (avg/stddev): 59.9614/0.00

 

In this case we can see 1432.12 transactions per second, shown in the transactions line above.

 

3. Repeat with Dynimize

Let's repeat that with Dynimize running.

If you haven't already done so, install Dynimize:

wget https://dynimize.com/install -O install
wget https://dynimizecloud.com/install.sha256 -O install.sha256
sha256sum -c install.sha256; if [ $? -eq 0 ]; then sudo bash ./install -d; fi

 

If you haven't already started a subscription license for your host, use your access token to start one:

$ sudo dyni -license=start -token=<your-access-token>

 

Now start Dynimize:

$ sudo dyni -start

 

Perform another warmup run to get the mysqld process in the "dynimized" state:

$ sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=testDB --mysql-user=user --mysql-password='password' --max-time=120 --oltp-read-only=on --max-requests=0 --num-threads=8 run

 

After the warmup make sure mysqld is dynimized:

$ sudo dyni -status
Dynimize is running
mysqld, pid: 25072, dynimized

 

If the mysqld process is not yet dynimized then repeatedly run the warm-up followed by sudo dyni -status until mysqld shows up as dynimized.

 

Now that mysqld is dynimized, let's measure its performance again:

$ sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=testDB --mysql-user=user --mysql-password='password' --max-time=120 --oltp-read-only=on --max-requests=0 --num-threads=8 run

 

This run will produce output like the following:

OLTP test statistics:
queries performed:
read: 1759352
write: 0
other: 251336
total: 2010688
transactions: 125668 (2094.36 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 1759352 (29321.06 per sec.)
other operations: 251336 (4188.72 per sec.)

Test execution summary:
total time: 60.0030s
total number of events: 125668
total time taken by event execution: 479.5530
per-request statistics:
min: 0.77ms
avg: 3.82ms
max: 492.75ms
approx. 95 percentile: 5.52ms

Threads fairness:
events (avg/stddev): 15708.5000/166.91
execution time (avg/stddev): 59.9441/0.00

 

Here we see transactions per second of 2094.36, which is a 46% speedup from the baseline of 1432.12 tps.

That concludes this tutorial. Happy Dynimizing!

MySQL: The Impact of Transactions on Query Throughput

Recently I had a customer where every single query, even the simplest SELECT, was running in a transaction. Unfortunately, this is not unique, and many connectors, like the Java connector, love to do that.

In their case, the Java connector set autocommit=off on the connection at the beginning, and as these were permanent connections they never, or almost never, reconnected.

In the slow query log we could see a COMMIT after every SELECT. So why is this a problem?

Test Case

As always, the best way to deal with a problem is to test it. I created two t3.xlarge EC2 instances with Ubuntu, one for the application and one for the database. I used sysbench to run my tests.

I created a table with 1 million records and ran simple primary-key point selects against the database. I was using three threads, running only 1 select per transaction, and every test ran for 60 seconds. I ran every test 3 times and took the average of the three runs. These are the only MySQL variables I changed:

innodb_buffer_pool_size=5G
innodb_adaptive_hash_index=off
query_cache_size=0
query_cache_type=0

The dataset fit in memory, and there were no disk reads.

I have tested four cases (a sketch of what each case executes follows the list):

  1. PK selects – without transaction (select only)
  2. PK selects – inside a transaction (begin,select,commit)
  3. PK selects – autocommit=off (select,commit)
  4. PK selects – Read-Only transaction (START TRANSACTION READ ONLY, select, commit)
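To make the four cases concrete, here is roughly what one iteration executes in each mode. This is only a sketch: the real statements are generated by sysbench, and the sbtest1 table and id/c columns are the usual sysbench defaults, assumed here for illustration.

-- 1. select only (autocommit=1)
SELECT c FROM sbtest1 WHERE id = 4242;

-- 2. explicit transaction
BEGIN;
SELECT c FROM sbtest1 WHERE id = 4242;
COMMIT;

-- 3. autocommit=off (the first statement implicitly opens a transaction)
SET autocommit = 0;
SELECT c FROM sbtest1 WHERE id = 4242;
COMMIT;

-- 4. read-only transaction
START TRANSACTION READ ONLY;
SELECT c FROM sbtest1 WHERE id = 4242;
COMMIT;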
Disclaimer

I was not trying to do a proper performance test to find the maximum performance of the server. I was only trying to demonstrate the impact on the number of queries executed in a limited time window when every single query runs in a transaction.

Test results on MySQL 5.6

Because my customer was running on MySQL 5.6 I did my first tests on that version as well.

Here we can see the average transactions per second (inside InnoDB everything is a transaction, even if we do not start one explicitly). In the same time window, sysbench could run more than twice as many PK lookups without transactions as inside transactions. The differences are big.

But what does that mean? Are my selects slower?

It depends on your point of view. If you measure the whole transaction time, yes, it takes more time; but if you measure the single select statement, it should take the same amount of time. So your individual selects are not going to be slower.

Let me try to oversimplify this. Let's say your database server can run only 1 query per second. That means in 100 seconds it can run 100 queries. What if we use transactions? In the first second the database server runs the BEGIN, in the second it runs the SELECT, and in the third second it runs the COMMIT. So it runs 1 select every three seconds: in 100 seconds, 33 begins, 33 selects, and 33 commits. Your query time is not going to be higher, but your throughput is going to be impacted.

If there is no real reason (for example, repeatable read) why your selects should run in transactions, I would recommend avoiding them, because you can save a lot of extra round trips between the application and the database server and some CPU time. BEGIN and COMMIT are very fast on their own, but at scale the savings add up.

What is a transaction?

From InnoDB manual:

Transactions are atomic units of work that can be committed or rolled back. When a transaction makes multiple changes to the database, either all the changes succeed when the transaction is committed, or all the changes are undone when the transaction is rolled back.

Database transactions, as implemented by InnoDB, have properties that are collectively known by the acronym ACID, for atomicity, consistency, isolation, and durability.

InnoDB associates every transaction with a transaction ID (the TRX_ID field is part of every single row). That transaction ID is only needed for operations that might perform writes or locks, like SELECT ... FOR UPDATE. The transaction ID is also used to create a read view, a snapshot of the database, as part of the MVCC mechanism. MVCC keeps old versions of the data around while the transaction is running, for example to be able to roll back; these versions are also used to provide consistent reads. Even if it is just a single long-running select, because we started a transaction, InnoDB has to keep track of these changes. There are other internal processes happening when we explicitly start a transaction, which might not be necessary for a simple select query.

This is one of the reasons why read-only transactions were developed: to avoid these side effects.

When autocommit is enabled and we do not start transactions explicitly, insert/delete/update operations are still handled inside MySQL as transactions, but InnoDB tries to detect non-locking select queries, which are handled like read-only transactions.

I was curious whether there are any differences or improvements between versions, so I re-ran my tests on multiple MySQL versions.

Testing different versions

I have tested:

  • MySQL 5.6
  • Percona Server 5.6
  • MySQL 5.7
  • Percona Server 5.7
  • MySQL 8.0
  • Percona Server 8.0

What can we see here? The big difference is still there in all versions, and actually I was not expecting it to disappear, because from the application's point of view the round trips to the database server are still necessary.

We knew there is a regression in newer versions of MySQL regarding the performance of point lookups. Mark Callaghan and many other people have already blogged about this, but as we can see the regression is quite big. I am already working on another blog post where I try to dig deeper into what is causing this degradation.

Newer MySQL server versions are more optimized for highly concurrent workloads and can handle more threads at the same time than previous versions. There are many detailed blog posts in this area as well.

We can also see that, for these simple queries, starting read-only transactions is actually a little bit slower than starting normal transactions. The difference is between 1% and 4%, and in MySQL 8.0 it is around 1%. That is a very small difference, but it is consistent across all the tests.

Conclusion

If you are running every single select query in a transaction, your throughput can definitely be lower than without transactions. If your application requires higher throughput, you should not use explicit transactions for every select query.

How to fix error when MySQL client fails to load SQL file with Blob data

In one of my latest database restore jobs, I was helping a customer with issues related to mysqlbinlog, and I wanted to share the experience here. In case you didn't know, mysql is a simple SQL shell with input line editing capabilities, while mysqlbinlog is a utility for processing the binary logs of a MySQL server. In this case the server was MariaDB, but the utilities are the same. The database version was 10.1.38-MariaDB.

So, why use mysqlbinlog?

There are many reasons for using mysqlbinlog to process binary log files, but in this case, it was used for point-in-time recovery.

Let’s say you have an erroneous transaction that you run at 3:05 p.m. and your last full backup was run at 12 p.m. To be able to restore your database up to 3:05 p.m., you will need to restore the full backup that you took at 12 p.m. and then apply the events from your binary logs up to the time before you ran the erroneous transaction. This procedure is only possible if you still have the binary logs generated between 12 p.m. and 3:05 p.m. on the server. Applying the events to the database can be done by using MySQL client.

Let’s say you have restored the full backup and it’s time to apply the missing events after 12 p.m., but before the erroneous transaction. Using mysqlbinlog, you can parse the binlog and confirm the position that is just before your erroneous transaction. After you find the position, your command to generate SQL from the binary log will look something like this:

mysqlbinlog mysql-bin.000072 --stop-position=16208282 --result-file=binlog72.sql
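If you still need to find that stop position, one way (a sketch; the file name follows the example above) is to decode the binary log into readable statements and look for the offending event together with its "# at <position>" marker:

mysqlbinlog --base64-output=decode-rows --verbose mysql-bin.000072 | less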

Now that you have the SQL file that will apply all missing events since your last backup, you will want to run it on the MySQL server. We use the mysql client for this. Typically, it is something like:

mysql -uadmin -p < binlog72.sql

But in our case, there were errors after running this command. The error is shown below:

Enter **************** at line 66848: ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected. Query: 'INSERT into pythian_msq(ID, KEY, REGION, CREATED_DATE, PRIORITY, BYTES) values ('b67d8bd1-f8f7-8ffb-d2a4-88da8cf14b21', 'd4d0a754-019b-39af-b568-2f2bf93b2845', '2.8.0', 1553862137605, NULL, _binary '??.

Obviously, something was wrong with the binary data, as the error message was saying that --binary-mode is not enabled. The other part of the message was also confusing: "unless mysql is run in non-interactive mode". In our case, per my understanding, we were running mysql in non-interactive mode. Still, the error message was not easy to understand.

ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode

From the manual on --binary-mode:


By default, ASCII ‘\0’ is disallowed and ‘\r\n’ is translated to ‘\n’. This switch turns off both features, and also turns off parsing of all client commands except \C and DELIMITER, in non-interactive mode (for input piped to MySQL or loaded using the ‘source’ command). This is necessary when processing output from mysqlbinlog that may contain blobs.

The table that was producing the error has the following definition:

CREATE TABLE `pythian_msq` (
`ID` char(36) COLLATE utf8_unicode_ci NOT NULL,
`KEY` char(36) COLLATE utf8_unicode_ci NOT NULL,
`CREATED_DATE` bigint(20) NOT NULL,
`PRIORITY` int(11) DEFAULT NULL,
`SEQUENCE` bigint(20) NOT NULL AUTO_INCREMENT,
`BYTES` blob,
`REGION` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`KEY`,`ID`,`REGION`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

The column that was failing in the insert is defined as blob. Based on some quick research and the error message, the next command we used was with --binary-mode=1:

mysql -uadmin -p --binary-mode=1 < binlog72.sql

Again, there was an error when trying to execute the SQL file with the mysql client. This time the error shows bad SQL syntax:

Enter **************** 1064 (42000) at line 66849: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'VJKF3BY7', 'solution-center', 0, NULL, 'idPrefixPersist', '44a6a325-d17d-4ce9-b7' at line 1

In another attempt at the same restore, we piped the output from mysqlbinlog directly to the mysql client:

mysqlbinlog mysql-bin.000072 --stop-position=16208282 | mysql -uadmin -p --binary-mode=1

After some more tries, we were still not able to make this SQL file run. We checked whether the binary log was corrupted, in case there was some corruption in the blob column, but that was not the case. Our next try was to source the file after connecting with the mysql client:

mysql -uadmin -p
MariaDB [test]> source binlog72.sql

The server starts processing the events from the SQL file and then it errors again:
......
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
ERROR at line 66848 in file: 'binlog72.sql': ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected. Query: 'INSERT into pythian_msq(ID, KEY, REGION, CREATED_DATE, PRIORITY, BYTES) values ('b67d8bd1-f8f7-8ffb-d2a4-88da8cf14b21', 'd4d0a754-019b-39af-b568-2f2bf93b2845', '2.8.0', 1553862137605, NULL, _binary '??.

In between each of these tests, we had to restore the latest backup and try again. Our next attempt was to start the mysql client with --binary-mode=1 and then source the SQL file. The commands are:

mysql -uadmin -p --binary-mode=1
MariaDB [test]> source binlog72.sql

With the last command, the database did not produce any errors and we were able to restore it up to the point in time just before the erroneous transaction was run.

Conclusion

While most MySQL tools work with no major issues and are very straightforward for most DBAs, we may run into a use case that requires a more careful review of what is wrong and where the errors come from. In most cases the error gives some information about what is wrong and we can simply follow the solution. In other cases the error may be misleading or verbose.

The only way we found to make the mysql client load this blob data was to generate the SQL file using mysqlbinlog and then source that file from a mysql client started with --binary-mode=1. I hope this will save you some time if you run into the same issue while doing PITR for your database.

Debian 10 released with MariaDB 10.3

The Debian project announced their 10th release, code name Buster, on July 6th 2019. Debian 10 ships with MariaDB 10.3 and Galera. There is no separate MariaDB Connector C package, but instead MariaDB 10.3 includes MariaDB Connector C 3.0. Like most other popular Linux distributions, Debian prefers MariaDB over the Oracle owned alternative and this […]

The post Debian 10 released with MariaDB 10.3 appeared first on MariaDB.org.

Hibernate Tip: How does Hibernate’s native ID generator work

The post Hibernate Tip: How does Hibernate’s native ID generator work appeared first on Thoughts on Java.

Hibernate Tips is a series of posts in which I describe a quick and easy solution for common Hibernate questions. If you have a question for a future Hibernate Tip, please post a comment below.

 

Question:

One of the readers of my article about using Hibernate with a MySQL database asked the following question:

What is the difference between the following two recommended approaches concerning the primary keys?

@GeneratedValue(strategy = GenerationType.IDENTITY)

and

@GeneratedValue(generator = "native")
@GenericGenerator(name = "native", strategy = "native")

 

Solution:

As so often in software development, the answer to that question is: “It depends …”. In this case, it depends on the Dialect that you configured in your persistence.xml.

 

The native strategy

When you use a @GenericGenerator that references the native strategy, Hibernate uses the strategy natively supported by the configured Dialect. You can find the corresponding code in the getNativeIdentifierGeneratorStrategy method of the Dialect class. Here is the code that's used in Hibernate 5.4:

public String getNativeIdentifierGeneratorStrategy() {
    if ( getIdentityColumnSupport().supportsIdentityColumns() ) {
        return "identity";
    }
    else {
        return "sequence";
    }
}

For all commonly used databases, except MySQL, this method returns the String “sequence”. If you’re using a MySQL dialect, it returns “identity”.

 

Using the native strategy with different dialects

Let’s use the following, simple entity with a MySQL and a PostgreSQL database.

@Entity
public class Author {

    @Id
    @GeneratedValue(generator = "native")
    @GenericGenerator(name = "native", strategy = "native")
    private Long id;

    @Version
    private int version;

    private String firstName;
    private String lastName;

    ...
}

When you persist a new Author entity using the PostgreSQL dialect, you can see that Hibernate uses a database sequence (here called native) to generate the primary key value. In the next step, it inserts the new record.

14:03:27,709 DEBUG [org.hibernate.SQL] - select nextval ('native')
14:03:27,742 INFO  [org.thoughts.on.java.model.TestIdentityStrategy] - After persist
14:03:27,758 DEBUG [org.hibernate.SQL] - insert into Author (firstName, lastName, version, id) values (?, ?, ?, ?)

If you use a MySQL database instead, you can see that Hibernate utilizes an auto-incremented column instead.

14:05:15,739 DEBUG [org.hibernate.SQL] - insert into Author (firstName, lastName, version) values (?, ?, ?)
14:05:15,760 DEBUG [org.hibernate.id.IdentifierGeneratorHelper] - Natively generated identity: 1

 

The differences between native and identity strategy

So, as long as you use both mappings with a MySQL database, the result will be the same. But there are still a few differences:

  1. The behavior of the native strategy changes if your database dialect reports a different strategy as its natively supported one. That could happen because you now use a different database dialect or because the internal implementation of the dialect changed.
  2. The @GenericGenerator mapping is much harder to read because it depends on the implementation of your database dialect.
  3. The @GenericGenerator annotation is Hibernate-specific. So, you can’t use this mapping with any other JPA implementation.

 

Mapping Recommendations

If you’ve read some of my other tutorials, you can probably guess my preferred mapping. I strongly recommend using the IDENTITY mapping instead of the native one. There are multiple reasons for that. The most obvious ones are the better readability and better portability to other JPA implementations.
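For example, applied to the Author entity shown earlier, the explicit mapping is simply the first option from the question above:

@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;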

But you also need to keep in mind that the generation of primary keys can have a significant impact on the performance of your application. I therefore want to define the strategy explicitly and not rely on any internal implementation.

I don’t see the adaptability of the native strategy to different database dialects as a huge advantage. For almost all dialects, Hibernate uses the SEQUENCE strategy anyways. And if you need to support MySQL databases, you can easily override the primary key strategy in your mapping file.



 
Hibernate Tips Book
Get more recipes like this one in my new book Hibernate Tips: More than 70 solutions to common Hibernate problems.

It gives you more than 70 ready-to-use recipes for topics like basic and advanced mappings, logging, Java 8 support, caching and statically and dynamically defined queries.

Get it now as a paperback, ebook or PDF.

The post Hibernate Tip: How does Hibernate’s native ID generator work appeared first on Thoughts on Java.

SQL Views Example | Views In SQL Tutorial Explained In Detail

SQL Views Example | Views In SQL Tutorial is today's topic. An SQL view is a kind of virtual table that has rows and columns just like a real table. A view can contain all the rows of a particular table or only selected rows based on a certain condition. In Structured Query Language, a view is a virtual table based on the result set of an SQL statement.

SQL VIEWS

An SQL view contains rows and columns, just like a real table. The fields in a view are fields from one or more real tables in the database. You can add SQL functions, WHERE clauses, and JOINs to a view and present the data as if it were coming from a single table.

Let's learn how to create, delete, and update views.

Consider two tables:

Student:

ID  NAME   CITY
1   Shubh  Kolkata
2   Karan  Allahabad
3   Suraj  Kota
4   Akash  Vizag

 

Marks:

ID  NAME   MARKS  Age
1   Shubh  90     21
2   Rohit  91     21
3   Suraj  92     22
4   Akash  93     22

 

#CREATING VIEWS

Let’s create views using single tables or multiple tables.

#SYNTAX

Create View View_name AS
Select column(s)
From table_name
Where condition;

#PARAMETERS:
  1. View_name: Name of the View.
  2. Column(s): Name of the column.
  3. Table_name: Name of the table.
  4. Where: This is used for specifying some conditions.

Let’s clear this with the help of an example.

#QUERY: (Creating View With a single table)

Create view Details AS
Select name, city
From Student
Where id>=1;

#OUTPUT

Type the following query to get the output.

Select * from Details;

See the below output.

NAME   CITY
Shubh  Kolkata
Karan  Allahabad
Suraj  Kota
Akash  Vizag

 

#QUERY: (Creating Views from Multiple Tables)

Create View Marks AS
Select Student.Name, Student.City, Marks.Marks
From Student, Marks
Where Student.Name=Marks.Name;

#OUTPUT

Fire the following query.

Select * from Marks;

See the output.

NAME   CITY     Marks
Shubh  Kolkata  90
Suraj  Kota     92
Akash  Vizag    93

 

So, until now, we have learned how to create views.

Let’s Learn how to delete views.

#DELETING VIEWS

Views can be removed with the help of DROP Statement.

#SYNTAX

Drop VIEW view_name;

See the following parameters.

#PARAMETERS

View_name: This is the name of the View which you want to delete.

#QUERY

Drop VIEW Details;

So the view Details, which we created above, will be deleted.

So, we have learned how to delete views.

Let’s learn how to update views.

#UPDATING VIEWS

Views can be updated with the help of Create or replace statement.

#SYNTAX:

Create or Replace View View_Name AS
Select column(s)
From table_name
Where condition;

#PARAMETERS:
  1. View_name: Name of the View.
  2. Column(s): Name of the column.
  3. Table_name: Name of the table.
  4. Where: This is used for specifying some conditions.

Let’s clear this with the help of an example.

Suppose we want to add the student's Age to the Marks view; then the following query has to be written.

#QUERY

Create or replace View MARKS AS
Select Student.Name, Student.City, Marks.Marks, Marks.Age
From Student, Marks
Where Student.Name=Marks.Name;

#OUTPUT

Select * from MARKS;

See the output.

NAME   CITY     Marks  Age
Shubh  Kolkata  90     21
Suraj  Kota     92     22
Akash  Vizag    93     22

 

#Inserting a row in a view.

We can insert values into a view just as we insert them into a table.

#SYNTAX

Insert into view_name (column(s)) VALUES (values);

#PARAMETERS

View_name: This is the name of the view into which you want to insert.

#QUERY

INSERT INTO DETAILS (Name, City) VALUES ('Aman', 'Patna');

#OUTPUT

Select * from DETAILS;

NAME   CITY
Shubh  Kolkata
Karan  Allahabad
Suraj  Kota
Akash  Vizag
Aman   Patna

 

Here we used the Details view which was previously created in the CREATE VIEW query.

#Deleting a row from a view

We can delete rows from a view using the DELETE statement.

#SYNTAX

DELETE FROM view_name where Condition;

#PARAMETERS

View_name: This is the name of the view from which you want to delete rows.

#QUERY:

Delete from details where name="Aman";

#OUTPUT:

Fire the following query.

Select * from Details;

NAME   CITY
Shubh  Kolkata
Karan  Allahabad
Suraj  Kota
Akash  Vizag

 

# SQL Views Keynote:

One should keep the following points in mind before creating or updating views.

  1. Group by Clause and Order By clause should not be included with the create statement.
  2. DISTINCT keyword should not be present with the select statement.
  3. The view should not contain any NULL values.
  4. The views should not be created with Nested Queries or Complex Queries.
  5. Views should not be created with multiple views.

Finally, SQL Views Example | Views In SQL Tutorial Explained In Detail is over.

The post SQL Views Example | Views In SQL Tutorial Explained In Detail appeared first on AppDividend.

Before You Stop Using MySQL, Read This

Author: Robert Agar

An organization’s databases contain information that is essential for its survival. This may encompass sensitive customer data, employee records, online sales catalogs, and intellectual capital to name just a few uses of a database. The responsibility of keeping these vital resources available falls to the enterprise’s team of DBAs. Failure to properly maintain these systems can lead to serious negative consequences that can cripple a company’s ability to do business.

Almost every application of any real utility is backed by a database, which in turn is administered through a database management system (DBMS). MySQL is an extremely widely-used DBMS whose popularity is only exceeded by that of Oracle. Countless business-critical applications rely on the availability and performance of MySQL databases. A common characteristic all of these databases share is the need for a DBA to maintain the system and tune it to optimal levels of performance.

The Challenge of Finding Skilled DBAs for MySQL

The explosion of eCommerce and the cloud computing paradigm have often made it necessary as well as relatively easy for any size organization to put together a database for a variety of purposes. MySQL makes perfect sense as the DBMS of choice. It’s a popular, open-source solution that is supported by many tools and online resources. MySQL databases can be housed on all of the major cloud platforms, making it possible for companies to make the most of their current provider.

It can be challenging to find a team of DBAs with the necessary level of experience and proper skillset required to optimize and maintain a MySQL system. Changing business requirements may necessitate multiple new database implementations without a commensurate increase in the staff charged with maintaining them.

Perhaps managing the database was simply an added function which a system administrator learned through on-the-job training. It could simply be that your current DBAs need a little help in navigating some of the complexities of the MySQL system for which they are responsible. In some cases, this has led organizations to consider switching from MySQL to a different DBMS.

Don’t Give Up on MySQL

The temptation to give up on MySQL became too much for the crew at GitLab who announced they are removing support for MySQL. The change will be implemented later this month with the release of GitLab 12.1. A number of reasons have been given for moving away from MySQL. Comments from GitLab developer Yorick Peterse indicated that one of them was the small size of the DBA team which was stretched thin supporting multiple databases.

Neither the size of your DBA team nor their relative lack of MySQL skills should be the determining factor in moving away from the platform. When viewing the big picture, you are likely to find that the better course of action is to provide your team with additional training and ensure they have the best tools available. A tool that will improve the life of any MySQL DBA is SQL Diagnostic Manager for MySQL (formerly Monyog) from Webyog. It can help both highly skilled and struggling DBAs perform their jobs more effectively and efficiently.

Monitoring to the Rescue with SQL Diagnostic Manager for MySQL

Monitoring the performance of your MySQL system is the first step in identifying potential areas prime for optimization. It can also be a powerful tool that enables you to take proactive corrective measures when a problem arises to limit or prevent any impact to your users. Lack of proper monitoring leaves you in the dark regarding the inner-workings of your database. This is not where you want to be when dealing with your company’s valuable digital assets.

SQL Diagnostic Manager for MySQL (formerly Monyog) offers an agentless monitoring tool that is fully customizable. It enables you to gain a deeper insight into your MySQL databases and exposes the most likely targets for modification to improve performance. You can use the tool to optimize queries in real-time. Some specific features of the application include the ability to:

  • Monitor in real-time to view each query and find out what’s causing that sudden spike in your database usage. This knowledge enables you to address the issue before it causes major impacts to the system and its users.

  • Track MySQL configuration changes to pinpoint the source of performance issues. Use the app to track and compare changes and quickly identify the reason that your system is experiencing problems.

  • Generate alerts and automatically kill long-running queries if you choose. You can also simply have an alert sent when a query exceeds an allowable period of time. These alerts can be useful in addressing excessive CPU usage.

  • Create custom dashboards and charts to facilitate the monitoring process. Quickly expand a section of the chart for a detailed view of the queries responsible for the system’s behavior.

This comprehensive monitoring application for MySQL databases has over 600 monitors that can be configured to provide the custom platform you need to observe the details impacting their performance. Before making a rash move to another database platform, see how an excellent monitoring tool can increase the skill level and decrease the pressure on your DBA team.

The post Before You Stop Using MySQL, Read This appeared first on Monyog Blog.

SLEEP thread causing "Waiting for table metadata lock"

We may face scenarios where our DB connection threads get locked and we need to find the culprit thread ID that holds the lock so we can kill it and release the connections.
Here I am going to focus on one scenario where DB connection threads are locked in the state "Waiting for table metadata lock" and, when we look at the processlist, we can't find any thread that is running and using the same table, i.e., all threads are in the SLEEP state except the locked ones.

Such as : 
mysql [localhost:5726] {msandbox} (world) > show processlist\G
*************************** 1. row ***************************
           Id: 5
         User: rsandbox
         Host: localhost:60624
           db: NULL
      Command: Binlog Dump
         Time: 111519
        State: Master has sent all binlog to slave; waiting for more updates
         Info: NULL
    Rows_sent: 0
Rows_examined: 0
*************************** 2. row ***************************
           Id: 9
         User: msandbox
         Host: localhost
           db: NULL
      Command: Sleep
         Time: 1901
        State:
         Info: NULL
    Rows_sent: 1
Rows_examined: 0
*************************** 3. row ***************************
           Id: 10
         User: msandbox
         Host: localhost
           db: world
      Command: Query
         Time: 282
        State: Waiting for table metadata lock
         Info: alter table city engine=innodb
    Rows_sent: 0
Rows_examined: 0
*************************** 4. row ***************************
           Id: 13
         User: msandbox
         Host: localhost
           db: NULL
      Command: Sleep
         Time: 645
        State:
         Info: NULL
    Rows_sent: 0
Rows_examined: 0
*************************** 5. row ***************************
           Id: 14
         User: msandbox
         Host: localhost
           db: world
      Command: Query
         Time: 0
        State: starting
         Info: show processlist
    Rows_sent: 0
Rows_examined: 0
*************************** 6. row ***************************
           Id: 17
         User: msandbox
         Host: localhost
           db: world
      Command: Sleep
         Time: 285
        State:
         Info: NULL
    Rows_sent: 0
Rows_examined: 4079
6 rows in set (0.00 sec)

mysql [localhost:5726] {msandbox} (world) >
Below I will explain how we can troubleshoot this issue, release the locked thread, and also get the transaction details of the thread that is in the SLEEP state and is causing the "Waiting for table metadata lock" for the other threads.


Above we can see that thread ID 10 is locked in the state "Waiting for table metadata lock", but in the processlist we can't see any other active thread running against the city table. All other threads are in the SLEEP state.

We can use the following approaches to find the culprit thread ID that is blocking thread ID 10.
A) Using the InnoDB engine status -- Once we run "show engine innodb status", we can see there is one active transaction/thread present, and it has "24 lock struct(s), heap size 3520, 4102 row lock(s), undo log entries 1".
mysql [localhost:5726] {msandbox} (world) > show engine innodb status\G
* * * *
------------
TRANSACTIONS
------------
Trx id counter 1876
Purge done for trx's n:o < 1875 undo n:o < 0 state: running but idle
History list length 24
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 421869282924072, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 421869282922944, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 421869282921816, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 1875, ACTIVE 137 sec
24 lock struct(s), heap size 3520, 4102 row lock(s), undo log entries 1
MySQL thread id 17, OS thread handle 140394578265856, query id 5478 localhost msandbox
Trx read view will not see trx with id >= 1875, sees < 1875
* * *
mysql [localhost:5726] {msandbox} (world) >
B) Using the information_schema database tables -- We can also find the culprit thread ID using the INFORMATION_SCHEMA.INNODB_TRX table.

mysql [localhost:5726] {msandbox} (information_schema) > select ps.id,ps.user,ps.host,ps.db,ps.command from information_schema.processlist ps join information_schema.INNODB_TRX itx on itx.trx_mysql_thread_id=ps.id and ps.command='Sleep';
+----+----------+-----------+-------+---------+
| id | user     | host      | db    | command |
+----+----------+-----------+-------+---------+
| 17 | msandbox | localhost | world | Sleep   |
+----+----------+-----------+-------+---------+
1 row in set (0.00 sec)
By running the above query, we can see that thread ID 17 is active according to the INFORMATION_SCHEMA.INNODB_TRX table and is currently in the SLEEP state in the processlist.

C) Using the information_schema and performance_schema database tables -- Here it is assumed that performance_schema is enabled.

-- By running the query below we can check the currently active/open transaction as well as the list of queries it has executed. This helps the developer troubleshoot the application effectively, since they get the culprit thread details together with the list of queries executed by the open transaction.
mysql [localhost:5726] {msandbox} ((none)) > SELECT
    pslst.id 'PROCESS ID'
    ,itrx.trx_id 'TRX_ID'
    ,pesh.event_name 'EVENT NAME'
    ,pesh.sql_text 'SQL'
    ,th.processlist_user 'USER'
    ,th.processlist_host 'HOST'
FROM
    information_schema.innodb_trx itrx
JOIN information_schema.processlist pslst ON
    itrx.trx_mysql_thread_id = pslst.id
JOIN performance_schema.threads th ON
    th.processlist_id = itrx.trx_mysql_thread_id
JOIN performance_schema.events_statements_history pesh ON
    pesh.thread_id = th.thread_id
WHERE
    itrx.trx_started < CURRENT_TIME - INTERVAL 100 SECOND
    AND pslst.USER != 'SYSTEM_USER'
GROUP BY
    pesh.sql_text
ORDER BY
    pesh.EVENT_ID\G
*************************** 1. row ***************************
PROCESS ID: 17
    TRX_ID: 1875
EVENT NAME: statement/sql/select
       SQL: SELECT DATABASE()
      USER: msandbox
      HOST: localhost
*************************** 2. row ***************************
PROCESS ID: 17
    TRX_ID: 1875
EVENT NAME: statement/com/Init DB
       SQL: NULL
      USER: msandbox
      HOST: localhost
*************************** 3. row ***************************
PROCESS ID: 17
    TRX_ID: 1875
EVENT NAME: statement/sql/show_databases
       SQL: show databases
      USER: msandbox
      HOST: localhost
*************************** 4. row ***************************
PROCESS ID: 17
    TRX_ID: 1875
EVENT NAME: statement/sql/show_tables
       SQL: show tables
      USER: msandbox
      HOST: localhost
*************************** 5. row ***************************
PROCESS ID: 17
    TRX_ID: 1875
EVENT NAME: statement/sql/begin
       SQL: start transaction
      USER: msandbox
      HOST: localhost
*************************** 6. row ***************************
PROCESS ID: 17
    TRX_ID: 1875
EVENT NAME: statement/sql/select
       SQL: select * from city limit 2
      USER: msandbox
      HOST: localhost
*************************** 7. row ***************************
PROCESS ID: 17
    TRX_ID: 1875
EVENT NAME: statement/sql/update
       SQL: update city set Population=1780001 where Name='Kabul'
      USER: msandbox
      HOST: localhost
7 rows in set (0.01 sec)

mysql [localhost:5726] {msandbox} ((none)) > 





Above we can see that thread ID 17 has an active/open transaction (trx_id 1875) which was started but not yet committed, and this thread is currently in the SLEEP state in the processlist.

So above we saw a couple of ways to find and confirm the culprit thread ID that is causing the "Waiting for table metadata lock" issue.
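As a side note (not part of the walkthrough above): on MySQL 5.7 and later, if the performance_schema metadata lock instrumentation (wait/lock/metadata/sql/mdl) is enabled, the sys schema can also show which session holds the blocking metadata lock directly, for example:

mysql> SELECT * FROM sys.schema_table_lock_waits\G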

Now that we have the culprit thread ID details from above, we can KILL the culprit thread and release the locked thread.
mysql [localhost:5726] {msandbox} ((none)) > kill 17;
Query OK, 0 rows affected (0.01 sec)

mysql [localhost:5726] {msandbox} (world) > alter table city engine=innodb;
Query OK, 0 rows affected (26 min 8.17 sec)
Records: 0  Duplicates: 0  Warnings: 0

I hope this blog post helps you troubleshoot the "Waiting for table metadata lock" issue when all other threads are in the SLEEP state in the processlist.


Reasoning around the recent conferences in 2019

During the last conferences, I had the chance to talk a lot with many colleagues. Some of them, like me, feel we have lost ourselves; not totally, but a bit.

I started to think about why, and this is my reasoning. I am not saying it is right, but I think it is not far from the truth.

Let us start by mentioning a few events in the 2019 schedule: FOSDEM in February, Percona Live US at the end of May, DataOps Barcelona in mid-June, Oracle OpenWorld in September, Percona Live Europe at the end of September/October, plus an undefined number of secondary events and MariaDB roadshows.

Where is the MySQL (and related) conference?

Do not get me wrong, we talk about MySQL a lot during each one of the above (or MariaDB, tomato/tomahto). But it seems to me that, in a subtle way, the community has lost its conference.

There is nothing wrong with each company trying to get the most out of its investment, implicitly steering the show to get the best return for its own business.

But… as a community, are we getting what we need?

If we take a look at the past, we see that the MySQL conference in Santa Clara was a real milestone. It is true that MySQL was a less mature product, and as such a lot of innovation and discussion was presented every year. But it was not only because of that.

In my opinion, the singularity of the event made it a more precious moment, with a lot of focus and willingness to share.

Not only that: the fact that we did not have dozens of events brought more people together, as speakers, experts, and attendees. This generated more discussions and exchanges, resulting in ideas that we as a community had the chance to develop in the following months.

Here is another very important factor: we, as part of the community, as engineers, respect each other, and there is no limit to how much we share knowledge and help each other. It does not matter if one of us makes a mistake; next time he will get it right, and we have always helped each other move in that direction.

Having the chance to discuss face to face during the event was great; having the chance to continue the discussion after the event was even better. In theory, having the chance to do that multiple times a year should be the best thing. But… there is a but: all these small (or smaller) events are company events, not community ones. I am sorry to say it, but that is the reality. What it means is that the interactions are often limited, restricted, and penalized by the competition between the companies.

Mark Callaghan mentioned some time ago, in a short post, that it was sad to see so many bright engineers scattered across companies, often unable to collaborate to make things better in a common effort, following a shared path. (Mark, that was my interpretation; if that was not your intention, my apologies, but even so, I think it is true.)

 

This is sad indeed, and a waste of energy. It is also very sad because we know each other, directly or indirectly. We bear no resentment toward each other, and we like to discuss, share, help, and get help.

Damn, we love to work together! No matter which company we work for, I am sure Oracle developers would love to work with the MariaDB folks, as well as Percona's, and vice versa.

I am not naïve; I understand the need for healthy competition and the need for "some" differences. But maybe we are not considering the long term. We are a small community, a very small fishbowl; if we do not help each other we will lose in the long term. This is already happening, with Postgres growing and evolving day by day before our blind eyes.

Looking at our small bowl, what I see is fragmentation. I see companies trying to find new magic words, or magic trends, to package the same old shit, just to survive in the market.

That is not innovation, that is not evolution; that is giving up on what is the most important concept of an open source community.

Going back to the number of events and their relevance: this increasing number of events is the natural consequence of the lack of coordination and proper interaction. I do not care how many we have, as long as they bring us food for the brain.

As long as they are useful for making MySQL (and related projects) better.

But if the scope is only promoting the company's business, that is OK… it is just not a community event, and in that case we need to recognize and accept that we no longer have an open conference. And we need one; well, two: one for the Americas, and another one for EurAsia (get used to that geographic definition, please; that is where the market is evolving).

We need a strong, unified, focused community. We need all the actors to collaborate on a common strategy, not to have chicken fights over crumbs. We need a steering committee, and the companies' commitment to follow the committee's recommendations.

We need that, or we will soon be forgotten because we have become obsolete.

That is what I think… please let me know your ideas. We are a community... let us discuss how to do things better!



MySQL 8.0 Shell Utilities – How can you use them for good ?

Two important MySQL 8.0 Shell utilities we use regularly are the upgrade checker utility and the JSON import utility. The upgrade checker utility simplifies the pre-upgrade compatibility audit (checking whether MySQL server instances are ready for upgrade); we have blogged about the MySQL upgrade checker utility here. The upgrade checker utility does not support checking MySQL Server instances at a version earlier than MySQL 5.7. From MySQL Shell 8.0.16, the upgrade checker utility can check the configuration file (my.cnf or my.ini) for the server instance. The utility checks for any system variables that are defined in the configuration file but have been removed in the target MySQL Server release, and also for any system variables that are not defined in the configuration file and will have a different default value in the target MySQL Server release. The upgrade checker utility can generate its output in text format, which is the default, or in JSON format. MySQL Shell's JSON import utility (first introduced in MySQL Shell 8.0.13) imports JSON documents from a file (or FIFO special file) or standard input into a MySQL Server collection or relational table. To access the utilities from within MySQL Shell, use the util global object, which provides the following functions:

  • checkForServerUpgrade() – An upgrade checker utility that enables you to verify whether MySQL server instances are ready for upgrade.
  • importJSON() – A JSON import utility that enables you to import JSON documents to a MySQL Server collection or table.
MySQL Upgrade Checker Utility

The following command checks the MySQL server at URI root@11.147.56.14:3306 for upgrade to the first MySQL Server 8.0 GA status release (8.0.11). The user password and the configuration file path are supplied as part of the options dictionary, and the output is returned in the default text format:

mysqlsh> util.checkForServerUpgrade('root@11.147.56.14:3306', {"password":"UpgradeMySQL@1867", "targetVersion":"8.0.11", "configPath":"/home/mysql/my.cnf"})

The following command checks the same MySQL server for upgrade to the MySQL Server release number that matches the current MySQL Shell release number (the default), and returns JSON output for further processing:

mysqlsh> util.checkForServerUpgrade('user@11.147.56.14:3306', {"password":"UpgradeMySQL@1867", "outputFormat":"JSON", "configPath":"/home/mysql/my.cnf"})

From MySQL 8.0.13, you can start the upgrade checker utility from the command line using the mysqlsh command interface:

mysqlsh -- util checkForServerUpgrade root@11.147.56.14:3306 --target-version=8.0.15 --output-format=JSON --config-path=/home/mysql/my.cnf

Want help using the MySQL upgrade checker? Call util.help():

mysqlsh> util.help("checkForServerUpgrade")

JSON Import Utility

MySQL Shell 8.0.13 introduced the JSON import utility, and from MySQL Shell 8.0.14 the import utility can process BSON (binary JSON) data types that are represented in JSON documents. The data types used in BSON documents are not all natively supported by JSON, but they can be represented using extensions to the JSON format. The import utility can process documents that use JSON extensions to represent BSON data types, convert them to an identical or compatible MySQL representation, and import the data values using that representation. The resulting converted data values can be used in expressions and indexes, and manipulated by SQL statements and X DevAPI functions. The JSON import utility requires an existing X Protocol connection to the server; it cannot operate over a classic MySQL protocol connection.
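
As a rough sketch of the BSON handling (the file path is made up here, and we are relying on the convertBsonTypes option documented for MySQL Shell 8.0.14 and later):

mysql-js> util.importJson("/JSON/mongo_orders.json", {schema: "business", collection: "orders_from_mongo", convertBsonTypes: true})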

The following examples import the JSON documents in the file /JSON/customers.json to the customer_master collection in the business database:

mysql-js> util.importJson("/JSON/customers.json", {schema: "business", collection: "customer_master"})

mysql-py> util.import_json("/JSON/customers.json", {"schema": "business", "collection": "customer_master"})

The following example has no options specified, so the dictionary is omitted. billing is the active schema for the MySQL Shell session. The utility therefore imports the JSON documents in the file /billing/billing.json to a collection named billing in the billing database:

mysql-js> \use billing
mysql-js> util.importJson("/billing/billing.json")

Conclusion

Our consultants sometimes spend several weeks (onsite and remote) defining the technical scope of major upgrades between versions to avoid unpleasant surprises later. This task is very expensive for the customer and exhausting for the consultants, who have to spend weeks on MySQL version dependency due diligence, so we are thankful for the MySQL Shell upgrade checker utility. The JSON import utility was a much-awaited feature in MySQL; we can now do JSON data loading seamlessly and far more efficiently, in much less time.

The post MySQL 8.0 Shell Utilities – How can you use them for good? appeared first on MySQL Consulting, Support and Remote DBA Services.

MySQL Aggregate Functions

This tutorial explains the use of MySQL aggregate functions like AVG, COUNT, SUM, MAX, MIN with the help of simple examples. Aggregate functions are a bunch of methods that operate on a set of values. They can do calculations for us and then return one final value. For example, you may want to compute the sum of the data values in a given field. The following are the aggregate functions that we are covering in this tutorial. 1. COUNT function 2. MIN function 3. MAX function 4. SUM function 5. AVG function MySQL Aggregate Functions with Examples Before we go through
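
As a quick sketch of those five functions in action (the sales table and its amount column are invented for the example):

mysql> SELECT COUNT(*)    AS rows_counted,
    ->        MIN(amount) AS smallest,
    ->        MAX(amount) AS largest,
    ->        SUM(amount) AS total,
    ->        AVG(amount) AS average
    ->   FROM sales;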

The post MySQL Aggregate Functions appeared first on Learn Programming and Software Testing.

MySQL OPTIMIZE TABLE Statement

This tutorial explains the MySQL OPTIMIZE TABLE statement, which defrags tables and recovers unused space. We'll describe the complete usage of this method with the help of simple examples. If your database is receiving a lot of delete and update calls, then it might lead to fragmentation in your MySQL data files. As a result, a lot of unused space goes to waste and puts a high impact on performance. So, experts recommend that you defrag your MySQL tables regularly. Hence, we'll explain today how to use MySQL OPTIMIZE TABLE to defrag tables and free up space. MySQL OPTIMIZE
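
A minimal sketch (the sales table is hypothetical). For InnoDB tables, OPTIMIZE TABLE is mapped to a rebuild of the table, so the output normally includes a note saying the engine does a recreate + analyze instead:

mysql> OPTIMIZE TABLE sales;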

The post MySQL OPTIMIZE TABLE Statement appeared first on Learn Programming and Software Testing.

MySQL CONCAT to Concatenate Strings

This tutorial explains MySQL CONCAT(), which is a built-in string function. It takes a variable number of strings as input and concatenates them together. We'll describe the usage of this method with the help of simple examples. MySQL string concatenation is more user-friendly than in other databases such as PostgreSQL or Oracle, which provide a string concatenation operator ("||") instead of a proper function. MS SQL Server, however, does the same job using the addition arithmetic operator (+). 1. CONCAT() Syntax 2. CONCAT() Simple Examples 4. CONCAT() with Tables Let's now go through each of the sections one by one. MySQL CONCAT()
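
A quick sketch of CONCAT(), first with literals and then against a hypothetical customers table:

mysql> SELECT CONCAT('My', 'S', 'QL');
mysql> SELECT CONCAT(first_name, ' ', last_name) AS full_name FROM customers;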

The post MySQL CONCAT to Concatenate Strings appeared first on Learn Programming and Software Testing.

MySQL DROP TABLE with Simple Examples

This tutorial explains the MySQL DROP TABLE statement, which deletes the specified tables from the in-use database. We'll describe several usages of this method with the help of simple examples. 1. DROP TABLE Syntax 2. DROP TABLE If Exists 3. DROP TABLE Matching Pattern 4. DROP TABLE Full Example Let's now read and understand each of the sections one by one. MySQL DROP TABLE statement As stated initially, DROP TABLE is a built-in MySQL statement which removes a specified table from the database. So, let's first check the details and see how to use the DROP command. But first, create
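
A minimal sketch (the table names are hypothetical); IF EXISTS avoids an error when the table is not there:

mysql> DROP TABLE orders_archive;
mysql> DROP TABLE IF EXISTS orders_archive, orders_staging;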

The post MySQL DROP TABLE with Simple Examples appeared first on Learn Programming and Software Testing.

MySQL ABS Function with Simple Examples

This tutorial explains MySQL ABS(), which is a Math/Trig function. It takes a number as input and determines its absolute (positive) value. We'll describe the usage of this method with the help of simple examples. 1. ABS() Syntax 2. ABS() Simple Examples 3. ABS() for Expression 4. ABS() with Tables Let's now go through each of the sections one by one. MySQL ABS() Function As stated initially, ABS() is a built-in MySQL function which does some math calculation to derive the absolute value of a number. In math terms, an absolute value represents the length of a number on the
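
A quick sketch of ABS() with a literal, an expression, and a hypothetical transactions table:

mysql> SELECT ABS(-17.5), ABS(3 - 10);
mysql> SELECT ABS(amount) FROM transactions WHERE amount < 0;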

The post MySQL ABS Function with Simple Examples appeared first on Learn Programming and Software Testing.

MySQL DATE_ADD Function with Simple Examples

This tutorial explains the MySQL DATE_ADD function, which adds a slice of time (in days/hours, e.g., 1 day or 10 days) to the given date. We'll describe the complete date arithmetic of this method with the help of simple examples. 1. DATE_ADD() Syntax 2. DATE_ADD() with -ve Interval 3. DATE_ADD() for Invalid Date 4. DATE_ADD() for Auto Adjustment Let's now go through each of the sections one by one. MySQL DATE_ADD() Function As stated initially, DATE_ADD() is a built-in MySQL function which adds the specified number of days to a given date. So, let's now see the details and check
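
A quick sketch of DATE_ADD(), including a negative interval (the dates are arbitrary):

mysql> SELECT DATE_ADD('2019-07-12', INTERVAL 10 DAY);
mysql> SELECT DATE_ADD('2019-07-12 10:00:00', INTERVAL -6 HOUR);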

The post MySQL DATE_ADD Function with Simple Examples appeared first on Learn Programming and Software Testing.

MySQL How do you restore tablespace

MySQL How do you restore tablespace?

This is not new information, but I have not covered it much, so I am addressing it now for those who need it.

If you lose your .ibd files... you lose your data. But if you have a copy of one available, or you are syncing from another database, you can still import it. So how do you lose a tablespace in the first place?

Here is a simple example of recovering a tablespace.



mysql> Create database demo;

mysql> use demo;

mysql> CREATE TABLE `demotable` (
    ->   `id` int(11) NOT NULL AUTO_INCREMENT,
    ->   `dts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    ->   PRIMARY KEY (`id`)
    -> ) ENGINE=InnoDB;


Now we store some data...


mysql> INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.10 sec)

mysql> INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.08 sec)

mysql> SELECT * FROM demotable;
+----+---------------------+
| id | dts                 |
+----+---------------------+
|  1 | 2019-07-12 23:31:34 |
|  2 | 2019-07-12 23:31:35 |
+----+---------------------+
2 rows in set (0.00 sec)


OK, now let's break it...


# systemctl stop mysqld
# cd /var/lib/mysql/demo/
# ls -ltr
total 80
-rw-r-----. 1 mysql mysql 114688 Jul 12 23:31 demotable.ibd
# mv demotable.ibd /tmp/

# systemctl start mysqld
# mysql demo

mysql> show tables;
+----------------+
| Tables_in_demo |
+----------------+
| demotable      |
+----------------+
1 row in set (0.00 sec)

mysql> desc demotable;
+-------+-----------+------+-----+-------------------+-----------------------------------------------+
| Field | Type      | Null | Key | Default           | Extra                                         |
+-------+-----------+------+-----+-------------------+-----------------------------------------------+
| id    | int(11)   | NO   | PRI | NULL              | auto_increment                                |
| dts   | timestamp | NO   |     | CURRENT_TIMESTAMP | DEFAULT_GENERATED on update CURRENT_TIMESTAMP |
+-------+-----------+------+-----+-------------------+-----------------------------------------------+
2 rows in set (0.01 sec)

mysql> INSERT INTO demotable (id) VALUES (NULL);
ERROR 1812 (HY000): Tablespace is missing for table `demo`.`demotable`.


The tablespace is broken and lost... Now we can recover it.


mysql> ALTER TABLE demotable DISCARD TABLESPACE;

demo]# cp /tmp/demotable.ibd .
demo]# ls -ltr
total 112
-rw-r-----. 1 root root 114688 Jul 12 23:50 demotable.ibd
demo]# chown mysql:mysql demotable.ibd
demo]# mysql demo
mysql> ALTER TABLE demotable IMPORT TABLESPACE;
ERROR 1034 (HY000): Incorrect key file for table 'demotable'; try to repair it

mysql> REPAIR TABLE demotable;
+----------------+--------+----------+---------------------------------------------------------+
| Table          | Op     | Msg_type | Msg_text                                                |
+----------------+--------+----------+---------------------------------------------------------+
| demo.demotable | repair | note     | The storage engine for the table doesn't support repair |
+----------------+--------+----------+---------------------------------------------------------+


Notice that we also got another error. This is usually tied to the space available in tmpdir, and REPAIR doesn't work for InnoDB .ibd files anyway.


mysql> select @@tmpdir;
+----------+
| @@tmpdir |
+----------+
| /tmp     |
+----------+

# vi /etc/my.cnf
tmpdir=/var/lib/mysql-files/

# systemctl restart mysqld
# mysql demo


OK, I used the mysql-files directory just as an example.
Now we can try again.


mysql> ALTER TABLE demotable IMPORT TABLESPACE;
Query OK, 0 rows affected, 1 warning (0.61 sec)

mysql>  INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.11 sec)

mysql>  SELECT * FROM demotable;
+----+---------------------+
| id | dts                 |
+----+---------------------+
|  1 | 2019-07-12 23:31:34 |
|  2 | 2019-07-12 23:31:35 |
|  3 | 2019-07-12 23:56:08 |
+----+---------------------+


OK, that worked.
Now, this is all nice and simple if you just have one table. But what about hundreds?

Automate it, of course, and use your information_schema to help.

Make a few more copies to test with.

mysql> create table demotable1 like demotable;
Query OK, 0 rows affected (0.51 sec)

mysql> create table demotable2 like demotable;
Query OK, 0 rows affected (1.04 sec)

mysql> create table demotable3 like demotable;
Query OK, 0 rows affected (0.74 sec)

mysql> create table demotable4 like demotable;
Query OK, 0 rows affected (2.21 sec)


Break them all...

demo]# mv *.ibd /tmp/


Now using your information_schema.tables table, you can build out all the commands you will need.

# vi build_discard.sql
# cat build_discard.sql
SELECT CONCAT(" ALTER TABLE ",TABLE_SCHEMA,".",TABLE_NAME," DISCARD TABLESPACE;  ") as CMD FROM information_schema.TABLES WHERE TABLE_SCHEMA='demo';

# vi build_import.sql
# cat build_import.sql
SELECT CONCAT(" ALTER TABLE ",TABLE_SCHEMA,".",TABLE_NAME," IMPORT TABLESPACE;  ") as CMD FROM information_schema.TABLES WHERE TABLE_SCHEMA='demo';



# mysql -N < build_import.sql > import_tablespace.sql
# mysql -N < build_discard.sql  | mysql demo

demo]# cp /tmp/*.ibd .
demo]# chown mysql:mysql *.ibd
# systemctl restart mysqld
# mysql demo < import_tablespace.sql
# mysql demo

mysql> INSERT INTO demotable (id) VALUES (NULL);
Query OK, 1 row affected (0.08 sec)

mysql> INSERT INTO demotable1 (id) VALUES (NULL);
Query OK, 1 row affected (0.05 sec)

mysql> INSERT INTO demotable2 (id) VALUES (NULL);
Query OK, 1 row affected (0.09 sec)

mysql> INSERT INTO demotable3 (id) VALUES (NULL);
Query OK, 1 row affected (0.37 sec)

mysql> INSERT INTO demotable4 (id) VALUES (NULL);
Query OK, 1 row affected (0.12 sec)



And it worked.


MySQL Binlogs:: How to recover

So, after this situation recently came up, I realized I had not made a post about it.

Here is the scenario: A backup was taken at midnight, using MySQL dumps per database. Then, at ten a.m. the next day, the database crashed. A series of events happened before I was called in, but they got it back up to a state where the MyISAM tables were available and the InnoDB .ibd files were missing from the tablespace.

So, option 1: restore from backup, which would get us to midnight, and we would lose hours of data. Option 2: reimport the thousands of .ibd files and keep everything. Then we had option 3: restore from backup, then apply the binlogs for the recent changes.

To make it more interesting, I was told they didn't have all of the .ibd files, and I did see some missing. I am not sure how that was possible, but it made option 2 invalid. They, of course, wanted the least data loss possible, so we went with option 3.

To do this safely, I started up another instance of MySQL on port 3307. This gave me a safe place to work while traffic still had read access to the MyISAM data on the port 3306 instance.
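
For reference, a second instance can be brought up roughly like this; this is a sketch with assumed paths, not the exact commands from this incident:

# mysqld --no-defaults --user=mysql --initialize-insecure --datadir=/var/lib/mysql_restore
# mysqld --no-defaults --user=mysql --datadir=/var/lib/mysql_restore \
    --socket=/var/lib/mysql_restore/mysql.sock --port=3307 \
    --pid-file=/var/lib/mysql_restore/mysqld.pid &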

Once all the backup dump files were uncompressed and imported into the 3307 instance, I was able to focus on the binlog files.
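
The import itself was essentially a loop over the per-database dump files, along these lines (the file names, compression, and backup path are assumptions for illustration):

for f in /backups/*.sql.gz; do
    db=$(basename "$f" .sql.gz)     # one dump file per database
    mysql --socket=/var/lib/mysql_restore/mysql.sock -e "CREATE DATABASE IF NOT EXISTS $db"
    gunzip < "$f" | mysql --socket=/var/lib/mysql_restore/mysql.sock "$db"
done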

At first this concept sounds much harder and riskier than it really is. It is actually pretty straightforward and simple.

So first you have to find the data you're after. A review of the binlog files gives you a head start on which files are relevant. In my case, somehow they had managed to reset the binlog, so the 117 file had two date ranges within it.

First, for binlog review, the following command outputs the data in a human-readable format.
mysqlbinlog --defaults-file=/root/.my.cnf  --base64-output=DECODE-ROWS  --verbose mysql-bin.000117 >   review_mysql-bin.000117.sql

*Note: Be careful running the above command. Notice that I have it dumping the output file directly into the same location as the binlog, so VALIDATE that your file name is valid. mysql-bin.000117.sql is different from mysql-bin.000117 .sql; with the second one, where there is a space before .sql, you will overwrite and lose your binlog.

Now, to save the data so it can be applied: since I had several binlogs, I collected everything into a single file, and I wanted to double-check the time ranges anyway.


mysqlbinlog --defaults-file=/root/.my.cnf --start-datetime="2019-07-09 00:00:00" --stop-datetime="2019-07-10 00:00:00" mysql-bin.000117 > binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf mysql-bin.000118 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf mysql-bin.000119 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf --start-datetime="2019-07-10 00:00:00" --stop-datetime="2019-07-10 10:00:00" mysql-bin.000117 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf --stop-datetime="2019-07-10 10:00:00" mysql-bin.000120 >> binlog_restore.sql
mysqlbinlog --defaults-file=/root/.my.cnf --stop-datetime="2019-07-10 10:00:00" mysql-bin.000121 >> binlog_restore.sql

mysql --socket=/var/lib/mysql_restore/mysql.sock -e "source /var/lib/mysql/binlog_restore.sql"


Now I applied all the data from those binlogs for the given time ranges. The client double-checked all data and was very happy to have it all back.

Several different options existed for this situation; this one happened to work out best for the client.

Once they validated that all was OK on the restored version, it was simply a matter of stopping both databases, moving the data directories (we wanted to keep the datadir defaults intact), chowning the directories just to be safe, and starting MySQL back up. Now the restored instance was up on port 3306.

