Feed aggregator

One Million Tables in MySQL 8.0

Planet MySQL -

In my previous blog post, I talked about new general tablespaces in MySQL 8.0. Recently MySQL 8.0.3-rc was released, which includes a new data dictionary. My goal is to create one million tables in MySQL and test the performance.

Background questions

Q: Why a million tables in MySQL? Is it even realistic? How does this happen?

Usually, millions of tables in MySQL is a result of “a schema per customer” Software as a Service (SaaS) approach. For the purposes of customer data isolation (security) and logical data partitioning (performance), each “customer” has a dedicated schema. You can think of a WordPress hosting service (or any CMS based hosting) where each customer has their own dedicated schema. With 10K customers per MySQL server, we could end up with millions of tables.
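As a rough back-of-the-envelope check (the 100-tables-per-schema figure is an assumption, roughly matching a typical CMS-sized schema), the arithmetic works out like this:

```python
# Back-of-the-envelope: schema-per-customer table counts.
# tables_per_schema is an assumed, CMS-sized figure.
customers_per_server = 10_000
tables_per_schema = 100
total_tables = customers_per_server * tables_per_schema
print(total_tables)  # 1000000
```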

Q: Should you design an application with >1 million tables?

Having separate tables is one of the easiest designs for a multi-tenant or SaaS application, and it makes it easy to shard and redistribute your workload between servers. In fact, the table-per-customer or schema-per-customer design has the quickest time-to-market, which is why we see it a lot in consulting. In this post, we are not aiming to cover the merits of whether you should do this (if your application has high churn or millions of free users, for example, it might not be a good idea). Instead, we will focus on whether the new data dictionary provides relief for a historical pain point.

Q: Why is one million tables a problem?

The main issue is that MySQL needs to open (and eventually close) the table structure file (.frm file). With one million tables, we are talking about at least one million files. MySQL originally addressed this with table_open_cache and table_definition_cache. However, the maximum value for table_open_cache is 524288, and it is split into 16 partitions by default (to reduce contention), so it is not ideal. MySQL 8.0 has removed .frm files for InnoDB and now allows you to create general tablespaces. I’ve demonstrated how we can create a tablespace per customer in MySQL 8.0, which is ideal for the “schema-per-customer” approach (we can move/migrate one customer’s data to a new server by exporting/importing the tablespace).
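A minimal sketch of the tablespace-per-customer idea in MySQL 8.0 (the schema, tablespace, and table names here are illustrative, not from the test):

```sql
-- One schema and one general tablespace per customer (illustrative names)
CREATE DATABASE customer_42;
CREATE TABLESPACE customer_42 ADD DATAFILE 'customer_42.ibd' ENGINE=InnoDB;

-- All of this customer's tables then live in that tablespace
CREATE TABLE customer_42.orders (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  created_at DATETIME NOT NULL
) ENGINE=InnoDB TABLESPACE customer_42;
```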

One million tables in MySQL 5.7

Recently, I created a test with one million tables. The test creates 10K databases, each containing 100 tables. To use a standard benchmark, I employed the sysbench table structure.

mysql> select count(*) from information_schema.schemata where schema_name like 'test_sbtest%';
+----------+
| count(*) |
+----------+
|    10000 |
+----------+
1 row in set (0.01 sec)

mysql> select count(*) from information_schema.tables where table_schema like 'test_sbtest%';
+----------+
| count(*) |
+----------+
|  1000000 |
+----------+
1 row in set (4.61 sec)

This also creates huge overhead: with one million tables we have ~two million files, and the .frm and .ibd files together add up to 175G:

# du -sh /ssd/mysql_57
175G    /ssd/mysql_57

Now I’ve used a sysbench Lua script to insert one row into one randomly chosen table:

pathtest = "/usr/share/sysbench/tests/include/oltp_legacy/"

if pathtest then
   dofile(pathtest .. "common.lua")
else
   require("common")
end

function event()
   local table_name
   local i
   local c_val
   local k_val
   local pad_val
   local oltp_tables_count = 100
   local oltp_db_count = 10000
   table_name = "test_sbtest_" .. sb_rand_uniform(1, oltp_db_count) .. ".sbtest" .. sb_rand_uniform(1, oltp_tables_count)
   i = 0  -- id is AUTO_INCREMENT; 0 lets MySQL assign it
   k_val = sb_rand(1, oltp_table_size)
   c_val = sb_rand_str([[
###########-###########-###########-###########-###########-###########-###########-###########-###########-###########]])
   pad_val = sb_rand_str([[
###########-###########-###########-###########-###########]])
   rs = db_query("INSERT INTO " .. table_name .. " (id, k, c, pad) VALUES " ..
                 string.format("(%d, %d, '%s', '%s')", i, k_val, c_val, pad_val))
end

With:

local oltp_tables_count = 100
local oltp_db_count = 10000

Sysbench will choose one table randomly out of one million. With oltp_tables_count = 1 and oltp_db_count = 100, it will only choose the first table (sbtest1) out of the first 100 databases (randomly).

As expected, MySQL 5.7 shows a huge performance degradation when spreading access across one million tables. When running a script that inserts into only 100 random tables, we see ~150K transactions per second. When data is inserted into one million randomly chosen tables, performance drops to 2K (!) transactions per second:

Insert into 100 random tables:

SQL statistics:
    queries performed:
        read:    0
        write:   16879188
        other:   0
        total:   16879188
    transactions:   16879188 (140611.72 per sec.)
    queries:        16879188 (140611.72 per sec.)
    ignored errors: 0        (0.00 per sec.)
    reconnects:     0        (0.00 per sec.)

Insert into one million random tables:

SQL statistics:
    queries performed:
        read:    0
        write:   243533
        other:   0
        total:   243533
    transactions:   243533 (2029.21 per sec.)
    queries:        243533 (2029.21 per sec.)
    ignored errors: 0      (0.00 per sec.)
    reconnects:     0      (0.00 per sec.)

This is expected. Here I’m testing the worst-case scenario, where we can’t keep all the open table handlers and table definitions in cache (memory), since table_open_cache and table_definition_cache both have a limit of 524288.

Also, normally we can expect a huge skew in access across tables. There may be only 20% active customers (the 80/20 rule), meaning active access to only ~2K databases. In addition, there will be old or unused tables, so we can expect around 100K active tables or fewer.

Hardware and config files

The above results are from this server:

Processors   | 64x Genuine Intel(R) CPU @ 2.00GHz
Memory Total | 251.8G
Disk         | Samsung 950 Pro PCIe SSD (NVMe)

Sysbench script:

sysbench $conn --report-interval=1 --num-threads=32 --max-requests=0 --max-time=600 --test=/root/drupal_demo/insert_custom.lua run

My.cnf:

innodb_buffer_pool_size = 100G
innodb_io_capacity = 20000
innodb_flush_log_at_trx_commit = 0
innodb_log_file_size = 2G
innodb_flush_method = O_DIRECT_NO_FSYNC
skip-log-bin
open_files_limit = 1000000
table_open_cache = 524288
table_definition_cache = 524288

One million tables in MySQL 8.0 + general tablespaces

In MySQL 8.0 it is easy and logical to create one general tablespace per schema (hosting all tables in that schema). In MySQL 5.7, general tablespaces are also available, but there are still .frm files.

I’ve used the following script to create tables in one schema, all in one tablespace:

mysql test -e "CREATE TABLESPACE t ADD DATAFILE 't.ibd' engine=InnoDB;"
for i in {1..10000}
do
  mysql test -e "create table ab$i(i int) tablespace t"
done

The new MySQL 8.0.3-rc also uses the new data dictionary: all MyISAM tables in the mysql schema are removed, and all metadata is stored in an additional mysql.ibd file.

Creating one million tables

Creating InnoDB tables fast enough can be a task by itself. Stewart Smith published a blog post a while ago where he focused on optimizing time to create 30K tables in MySQL.

The problem is that after creating each .ibd file, MySQL needs to fsync it. However, when creating a table inside an existing tablespace, there is no such fsync. I’ve created a simple script to create tables in parallel, one thread per database:

#!/bin/bash
function do_db {
  mysql -vvv -e "create database $db";
  mysql -vvv $db -e "CREATE TABLESPACE $db ADD DATAFILE '$db.ibd' engine=InnoDB;"
  for i in {1..100}
  do
    table="CREATE TABLE sbtest$i (
      id int(10) unsigned NOT NULL AUTO_INCREMENT,
      k int(10) unsigned NOT NULL DEFAULT '0',
      c varchar(120) NOT NULL DEFAULT '',
      pad varchar(60) NOT NULL DEFAULT '',
      PRIMARY KEY (id),
      KEY k_1 (k)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 tablespace $db;"
    mysql $db -e "$table"
  done
}
c=0
for m in {1..100}
do
  for i in {1..100}
  do
    let c=$c+1
    echo $c
    db="test_sbtest_$c"
    do_db &
  done
  wait
done

That script works perfectly in MySQL 8.0.1-dmr and creates one million tables in 25 minutes and 28 seconds (1528 seconds). That is ~654 tables per second, significantly faster than the ~30 tables per second in Stewart’s original test, and 2x faster than a test where all fsyncs were artificially disabled using the libeatmydata library.
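A quick sanity check on those numbers:

```python
# Verify the table-creation rate quoted above.
tables = 1_000_000
seconds = 25 * 60 + 28          # 25 minutes 28 seconds
print(seconds)                  # 1528
print(round(tables / seconds))  # 654 tables per second
```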

Unfortunately, MySQL 8.0.3-rc introduced a regression: I can see heavy mutex contention, and the table creation time grew from ~25 minutes to ~280 minutes. I’ve filed a bug report: performance regression: “create table” speed and scalability in 8.0.3.

Size on disk

With general tablespaces and no .frm files, the size on disk decreased:

# du -h -d1 /ssd/
147G    /ssd/mysql_801
119G    /ssd/mysql_803
175G    /ssd/mysql_57

Please note, though, that in MySQL 8.0.3-rc with the new native data dictionary, the size on disk increases, as it needs to write additional information (Serialized Dictionary Information, SDI) to the tablespace files:

InnoDB: Serialized Dictionary Information (SDI) is now present in all InnoDB tablespace files except for temporary tablespace and undo tablespace files. SDI is serialized metadata for schema, table, and tablespace objects. The presence of SDI data provides metadata redundancy. ... The inclusion of SDI data in tablespace files increases tablespace file size. An SDI record requires a single index page, which is 16k in size by default. However, SDI data is compressed when it is stored to reduce the storage footprint.

The general mysql data dictionary in MySQL 8.0.3 is 6.6GB:

6.6G /ssd/mysql/mysql.ibd

Benchmarking the insert speed in MySQL 8.0 

I repeated the same test I did for MySQL 5.7 in MySQL 8.0.3-rc (and in 8.0.1-dmr), but using general tablespaces. I created 10K databases (= 10K tablespace files); each database has 100 tables and resides in its own tablespace.

There are two new tablespace level caches we can use in MySQL 8.0: tablespace_definition_cache and schema_definition_cache:

tablespace_definition_cache = 15000
schema_definition_cache = 524288

Unfortunately, with one million random table accesses in MySQL 8.0 (both 8.0.1 and 8.0.3), we can still see that it stalls on opening tables (even with no .frm files and general tablespaces):

mysql> select conn_id, current_statement, state, statement_latency, lock_latency from sys.processlist where current_statement is not null and conn_id <> CONNECTION_ID();
+---------+-------------------------------------------------------------------+----------------+-------------------+--------------+
| conn_id | current_statement                                                 | state          | statement_latency | lock_latency |
+---------+-------------------------------------------------------------------+----------------+-------------------+--------------+
| 199     | INSERT INTO test_sbtest_9749.s ... 8079-53209333270-93105555128') | Opening tables | 4.45 ms           | 0 ps         |
| 198     | INSERT INTO test_sbtest_1863.s ... 9574-29782886623-39251573705') | Opening tables | 9.95 ms           | 5.67 ms      |
| 189     | INSERT INTO test_sbtest_3948.s ... 9365-63502117132-66650064067') | Opening tables | 16.29 ms          | 15.38 ms     |
| 190     | INSERT INTO test_sbtest_6885.s ... 8436-41291265610-60894472357') | Opening tables | 13.78 ms          | 9.52 ms      |
| 191     | INSERT INTO test_sbtest_247.sb ... 7467-89459234028-92064334941') | Opening tables | 8.36 ms           | 3.18 ms      |
| 192     | INSERT INTO test_sbtest_9689.s ... 8058-74586985382-00185651578') | Opening tables | 6.89 ms           | 0 ps         |
| 193     | INSERT INTO test_sbtest_8777.s ... 1900-02582963670-01868315060') | Opening tables | 7.09 ms           | 5.70 ms      |
| 194     | INSERT INTO test_sbtest_9972.s ... 9057-89011320723-95018545652') | Opening tables | 9.44 ms           | 9.35 ms      |
| 195     | INSERT INTO test_sbtest_6977.s ... 7902-29158428721-66447528241') | Opening tables | 7.82 ms           | 789.00 us    |
| 196     | INSERT INTO test_sbtest_129.sb ... 2091-86346366083-87657045906') | Opening tables | 13.01 ms          | 7.30 ms      |
| 197     | INSERT INTO test_sbtest_1418.s ... 6581-90894769279-68213053531') | Opening tables | 16.35 ms          | 10.07 ms     |
| 208     | INSERT INTO test_sbtest_4757.s ... 4592-86183240946-83973365617') | Opening tables | 8.66 ms           | 2.84 ms      |
| 207     | INSERT INTO test_sbtest_2152.s ... 5459-55779113235-07063155183') | Opening tables | 11.08 ms          | 3.89 ms      |
| 212     | INSERT INTO test_sbtest_7623.s ... 0354-58204256630-57234862746') | Opening tables | 8.67 ms           | 2.80 ms      |
| 215     | INSERT INTO test_sbtest_5216.s ... 9161-37142478639-26288001648') | Opening tables | 9.72 ms           | 3.92 ms      |
| 210     | INSERT INTO test_sbtest_8007.s ... 2999-90116450579-85010442132') | Opening tables | 1.33 ms           | 0 ps         |
| 203     | INSERT INTO test_sbtest_7173.s ... 2718-12894934801-25331023143') | Opening tables | 358.09 us         | 0 ps         |
| 209     | INSERT INTO test_sbtest_1118.s ... 8361-98642762543-17027080501') | Opening tables | 3.32 ms           | 0 ps         |
| 219     | INSERT INTO test_sbtest_5039.s ... 1740-21004115002-49204432949') | Opening tables | 8.56 ms           | 8.44 ms      |
| 202     | INSERT INTO test_sbtest_8322.s ... 8686-46403563348-31237202393') | Opening tables | 1.19 ms           | 0 ps         |
| 205     | INSERT INTO test_sbtest_1563.s ... 6753-76124087654-01753008993') | Opening tables | 9.62 ms           | 2.76 ms      |
| 213     | INSERT INTO test_sbtest_5817.s ... 2771-82142650177-00423653942') | Opening tables | 17.21 ms          | 16.47 ms     |
| 216     | INSERT INTO test_sbtest_238.sb ... 5343-25703812276-82353892989') | Opening tables | 7.24 ms           | 7.20 ms      |
| 200     | INSERT INTO test_sbtest_2637.s ... 8022-62207583903-44136028229') | Opening tables | 7.52 ms           | 7.39 ms      |
| 204     | INSERT INTO test_sbtest_9289.s ... 2786-22417080232-11687891881') | Opening tables | 10.75 ms          | 9.01 ms      |
| 201     | INSERT INTO test_sbtest_6573.s ... 0106-91679428362-14852851066') | Opening tables | 8.43 ms           | 7.03 ms      |
| 217     | INSERT INTO test_sbtest_1071.s ... 9465-09453525844-02377557541') | Opening tables | 8.42 ms           | 7.49 ms      |
| 206     | INSERT INTO test_sbtest_9588.s ... 8804-20770286377-79085399594') | Opening tables | 8.02 ms           | 7.50 ms      |
| 211     | INSERT INTO test_sbtest_4657.s ... 4758-53442917995-98424096745') | Opening tables | 16.62 ms          | 9.76 ms      |
| 218     | INSERT INTO test_sbtest_9672.s ... 1537-13189199316-54071282928') | Opening tables | 10.01 ms          | 7.41 ms      |
| 214     | INSERT INTO test_sbtest_1391.s ... 9241-84702335152-38653248940') | Opening tables | 21.34 ms          | 15.54 ms     |
| 220     | INSERT INTO test_sbtest_6542.s ... 7778-65788940102-87075246009') | Opening tables | 2.96 ms           | 0 ps         |
+---------+-------------------------------------------------------------------+----------------+-------------------+--------------+
32 rows in set (0.11 sec)

And the transactions per second drops to ~2K.

Here I’ve expected different behavior. With the .frm files gone and with tablespace_definition_cache set to more than 10K (we have only 10K tablespace files), I’ve expected that MySQL does not have to open and close files. It looks like this is not the case.

I can also see the table-opening statistics (since the server started):

mysql> show global status like '%open%';
+----------------------------+-----------+
| Variable_name              | Value     |
+----------------------------+-----------+
| Com_ha_open                | 0         |
| Com_show_open_tables       | 0         |
| Innodb_num_open_files      | 10040     |
| Open_files                 | 0         |
| Open_streams               | 0         |
| Open_table_definitions     | 524288    |
| Open_tables                | 499794    |
| Opened_files               | 22        |
| Opened_table_definitions   | 1220904   |
| Opened_tables              | 2254648   |
| Slave_open_temp_tables     | 0         |
| Table_open_cache_hits      | 256866421 |
| Table_open_cache_misses    | 2254643   |
| Table_open_cache_overflows | 1254766   |
+----------------------------+-----------+
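From these counters we can compute the table_open_cache hit ratio. Even though the ratio looks high, each of the ~2.25M misses implies a table (re)open:

```python
# Hit ratio from the status counters shown above.
hits = 256_866_421
misses = 2_254_643
ratio = hits / (hits + misses)
print(f"{ratio:.2%}")  # ~99.13%
```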

This is easier to see in the graphs from PMM. Inserts per second for the two runs (both running 16 threads):

  1. The first run is 10K random databases/tablespaces and one table (sysbench is choosing table#1 from a randomly chosen list of 10K databases). This way there is also no contention on the tablespace file.
  2. The second run is a randomly chosen table from a list of one million tables.

As we can see, the first run is doing 50K-100K inserts/second. The second run is limited to only ~2.5K inserts per second:

“Table open cache misses” grows significantly after the start of the second benchmark run:

As we can see, MySQL performs ~1.1K table definition openings per second and has ~2K table cache misses due to the overflow:

When inserting into only 1K random tables (one specific table in a random database, so that we almost guarantee each thread always writes to a different tablespace file), the table_open_cache warmed up quickly. After a couple of seconds, the sysbench test started showing > 100K tps. The processlist looks much better (compare the statement latency and lock latency to the above as well):

mysql> select conn_id, current_statement, state, statement_latency, lock_latency from sys.processlist where current_statement is not null and conn_id <> CONNECTION_ID();
+---------+-------------------------------------------------------------------+----------------+-------------------+--------------+
| conn_id | current_statement                                                 | state          | statement_latency | lock_latency |
+---------+-------------------------------------------------------------------+----------------+-------------------+--------------+
| 253     | INSERT INTO test_sbtest_3293.s ... 2282-95400708146-84684851551') | starting       | 22.72 us          | 0 ps         |
| 254     | INSERT INTO test_sbtest_3802.s ... 4030-35983148190-23616685226') | update         | 62.88 us          | 45.00 us     |
| 255     | INSERT INTO test_sbtest_5290.s ... 2361-58374942527-86207214617') | Opening tables | 36.07 us          | 0 ps         |
| 256     | INSERT INTO test_sbtest_5684.s ... 4717-34992549120-04746631452') | Opening tables | 37.61 us          | 37.00 us     |
| 257     | INSERT INTO test_sbtest_5088.s ... 5637-75275906887-76112520982') | starting       | 22.97 us          | 0 ps         |
| 258     | INSERT INTO test_sbtest_1375.s ... 8592-24036624620-65536442287') | query end      | 98.66 us          | 35.00 us     |
| 259     | INSERT INTO test_sbtest_8764.s ... 8566-02569157908-49891861265') | Opening tables | 47.13 us          | 37.00 us     |
| 260     | INSERT INTO test_sbtest_560.sb ... 2605-08226572929-25889530906') | query end      | 155.64 us         | 38.00 us     |
| 261     | INSERT INTO test_sbtest_7776.s ... 0243-86335905542-37976752368') | System lock    | 46.68 us          | 32.00 us     |
| 262     | INSERT INTO test_sbtest_6551.s ... 5496-19983185638-75401382079') | update         | 74.07 us          | 40.00 us     |
| 263     | INSERT INTO test_sbtest_7765.s ... 5428-29707353898-77023627427') | update         | 71.35 us          | 45.00 us     |
| 265     | INSERT INTO test_sbtest_5771.s ... 7065-03531013976-67381721569') | query end      | 138.42 us         | 39.00 us     |
| 266     | INSERT INTO test_sbtest_8603.s ... 7158-66470411444-47085285977') | update         | 64.00 us          | 36.00 us     |
| 267     | INSERT INTO test_sbtest_3983.s ... 5039-55965227945-22430910215') | update         | 21.04 ms          | 39.00 us     |
| 268     | INSERT INTO test_sbtest_8186.s ... 5418-65389322831-81706268892') | query end      | 113.58 us         | 37.00 us     |
| 269     | INSERT INTO test_sbtest_1373.s ... 1399-08304962595-55155170406') | update         | 131.97 us         | 59.00 us     |
| 270     | INSERT INTO test_sbtest_7624.s ... 0589-64243675321-62971916496') | query end      | 120.47 us         | 38.00 us     |
| 271     | INSERT INTO test_sbtest_8201.s ... 6888-31692084119-80855845726') | query end      | 109.97 us         | 37.00 us     |
| 272     | INSERT INTO test_sbtest_7054.s ... 3674-32329064814-59707699237') | update         | 67.99 us          | 35.00 us     |
| 273     | INSERT INTO test_sbtest_3019.s ... 1740-35410584680-96109859552') | update         | 5.21 ms           | 33.00 us     |
| 275     | INSERT INTO test_sbtest_7657.s ... 4985-72017519764-59842283878') | update         | 88.91 us          | 48.00 us     |
| 274     | INSERT INTO test_sbtest_8606.s ... 0580-38496560423-65038119567') | freeing items  | NULL              | 37.00 us     |
| 276     | INSERT INTO test_sbtest_9349.s ... 0295-94997123247-88008705118') | starting       | 25.74 us          | 0 ps         |
| 277     | INSERT INTO test_sbtest_3552.s ... 2080-59650597118-53885660147') | starting       | 32.23 us          | 0 ps         |
| 278     | INSERT INTO test_sbtest_3832.s ... 1580-27778606266-19414961452') | freeing items  | 194.14 us         | 51.00 us     |
| 279     | INSERT INTO test_sbtest_7685.s ... 0234-22016898044-97277319766') | update         | 62.66 us          | 40.00 us     |
| 280     | INSERT INTO test_sbtest_6026.s ... 2629-36599580811-97852201188') | Opening tables | 49.41 us          | 37.00 us     |
| 281     | INSERT INTO test_sbtest_8273.s ... 7957-39977507737-37560332932') | update         | 92.56 us          | 36.00 us     |
| 283     | INSERT INTO test_sbtest_8584.s ... 7604-24831943860-69537745471') | starting       | 31.20 us          | 0 ps         |
| 284     | INSERT INTO test_sbtest_3787.s ... 1644-40368085836-11529677841') | update         | 100.41 us         | 40.00 us     |
+---------+-------------------------------------------------------------------+----------------+-------------------+--------------+
30 rows in set (0.10 sec)

What about 100K random tables? That should fit into the table_open_cache. At the same time, the default 16 table_open_cache_instances split the 524288-entry table_open_cache, so each bucket holds only ~32K entries. To fix that, I set table_open_cache_instances = 4 and was able to get ~50K tps on average. However, contention inside the table_open_cache still seems to stall the queries:
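The per-instance bucket arithmetic:

```python
# table_open_cache is split evenly across table_open_cache_instances.
table_open_cache = 524_288
for instances in (16, 4):
    print(instances, table_open_cache // instances)
# 16 instances -> 32768 entries per bucket; 4 -> 131072
```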

There is only a very limited number of table openings:


Conclusion

MySQL 8.0 general tablespaces look very promising. It is finally possible to create one million tables in MySQL without the need to create two million files. Actually, MySQL 8.0 can handle many tables very well, as long as table cache misses are kept to a minimum.

At the same time, the “Opening tables” problem (in the worst-case scenario test) still persists in MySQL 8.0.3-rc and limits throughput. I expected that MySQL would not have to open and close table files at all. I also hope the CREATE TABLE regression bug is fixed in the next MySQL 8.0 release.

I’ve not tested other new features of the data dictionary in 8.0.3-rc: e.g., atomic DDL (InnoDB now supports atomic DDL, which ensures that DDL operations are either committed in their entirety or rolled back in case of an unplanned server stoppage). That is the topic of the next blog post.

MySQL Enterprise Monitor 4.0.0 GA has been released


We are very happy to announce the general availability of MySQL Enterprise Monitor, Version 4.0. MySQL Enterprise Monitor is the best-in-class tool for monitoring and management of your MySQL assets and is included with your MySQL Enterprise Edition and MySQL Enterprise Carrier Grade subscriptions.

You can find more information on the contents of this release in the change log.

Highlights of MySQL Enterprise Monitor 4.0 include:

  • Modern look and feel: a redesigned user interface delivers a vastly improved overall user experience. The visible changes--the layout, the icons, and the overall aesthetics--provide a more natural and intuitive experience. Views dynamically change and adjust to your current context and the assets you've selected, from individual MySQL instances or hosts to your custom Groups, to your complex replication and clustered topologies. Additional enhancements include a much more responsive UI and a greater capacity to scale, allowing you to more effectively manage thousands of MySQL related assets.
  • MySQL Cluster monitoring: we now auto-discover your MySQL Cluster installations and give you visibility into the performance, availability, and health of each MySQL instance and NDB process, as well as the health of the MySQL Cluster instance as a single logical system. The Overview dashboard displays detailed instrumentation available for each MySQL Cluster and the Topology dashboard displays the current configuration of your MySQL Cluster enabling you to quickly see the status of the MySQL Cluster instance as a whole and each individual process. The Topology dashboard allows you to easily see how your MySQL Cluster installations are currently functioning.
  • A User Statistics report: which provides an easy way to monitor MySQL resource utilization broken down by user.

You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet.

You will also find the binaries on the Oracle Software Delivery Cloud soon.  Type "MySQL Enterprise Monitor 4.0.0" in the search box, or enter a license name to find Enterprise Monitor along with other MySQL products: "MySQL Enterprise Edition" or "MySQL Cluster Carrier Edition".  Then select your platform.

Please open a bug or a ticket on My Oracle Support to report problems, request features, or give us general feedback about how this release meets your needs.

If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact.

Thanks and Happy Monitoring!

- The MySQL Enterprise Tools Development Team


This Week in Data with Colin Charles 8: Percona Live Europe 2017 Is a Wrap!


Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Percona Live Europe 2017 Dublin

We’ve spent a lot of time in the last few months organizing Percona Live Europe Dublin. I want to thank all the speakers, sponsors and attendees for helping us pull off yet another great event. While we’ll provide some perspectives, thoughts and feedback soon, all the early mornings, jam-packed meetings and 4 am bedtimes mean I’ll probably talk about this event in my next column!

In the meantime, save the date for Percona Live Santa Clara, April 23-25 2018. The call for papers will open in October 2017.

Upcoming appearances

Percona’s website keeps track of community events, so check out where to listen to a Perconian speak. My upcoming appearances are:

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Percona Monitoring and Management 1.3.1 Is Now Available


Percona announces the release of Percona Monitoring and Management 1.3.1. This release only contains bug fixes related to usability.

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

Bug fixes
  • PMM-1271: In QAN, when the user selected a database host with no queries, the query monitor could still show metrics.
  • PMM-1512: Clicking QAN in Grafana would open the QAN home page. Now, QAN opens and automatically selects the database host and time range active in Grafana.
  • PMM-1523: User-defined Prometheus memory settings were not honored, potentially causing performance issues in high load environments.

Other bug fixes in this release: PMM-1452, PMM-1515.

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

We’re always happy to help! Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, please use the PMM project in JIRA.

How to set up and use MariaDB Connector C


I just wanted to provide a quick tutorial on using MariaDB’s Connector C.

I downloaded the latest Connector C (3.0.2), running MariaDB 10.1.28, and was able to get it to work fine with a simple C program using the following steps:

1. Downloaded the Connector C .msi file (32-bit, since my VS is 32-bit), extracted, and installed, which placed it at:

C:\Program Files (x86)\MariaDB\MariaDB Connector C\lib

2. You need to add the directory that contains libmariadb.dll to your LIB environment variable. In my case, it was:

Control Panel -> System -> Advanced System Settings -> Environment Variables -> Choose "LIB" from under "System variables" -> then add the Connector C lib path, like: C:\Program Files (x86)\MariaDB\MariaDB Connector C\lib

3. Then just run the following command, where my c/c++ program name is “mysql1.c”:

cl /I "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\include" /I "C:\Program Files (x86)\mariadb-10.1.25\include\mysql" /I "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Include" /MD "C:\Program Files (x86)\MariaDB\MariaDB Connector C\lib\libmariadb.lib" mysql1.c

Note that the first path includes the normal C headers like stdio.h, the second is for mysql.h, the third is for windows.h, and the last is the Connector C .lib file.

Here is the actual session output:

C:\chris\mysql1> cl /I "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\include" /I "C:\Program Files (x86)\mariadb-10.1.25\include\mysql" /I "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Include" /MD "C:\Program Files (x86)\MariaDB\MariaDB Connector C\lib\libmariadb.lib" mysql1.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 16.00.40219.01 for 80x86
Copyright (C) Microsoft Corporation. All rights reserved.

mysql1.c
Microsoft (R) Incremental Linker Version 10.00.40219.01
Copyright (C) Microsoft Corporation. All rights reserved.

/out:mysql1.exe
C:\chris\mysql1\libmariadb.lib
mysql1.obj

4. If all is successful, as the above, you can invoke your newly created c/c++ program, mysql1.exe, in this case (mine just obtains the version number from the instance):

C:\chris\mysql1>mysql1

MySQL Version = 100128

For reference, here is my mysql1.c code:

#include <windows.h>
#include <stdio.h>
#include <mysql.h>

MYSQL *conn;
int version = 1;

int main(int argc, char *argv[])
{
    conn = mysql_init(NULL);
    mysql_real_connect(conn, "localhost", "root", "xxxxx", "test", 3316, NULL, 0);
    version = mysql_get_server_version(conn);
    printf("\nMySQL Version = %d\n", version);
    mysql_close(conn);
    return 0;
}

Previous related posts, if interested:

  1. Creating a basic C/C++ Program to Interact with MySQL and MariaDB
  2. Common Errors and Resolutions for Building your own MySQL or MariaDB C/C++ Program on Windows
  3. Setting Up Connector/C and SkySQL C Connector for MySQL and MariaDB

I hope this helps.

Watch the Replay: MySQL on Docker - Understanding the Basics


Thanks to everyone who joined us this week as we broadcast our MySQL on Docker webinar live from the Percona Live Conference in Dublin!

Our colleague Ashraf Sharif discussed how Docker containers work, through to running a simple MySQL container as well as the ClusterControl Docker image (amongst other things).

If you missed the session or would like to watch it again, it’s now online for viewing.

Watch replay

Related resources:
  • MySQL on Docker - Blog Series
  • Severalnines on Docker Hub
  • ClusterControl Docker Image

Here’s the full agenda of the topics that were covered during the webinar. The content is aimed at MySQL users who are Docker beginners and who would like to understand the basics of running a MySQL container on Docker.

  • Docker and its components
  • Concept and terminology
  • How a Docker container works
  • Advantages and disadvantages
  • Stateless vs stateful
  • Docker images for MySQL
  • Running a simple MySQL container
  • The ClusterControl Docker image
  • Severalnines on Docker Hub


And if you’re not following our Docker blog series yet, we encourage you to do so: MySQL on Docker.


Handling Encrypted InnoDB Backups


In this post, we are going to see how we can back up encrypted tables using Xtrabackup. InnoDB supports data encryption for InnoDB tables stored in file-per-table tablespaces. For an application to access an encrypted tablespace, InnoDB uses the master encryption key to decrypt the tablespace key. The master encryption key is stored in a keyring file in the location specified by the keyring_file_data configuration option. We have already discussed enabling encrypted tablespaces. Here, we will try full and incremental backups of an encrypted tablespace.

Percona XtraBackup supports encrypted InnoDB tablespace backups. While taking the backup, we have to add --keyring-file-data and --server-id. After the backup completes, we have to use the same options to prepare it. Below is an example with an encrypted table.

A backup can be taken without using --keyring-file-data and --server-id, and the same goes for the apply-log step. But after restoring such a backup, we can't access the encrypted table: MySQL throws an error whenever the table is accessed.

FULL BACKUP

So a proper encrypted backup should be taken by specifying the path to the keyring file and the server ID.

innobackupex --user=root --password=pass --safe-slave-backup --keyring-file-data=/var/lib/mysql_keyring/keyring --server-id=176457 /path/to/backup

Once the backup is completed, we have to prepare it by passing the same options. The keyring file data must be specified, but not the server ID, as it can be obtained from backup-my.cnf. XtraBackup will not copy the keyring file into the backup directory; in order to prepare the backup, you must make a copy of the keyring file yourself.

innobackupex --apply-log --use-memory=20G --keyring-file-data=/var/lib/mysql_keyring/keyring /path/to/backup

After applying the logs finishes, you should see the message below:

InnoDB: Shutdown completed; log sequence number 21647652
170528 19:23:15 completed OK!

Once the backup is prepared, it can be restored using either --move-back or --copy-back. If the keyring file is rotated during the backup, we have to use the same keyring file that was used to take and prepare the backup. After a successful restoration, we can see the data.

If the master key file is deleted or moved, the encrypted table cannot be accessed, so it must be stored safely.
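To make the full-backup cycle easier to follow end to end, here is a dry-run sketch of the steps above (backup, copy the keyring, prepare, restore). It only echoes each command; swap echo for real execution when ready. The paths, credentials, and server ID mirror the examples in this post and are placeholders for your own values.

```shell
#!/bin/sh
# Dry-run sketch of an encrypted full-backup cycle with innobackupex.
# All paths, credentials, and the server ID below are placeholders.
KEYRING=/var/lib/mysql_keyring/keyring
BACKUP_DIR=/path/to/backup
SERVER_ID=176457

run() { echo "$@"; }   # replace 'echo' with real execution when ready

# 1. Take the backup, passing the keyring path and server ID.
run innobackupex --user=root --password=pass --safe-slave-backup \
    --keyring-file-data="$KEYRING" --server-id="$SERVER_ID" "$BACKUP_DIR"

# 2. XtraBackup does NOT copy the keyring: keep your own copy with the backup.
run cp "$KEYRING" "$BACKUP_DIR/keyring.copy"

# 3. Prepare (apply logs); server-id is read from backup-my.cnf,
#    so only the keyring path is needed here.
run innobackupex --apply-log --use-memory=20G \
    --keyring-file-data="$KEYRING" "$BACKUP_DIR"

# 4. Restore with --copy-back (or --move-back) once MySQL is stopped.
run innobackupex --copy-back "$BACKUP_DIR"
```

Keeping the commands behind a `run` wrapper lets you review the exact flags before touching a production server.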

INCREMENTAL BACKUP

To perform an incremental backup, we first need a full backup. The xtrabackup binary writes a file called xtrabackup_checkpoints in the backup directory.

After the full backup, the incremental backup will be based on it. Here too, xtrabackup will not copy the keyring file into the backup directory; in order to prepare the backup, you must make a copy of the keyring file yourself. If the keyring hasn't been rotated, you can use the same one you backed up with the base backup. If the keyring has been rotated, you'll need to back it up as well, otherwise you won't be able to prepare the backup.

innobackupex --user=root --password=pass --incremental --incremental-basedir=/path/to/base/backup/ --keyring-file-data=/var/lib/mysql_keyring/keyring --server-id=176457 /path/to/inc/backup

Now the incremental backup directory will have the delta files, which represent the changes.

The apply-log phase for an incremental backup doesn't work the way it does for a full backup. Here, you must use --apply-log-only to prevent the rollback phase. Then we have to prepare the base and incremental backups.

innobackupex --user=root --password=pass --apply-log-only --use-memory=10G --keyring-file-data=/var/lib/mysql_keyring/keyring /path/to/base/backup/

After preparing the base backup, the incremental backup should be applied to it.

innobackupex --user=root --password=pass --apply-log-only --incremental-dir=/path/to/inc/backup --keyring-file-data=/var/lib/mysql_keyring/keyring /path/to/base/backup/

Once this succeeds, the backup can be restored. If there is any issue accessing the encrypted tables, the keyring file has probably been rotated, removed, or moved. So it is necessary to check the encrypted tables after the restoration.
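The incremental sequence above can likewise be sketched as a dry run that only echoes each command. As before, the paths, credentials, and server ID are placeholders taken from the examples in this post.

```shell
#!/bin/sh
# Dry-run sketch of an encrypted incremental backup cycle.
# All paths, credentials, and the server ID below are placeholders.
KEYRING=/var/lib/mysql_keyring/keyring
BASE=/path/to/base/backup
INC=/path/to/inc/backup
SERVER_ID=176457

run() { echo "$@"; }   # replace 'echo' with real execution when ready

# 1. Take an incremental backup against the existing full (base) backup.
run innobackupex --user=root --password=pass --incremental \
    --incremental-basedir="$BASE" \
    --keyring-file-data="$KEYRING" --server-id="$SERVER_ID" "$INC"

# 2. Prepare the base with --apply-log-only: this prevents the rollback
#    phase, which would otherwise make the incrementals unusable.
run innobackupex --user=root --password=pass --apply-log-only --use-memory=10G \
    --keyring-file-data="$KEYRING" "$BASE"

# 3. Apply the incremental delta files onto the prepared base.
run innobackupex --user=root --password=pass --apply-log-only \
    --incremental-dir="$INC" --keyring-file-data="$KEYRING" "$BASE"
```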


Lesson 05: Basic SQL (in MySQL)

Planet MySQL -

Notes/errata/updates for Chapter 5:
See the official book errata at http://tahaghoghi.com/LearningMySQL/errata.php – Chapter 5 is pages 135-177.

Notes:
If you did not load the sample databases in Chapter 3 as per page 102, you can download the file to import from http://tahaghoghi.com/LearningMySQL/Downloads/Files/Data/SQL_files_with_foreign_key_references/music.sql

On page 162, it says “In Chapter 6, we’ll discuss how to load data from existing tables or from external data sources.” but this content is in Chapter 8.

On page 169, it has an example of “DELETE FROM played;” which deletes all the rows of a table. This will not work if safe_updates is enabled. I always advise that if you truly want to do that, ALWAYS use a WHERE clause, for example: DELETE FROM played WHERE 1=1;. In this way, your query is self-documenting – it tells the reader that you intended to delete all rows. If you just have “DELETE FROM played” it is possible you meant to put a WHERE clause in but forgot.

Similarly, on page 171, it shows an example where an UPDATE with no WHERE clause is used to update all rows. I prefer the syntax UPDATE tbl SET column=value WHERE 1=1 – this broadcasts your intent.

Topics covered:
How to get table metadata with SHOW
Basic SELECT statements
– Choosing columns
– Filtering with WHERE clauses
– ORDER BY
– LIMIT
Joining tables with INNER JOIN
INSERT and INSERT IGNORE
DELETE
UPDATE
TRUNCATE

Reference/Quick Links for MySQL Marinate

Percona Live Europe Session Interview: MySQL on Docker – Containerizing the Dolphin

Planet MySQL -

One of the widely discussed technologies at Percona Live Europe was the logistics of running MySQL in containers. Containers – particularly Docker – have become a hot topic, so there was a good-sized crowd on day one of the conference for Ashraf Sharif, Senior Support Engineer with Severalnines. He presented his talk “MySQL on Docker: Containerizing the Dolphin”. 

During his presentation, Ashraf shared some recommendations for best practices when setting out on containerizing MySQL. He sees the trend of moving to containers as a progression from the use of virtual hosts.

After his talk on day one of the Percona Live Europe conference, I caught up with Ashraf and asked about his presentation. I was interested in which concepts are most important for ensuring a smoother implementation.

If you enjoy this brief presentation and would like to find out more, then you might like to subscribe to Ashraf’s blog on the Severalnines website where he regularly posts insights on his special interests of system scalability and high availability.

MySQL Connector/Node.js 8.0.8-dmr has been released

Planet MySQL -

MySQL Connector/Node.js is a new Node.js driver for use with the X
DevAPI. This release, v8.0.8, is the second development release of the
MySQL Connector/Node.js 8.0 series.

The X DevAPI enables application developers to write code that combines
the strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing
traditional SQL.

MySQL Connector/Node.js can be downloaded through npm (see
https://www.npmjs.com/package/@mysql/xdevapi for details) or from
https://dev.mysql.com/downloads/connector/nodejs/.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Node.js, and
its usage, see http://dev.mysql.com/doc/dev/connector-nodejs/.

Note

Please note that the X DevAPI requires MySQL Server version 5.7.12
or higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/5.7/en/document-store.html.

Functionality Added or Changed

  • Added support for the parentheses-based IN syntax, such as IN (x, y, z, …), as defined in the X DevAPI. (Bug #26666817)

Bugs Fixed

On behalf of the MySQL/Oracle Release Engineering Team
Piotr Obrzut

MySQL Connector/Java 8.0.8-dmr has been released

Planet MySQL -

Dear MySQL users,

MySQL Connector/J 8.0.8 Development Release is a development milestone
release for the 8.0.x series.

This release includes the following new features and changes, also
described in more detail on

https://dev.mysql.com/doc/relnotes/connector-j/8.0/en/news-8-0-8.html

MySQL Connectors and other MySQL client tools and applications now
synchronize the first digit of their version number with the (highest)
MySQL server version they support.
This change makes it easy and intuitive to decide which client version
to use for which server version.

As always, we recommend that you check the “CHANGES” file in the download
archive to be aware of changes in behavior that might affect your application.

To download MySQL Connector/J 8.0.8 dmr, see the “Development
Releases” tab at http://dev.mysql.com/downloads/connector/j/

Enjoy!

Changes in MySQL Connector/J 8.0.8 (2017-09-28, Development Milestone)

Version 8.0.8 Development Milestone is the latest development release of the 8.0 branch of MySQL Connector/J, providing an insight into upcoming features. It is suitable for use with MySQL Server versions 5.5, 5.6, 5.7, and 8.0. It supports the Java Database Connectivity (JDBC) 4.2 API.

Functionality Added or Changed

* Packaging: RPM and Debian packages for installing Connector/J are now available from the Connector/J Download page (http://dev.mysql.com/downloads/connector/j/).

* X DevAPI: Connector/J has implemented a new interface of the X DevAPI that allows the retrieving, adding, removing, and updating of persistent session continuation data. The implementation includes the following:
  + A SessionConfig object that holds the information for a session configuration data set.
  + A PersistenceHandler interface that allows custom implementations of persistence handlers.
  + A PasswordHandler interface that allows custom implementations of password handling code.
  + A SessionConfigManager class for editing and fetching SessionConfig objects, and defining instances of the PersistenceHandler and PasswordHandler.
  See MySQL Connector/J X DevAPI Reference (http://dev.mysql.com/doc/dev/connector-j) for more details.

* X DevAPI: A new connection property, xdevapi.auth, has been added for specifying the authentication mechanism for connections using the X Protocol. Allowed values are MYSQL41, PLAIN, and EXTERNAL. See the entry for the new property in Configuration Properties (http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-configuration-properties.html) for details.

* X DevAPI: To support row locks (http://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html) for the find() method of the X DevAPI, the FindStatement and the SelectStatement interfaces have been extended with the following methods:
  + lockExclusive(), which works like SELECT ... FOR UPDATE for relational tables.
  + lockShared(), which works like SELECT ... LOCK IN SHARED MODE (for MySQL 5.7) or SELECT ... FOR SHARE (for MySQL 8.0) for relational tables.
  See MySQL Connector/J X DevAPI Reference (http://dev.mysql.com/doc/dev/connector-j) for more details.

* X DevAPI: Connector/J now supports the expanded syntax for the IN and NOT IN operators, which can check if a sub-expression is contained inside another one; for example:

  // For documents
  coll.find("$.b IN [100,101,102]").execute();
  coll.find("'some text with 5432' in $.a").execute();
  coll.find("1 in [1, 2, 4]").execute();
  coll.find("{'a': 3} not in {'a': 1, 'b': 2}").execute();
  // For relational tables
  tbl.select().where("3 not in [1, 2, 4]").execute();
  tbl.select().where("'qqq' not in $.a").execute();
  tbl.select().where("{'a': 1} in {'a': 1, 'b': 2}").execute();

* X DevAPI: A number of changes have been implemented for the "drop" methods for the X DevAPI:
  + Removed dropCollection(schemaName, collectionName) and dropTable(schemaName, tableName) from Session.
  + Added dropCollection(collectionName) and dropTable(tableName) to Schema.
  + Schema.dropView() now executes immediately and returns void; also, the ViewDrop interface has been removed.
  + Collection.dropIndex() now executes immediately and returns void; also the DropCollectionIndexStatement interface has been removed.
  + The "drop" methods now succeed even if the objects to be dropped do not exist.

* Conversion from the MySQL TIME data type to java.sql.Date is now supported. In the past, a getDate() retrieving data from a TIME column would throw an SQLException. Now, such a retrieval returns a java.sql.Date object containing the time value expressed in number of milliseconds from the Java epoch; also returned is the warning: "Date part does not exist in SQL TIME field, thus it is set to January 1, 1970 GMT while converting to java.sql.Date." (Bug #26750807)

* A new connection property, enabledTLSProtocols, can now be used to override the default restrictions on the TLS versions to be used for connections, which are determined by the version of the MySQL Server that is being connected to. By providing a comma-separated list of values to this option (for example, "TLSv1,TLSv1.1,TLSv1.2") users can, for example, prevent connections from using older TLS versions, or allow connections to use TLS versions only supported by a user-compiled MySQL Server. See the entry for the new property in Configuration Properties (http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-configuration-properties.html) for details. Thanks to Todd Farmer for contributing the code. (Bug #26646676)

* Updated the time zone mappings using the latest IANA and CLDR time zone databases. (Bug #25946965)

* A new option for the loadBalancingStrategy connection property called serverAffinity has been added. The servers listed in the new connection property serverAffinityOrder (which should be a subset of the servers in the host list of the connection URL) are contacted in the order they are listed until a server is available or until the list of servers is exhausted, at which point a random load-balancing strategy is used with the hosts not listed by serverAffinityOrder. See descriptions for loadBalancingStrategy and serverAffinityOrder in Configuration Properties (http://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-configuration-properties.html) for details. (Bug #20182108)

Bugs Fixed

* Important Change: Following the changes in MySQL Server 8.0.3, the system variables tx_isolation and tx_read_only have been replaced with transaction_isolation and transaction_read_only in the code of Connector/J. Users should update Connector/J to this latest release in order to connect to MySQL 8.0.3. They should also make the same adjustments to their own applications if they use the old variables in their code. (Bug #26440544)

* X DevAPI: Calling schema.dropView() with a null argument resulted in a NullPointerException. (Bug #26750807)

* X DevAPI: When dropCollection() was applied on a null collection, a NullPointerException occurred. (Bug #26393132)

* When using cached server-side prepared statements, a memory leak occurred as references to opened statements were being kept while the statements were being decached; it happened when either the close() method had been called twice on a statement, or when there were conflicting cache entries for a statement and the older entry had not been closed and removed from the opened statement list. This fix makes sure the statements are properly closed in both cases. Thanks to Eduard Gurskiy for contributing to the fix. (Bug #26633984, Bug #87429)

* The regression test for Bug#63800 failed because the default value of the system variable explicit_defaults_for_timestamp of MySQL Server has been changed since release 8.0.2. The test has been adjusted to take the change into consideration. (Bug #26501245)

* Running callable statements against MySQL Server 8.0 resulted in the SQLException: ResultSet is from UPDATE. No Data. (Bug #26259384)

* Secure JDBC connections did not fall back to the default truststore when a custom one was not provided. (Bug #26243128)

* In com/mysql/jdbc/ServerPreparedStatement.java, the arguments resultSetType and resultSetConcurrency for a call of Connection.preparedStatement() were swapped. (Bug #25874048, Bug #85885)

* Some JDBC proxied objects were missing the proper handling of the equals() method, thus even comparison of one of these proxied objects to its own self with equals() yielded false. This patch introduces proper handling of the equals() method in all the relevant proxies. (Bug #21931572, Bug #78313)

* A server-side prepared statement was not closed when the same statement was being prepared again while the original statement was being cached. This was caused by the silent replacement of the cache entry of the old statement by the new one. When this happened repeatedly, it eventually caused the complaint that max_prepared_stmt_count was exceeded. This fix makes sure that when a cache entry for a statement replaces an older one, the older statement is immediately closed. (Bug #20066806, Bug #74932)

On behalf of Oracle MySQL Release Team
Balasubramanian Kandasamy

MySQL Connector/Net 8.0.9-dmr has been released

Planet MySQL -

Dear MySQL users,

MySQL Connector/Net 8.0.9 is the sixth development release that expands
cross-platform support to Linux and macOS when using Microsoft’s .NET Core
framework. Now, .NET developers can use the X DevAPI with .NET Core and
Entity Framework Core (EF Core) 1.0 to create server applications that run
on Windows, Linux and macOS. We are very excited about this change and
really look forward to your feedback on it!

MySQL Connector/Net 8.0.9 is also the eighth development release of MySQL
Connector/Net to add support for the new X DevAPI.  The X DevAPI enables
application developers to write code that combines the strengths of the
relational and document models using a modern, NoSQL-like syntax that
does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/index.html. For more
information about how the X DevAPI is implemented in Connector/Net, see
http://dev.mysql.com/doc/dev/connector-net.

Please note that the X DevAPI requires MySQL Server version 5.7.12
or higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/5.7/en/document-store.html.

To download MySQL Connector/Net 8.0.9-dmr, see the “Development
Releases” tab at http://dev.mysql.com/downloads/connector/net/

Changes in MySQL Connector/Net 8.0.9 (2017-09-28, Development Milestone)

Functionality Added or Changed

* For accuracy, the following Entity Framework 6 items were renamed:
  + NuGet package - MySql.Data.EntityFramework (was MySql.Data.Entity)
  + Namespace - MySql.Data.EntityFramework (was MySql.Data.Entity)
  + Assembly - MySql.Data.EntityFramework.dll (was MySql.Data.Entity.EF6.dll)
  (Bug #26396260)

* X DevAPI: The SessionConfigManager.Update method was removed and the SessionConfigManager.Save method now always overwrites the data with the given key. For example:

  SessionConfigManager.Save(
    "mysess",
    "{ \"uri\": \"mysqlx://myuser@localhost/mysess\", \"appdata\": { \"biz\": \"quux\" } }"
  );
  SessionConfigManager.Save(
    "mysess",
    "{ \"uri\": \"mysqlx://test@localhost/mysess\", \"appdata\": { \"tar\": \"zzzz\" } }"
  );

  The mysess.uri and mysess.appdata values set by the first statement are replaced with the new values set by the second statement. (Bug #25829054, Bug #25860579)

* MySQL Connector/Net now supports MySQL servers configured to use utf8mb4 as the default character set.

* The following methods are available for use with EF Core in asynchronous command and connection operations:
  + Microsoft.EntityFrameworkCore.DbContext.AddAsync
  + Microsoft.EntityFrameworkCore.DbContext.AddRangeAsync
  + Microsoft.EntityFrameworkCore.DbContext.FindAsync
  + Microsoft.EntityFrameworkCore.DbContext.SaveChangesAsync
  + Microsoft.EntityFrameworkCore.Infrastructure.DatabaseFacade.EnsureDeletedAsync
  + Microsoft.EntityFrameworkCore.Infrastructure.DatabaseFacade.EnsureCreatedAsync
  + Microsoft.EntityFrameworkCore.DbContext.ToListAsync

* X DevAPI: The following methods execute directly, whereas each method previously required .execute() as the final item in the method chain:
  + BaseSession.DropSchema
  + Collection.DropIndex
  + Schema.DropCollection
  + Schema.DropView
  In addition, the methods now succeed even if the objects to be dropped do not exist.

* The AutoEnlist and IncludeSecurityAsserts connection-string options are not appropriate for use by applications that target .NET Core and now return an error when used.

* EF Core: Support for explicit loading was added. Explicit loading is an object-relational mapper (O/RM) pattern introduced in EF Core 1.1.0, which enables .NET developers to explicitly load related data from the database at a later time.

* The following connection-string options are not currently supported for use by applications that target .NET Core and now return an error when used:
  + SharedMemoryName
  + IntegratedSecurity
  + PipeName
  + Logging
  + UseUsageAdvisor
  + UsePerformanceMonitor
  + InteractiveSession
  + Replication

* X DevAPI: To provide safe transactional document and row updates, the following new methods were added:
  + FindStatement.LockShared
  + FindStatement.LockExclusive
  + TableSelectStatement.LockShared
  + TableSelectStatement.LockExclusive
  The LockShared() and LockExclusive() methods can be called any number of times with either the Collection.Find() or Table.Select() method chains and in any combination. If multiple calls to the methods are made, only the final method is invoked. For additional information about the two types of locking, see Shared and Exclusive Locks (http://dev.mysql.com/doc/refman/5.7/en/innodb-locking.html#innodb-shared-exclusive-locks).

* X DevAPI: When creating a new session, multiple hosts can be tried until a successful connection is established. A list of hosts can be given in a connection string or as session creation options, with or without priorities.

  var mySession = MySQLX.GetSession(
    "mysqlx://dbuser:password@[" +
    "(address=localhost:33060, priority=90)," +
    "(address=192.1.10.10:33060, priority=100)," +
    "(address=[2001:db8:85a3:8d3:1319:8a2e:370:7348]:33060, priority=30) " +
    "]"
  );
  var mySession = MySQLX.GetSession(
    "user=dbuser;" +
    "password=dbpassword;" +
    "server=" +
    "(address=192.1.10.10, priority=90)," +
    "(address=server.example.com, priority=100)," +
    "(address=localhost, priority=30);" +
    "port=33060;"
  );

* X DevAPI: The IN and NOT IN operators have been updated to expand the range of operands that can be evaluated by the Find(), Modify(), and Remove() methods for collections and the Select(), Update(), and Delete() methods for tables. This update provides support for expressions using the following syntax: compExpr ["NOT"] "IN" compExpr. The previous syntax used with the IN and NOT IN operators is still valid and takes precedence over the new syntax when both are present.

* X DevAPI: Several new direct-execution methods were added to the Collection class that operate at a single-document level, unlike the other CRUD methods that operate on all documents that match a filter. The new methods are: ReplaceOne(), AddOrReplaceOne(), GetOne(), and RemoveOne().

* Support for connections using Unix domain socket files was extended to include MySQL servers deployed on Linux hosts.
  X DevAPI connection example: "server=/path/to/socket;protocol=unix;user=root;password=mypass;ssl-mode=none"
  Classic MySQL connection example: "server=/path/to/socket;protocol=unix;user=root;password=mypass"

* Connections to the MySQL server now can be made using accounts that authenticate with the sha256_password plugin. For more information, see SHA-256 Pluggable Authentication (http://dev.mysql.com/doc/refman/5.7/en/sha256-pluggable-authentication.html). In addition, a new connection option was added to extend authentication support for connections made using the X Protocol with either basic or URI connection strings and as an anonymous type. The auth connection option enables the MYSQL41, PLAIN, or EXTERNAL authentication mechanism if supported by the server. For a description of the auth option, see Connector/Net Connection-String Options Reference (http://dev.mysql.com/doc/connector-net/en/connector-net-connection-options.html).

Bugs Fixed

* Assemblies within NuGet packages were not fully signed. (Bug #26739307)

* EF Core: Some methods in the DbContext class were not supported for use with asynchronous operations. (Bug #26448321, Bug #84814)

* X DevAPI: Priority assignment when connecting to the server in client-side failover situations was not supported in earlier versions of the connector by design. Priority-based failover is now available. (Bug #26198794)

* EF Core: When attempting to commit a transaction in which the FirstOrDefaultAsync method was called, the connector returned "System.InvalidOperationException: Connection must be valid and open to commit transaction." instead of committing the transaction. (Bug #26026972, Bug #86199)

* X DevAPI: Passing in a value of 0 or lower to the Limit method now produces a more relevant error message indicating that the argument is out of range. (Bug #24384660)

* X DevAPI: Passing in the NULL value as a parameter to the DbDoc.SetValue method resulted in an exception. This fix ensures that NULL is accepted for this method. (Bug #23542093)

NuGet packages are available at:
https://www.nuget.org/packages/MySql.Data/8.0.9-dmr
https://www.nuget.org/packages/MySql.Web/8.0.9-dmr
https://www.nuget.org/packages/MySql.Data.EntityFrameworkCore/8.0.9-dmr
https://www.nuget.org/packages/MySql.Data.EntityFrameworkCore.Design/8.0.9-dmr

Enjoy and thanks for the support!

On Behalf of the MySQL/ORACLE RE Team
Gipson Pulla

Percona Live Dublin - Event Recap & Our Sessions

Planet MySQL -

Severalnines was pleased to yet again sponsor Percona Live Europe, the Open Source Database Conference which was held this year in Dublin, Ireland.

At the Conference

Severalnines team members flew in from around the world to partner up with our two local Dubliners to demo ClusterControl in the exhibit hall and present three sessions (see below).

On our Twitter feed we live tweeted both of the keynote sessions to help keep those who weren’t able to attend up-to-speed on the latest happenings in the database world.

We were also able to sit down with René Cannaò, creator of ProxySQL, to talk about what’s new with this exciting load balancing technology.

Our Sessions

Members of the Severalnines team presented three technical sessions, all of which were widely attended… some with standing room only!

MySQL Load Balancers - MaxScale, ProxySQL, HAProxy, MySQL Router & nginx - A Close Up Look

Session Details: Load balancing MySQL connections and queries using HAProxy has been popular in the past years. Recently however, we have seen the arrival of MaxScale, MySQL Router, ProxySQL and now also Nginx as a reverse proxy.

For which use cases do you use them and how well do they integrate in your environment? This session aims to give a solid grounding in load balancer technologies for MySQL and MariaDB.

We review the main open-source options available: from application connectors (php-mysqlnd, jdbc), TCP reverse proxies (HAproxy, Keepalived, Nginx) and SQL-aware load balancers (MaxScale, ProxySQL, MySQL Router).

We also look into the best practices for backend health checks to ensure load balanced connections are routed to the correct nodes in several MySQL clustering topologies. You'll gain a good understanding of how the different options compare, and enough knowledge to decide which ones to explore further.

MySQL on Docker - Containerizing the Dolphin

Session Details: Docker is becoming more mainstream and adopted by users as a method to package and deploy self-sufficient applications in primarily stateless Linux containers. It's a great toolset on top of OS-level virtualization (LXC, a.k.a containers) and plays well in the world of micro services.

However, Docker containers are transient by default. If a container is destroyed, all data created is also lost. For a stateful service like a database, this is a major headache to say the least.

There are a number of ways to provide persistent storage in Docker containers. In this presentation, we will talk about how to set up a persistent data service with Docker that can be torn down and brought up across hosts and containers.

We touch upon orchestration tools, shared volumes, data-only-containers, security and configuration management, multi-host networking, service discovery and implications on monitoring when we move from host-centric to role-centric services with shorter life cycles.

Automating and Managing MongoDB: An Analysis of Ops Manager vs. ClusterControl

Related resources: Follow Severalnines on Twitter, Database Load Balancing with ProxySQL - Tutorial, MySQL on Docker: Running Galera Cluster in Production with ClusterControl on Kubernetes

Session Details: In any busy operations environment, there are countless tasks to perform - some monthly, or weekly, some daily or more frequently, and some on an ad-hoc basis. And automation is key to performing fast, efficient and consistently repeatable software deployments and recovery.

There are many generic tools available, both commercial and open source, to aid with the automation of operational tasks. Some of these tools are even deployed in the database world. However, there are also a small number of specialist, domain-specific automation tools available, and we are going to compare two of these products: MongoDB’s own Ops Manager, and ClusterControl from Severalnines.

We cover installation and maintenance, complexity of architecture, options for redundancy, comparative functionality, monitoring, dashboards, alerting, backup and restore, automated deployment of advanced configurations, and upgrading existing deployments.

Thanks to the Percona Team for organising another great conference and to everyone who participated from near and afar! We hope to see you again soon!

Tags: percona live, percona, MySQL, docker, proxysql, MongoDB, clustercontrol, presentation, event, conference

MySQL Connector/Python 8.0.5 DMR has been released

Planet MySQL -

Dear MySQL users,

MySQL Connector/Python 8.0.5 dmr is the sixth development release of the
MySQL Connector/Python 8.0 series. This series adds support for the new X
DevAPI. The X DevAPI enables application developers to write code that
combines the strengths of the relational and document models using a
modern, NoSQL-like syntax that does not assume previous experience
writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/.

For more information about how the X DevAPI is implemented in
MySQL Connector/Python, and its usage, see
http://dev.mysql.com/doc/dev/connector-python

Please note that the X DevAPI requires MySQL Server version 5.7.12 or
higher with the X Plugin enabled. For general documentation about how to
get started using MySQL as a document store, see

http://dev.mysql.com/doc/refman/5.7/en/document-store.html

To download MySQL Connector/Python 8.0.5 dmr, see the “Development Releases”
tab at
http://dev.mysql.com/downloads/connector/python/

Enjoy!

Changes in MySQL Connector/Python 8.0.5 (2017-09-28, Development Milestone)

Packaging Notes

* MySQL Connector/Python packages are now available in two formats: pure Python packages that contain only Python files, and packages that contain the Python files plus the C Extension and C Protobuf extension. Exception platforms are Solaris, macOS, and Windows, for which packages containing the Python files and C extensions are available but not pure Python packages. (Bug #26648417)

Functionality Added or Changed

* MySQL Connector/Python now supports connections to MySQL accounts that use the caching_sha2_password authentication plugin (see SHA-2 Pluggable Authentication, http://dev.mysql.com/doc/refman/8.0/en/caching-sha2-pluggable-authentication.html). This requires MySQL server version 8.0.3 or higher. It also requires use of a secure connection, because Connector/Python does not support RSA encryption for password exchange.

* MySQL Connector/Python now supports an auth connection option to specify the authentication mechanism. Permitted values are plain, mysql41, and external. The option name and value are not case-sensitive. If the authentication mechanism is not specified, it defaults to plain for secure (TLS) or Unix socket connections, or mysql41 for insecure connections.

* MySQL Connector/Python now supports a pure Python implementation of Protobuf. Consequently, the Protobuf C extension has become optional: Connector/Python uses the Python implementation if the C extension is not available. The Protobuf Python package is required if you do not wish to use the C extension. The version requirements are Protobuf C++ 2.6.0 or higher and Protobuf Python 3.0.0 or higher.

* A mysqlx.sessions variable is now exposed to scripts; it can be used for session-related tasks such as saving or loading session configuration information.

* These methods have been added for Collection: add_or_replace_one(), get_one(), replace_one(), and remove_one().

* These methods have been added for FindStatement and SelectStatement, to enable shared and exclusive locks to be acquired: lock_shared() and lock_exclusive().

* There is support for new forms of comparisons that use the IN operator:

  item IN list
  item IN document path
  dict IN dict

  The left-hand-side value must be castable to the JSON type.
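The new IN operator forms are easiest to see against a concrete document. The sketch below is illustrative only: it is plain JavaScript mimicking the containment semantics (in the spirit of SQL's JSON_CONTAINS()), not the connector API itself, and the document contents are made up.

```javascript
// Illustration only: what a comparison like "'car' IN $.toys" checks
// against a document. Plain JavaScript, not the Connector/Python API.
const doc = JSON.parse('{"name": "Sakila", "toys": ["car", "ball"]}');

// Rough analogue of SQL's JSON_CONTAINS(container, item).
function jsonContains(container, item) {
  if (Array.isArray(container)) {
    // item IN list: true if any element contains the item
    return container.some(el => jsonContains(el, item));
  }
  if (container !== null && typeof container === 'object') {
    // dict IN dict: every key/value of item must appear in container
    return Object.entries(item).every(
      ([k, v]) => k in container && jsonContains(container[k], v)
    );
  }
  // scalar comparison
  return container === item;
}

console.log(jsonContains(doc.toys, 'car'));  // true
console.log(jsonContains(doc.toys, 'bike')); // false
```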

Documentation
——————–

Online:
http://dev.mysql.com/doc/connector-python/en/index.html

The source distribution includes the manual in various formats under
the docs/ folder.

Reporting Bugs
——————–

We welcome and appreciate your feedback and bug reports:
http://bugs.mysql.com/

On Behalf of the MySQL/Oracle Release Engineering Team,
Prashant Tekriwal

MySQL Connector/C++ 8.0.6-dmr has been released

Planet MySQL -

Dear MySQL users,

MySQL Connector/C++ 8.0.6-dmr is the next development milestone release
(dmr) of the MySQL Connector/C++ 8.0 series.

Connector/C++ 8.0 can be used to access MySQL implementing Document
Store or in a traditional way, using SQL queries. It allows writing
both C++ applications using X DevAPI or plain C applications using
XAPI.

To learn more about how to write applications using X DevAPI, see
“X DevAPI User Guide”

https://dev.mysql.com/doc/x-devapi-userguide/en/

and “X DevAPI Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/devapi_ref.html

For more information about using plain C XAPI see “XAPI Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/xapi_ref.html

For generic information on using Connector/C++ 8.0, see

https://dev.mysql.com/doc/dev/connector-cpp/

Note

Connector/C++ 8.0 requires MySQL Server version 5.7.12 or higher with
X Plugin enabled. For general documentation about how to get started
using MySQL as a document store, see “Using MySQL as a Document Store”

https://dev.mysql.com/doc/refman/5.7/en/document-store.html

To download MySQL Connector/C++ 8.0.6-dmr, see the “Development Releases”
tab at

https://dev.mysql.com/downloads/connector/cpp/

Changes in MySQL Connector/C++ 8.0.6-dmr (2017-09-28, Development Milestone)

Functionality Added or Changed

* A session can now acquire a lock for documents or rows returned by find or select statements, to prevent the returned values from being changed by other sessions while the lock is held (provided that appropriate isolation levels are used). Locks can be requested several times for a given find or select statement; only the final request is acted upon. An acquired lock is held until the end of the current transaction.

  For X DevAPI, CollectionFind and TableSelect implement .lockExclusive() and .lockShared() methods, which request exclusive or shared locks, respectively, on returned documents or rows. These methods can be called after .bind() and before the final .execute().

  For XAPI, the new mysqlx_set_locking(stmt, lock) function can be called to request exclusive or shared locks on returned documents or rows, or to release locks. The lock parameter can be ROW_LOCK_EXCLUSIVE, ROW_LOCK_SHARED, or ROW_LOCK_NONE. The first two values specify a type of lock to be acquired; ROW_LOCK_NONE removes any row-locking request from the statement.

* For X DevAPI, a new auth option can be specified in connection strings or URIs to indicate the authentication mechanism. Permitted values are PLAIN and MYSQL41. The option name and value are not case-sensitive. The SessionSettings::Options object supports a new AUTH enumeration with the same permitted values.

  For XAPI, a new auth setting can be specified in connection strings or URIs to indicate the authentication mechanism. Permitted values are PLAIN and MYSQL41. The option name and value are not case-sensitive. A new MYSQLX_OPT_AUTH constant is recognized by the mysqlx_options_set() function, with permitted values MYSQLX_AUTH_PLAIN and MYSQLX_AUTH_MYSQL41.

  If the authentication mechanism is not specified, it defaults to PLAIN for secure (TLS) connections, or MYSQL41 for insecure connections. For Unix socket connections, the default is PLAIN.

* These drop API changes were made:

  + Session::dropTable(schema, table) and Session::dropCollection(schema, coll) were replaced by Schema::dropTable(table) and Schema::dropCollection(coll), respectively.
  + Schema::dropView() is now a direct-execute method returning void rather than Executable.
  + All dropXXX() methods succeed if the dropped objects do not exist.

* The following Collection methods were added: addOrReplaceOne(), getOne(), replaceOne(), and removeOne(). The addOrReplaceOne() and replaceOne() methods work only with MySQL 8.0.3 and higher servers. For older servers, they report an error.

* Boolean expressions used in queries and statements now support a variant of the IN operator for which the right-hand-side operand is any expression that evaluates to an array or document.

  X DevAPI example:

    coll.find("'car' IN $.toys").execute();

  XAPI example:

    res = mysqlx_collection_find(coll, "'car' IN $.toys");

  In this form, the IN operator is equivalent to the JSON_CONTAINS() SQL function.

* On Unix and Unix-like systems, Unix domain socket files are now supported as a connection transport for X DevAPI or XAPI connections. The socket file can be given in a connection string or in the session creation options.

  X DevAPI examples:

    XSession sess("mysqlx://user:password@(/path/to/mysql.sock)/schema");

    XSession sess({ SessionSettings::USER, "user",
                    SessionSettings::PWD, "password",
                    SessionSettings::SOCKET, "/path/to/mysql.sock",
                    SessionSettings::DB, "schema" });

  XAPI examples:

    mysqlx_session_t *sess = mysqlx_get_session_from_url(
        "mysqlx://user:password@(/path/to/mysql.sock)/schema",
        err_buf, &err_code);

    mysqlx_opt_type_t *sess_opt = mysqlx_session_option_new();
    mysqlx_session_option_set(sess_opt,
        MYSQLX_OPT_SOCKET, "/path/to/mysql.sock",
        MYSQLX_OPT_USER, "user",
        MYSQLX_OPT_PWD, "password",
        MYSQLX_OPT_DB, "schema");
    mysqlx_session_t *sess = mysqlx_get_session_from_options(
        sess_opt, err_buf, &err_code);

Bugs Fixed

* Creating a TLS session with only the ssl-ca option specified could succeed, although it should fail if ssl-mode is not also specified. (Bug #26226502)

* mysqlx_get_node_session_from_options() could succeed even when a preceding mysqlx_session_option_set() failed. (Bug #26188740)

On Behalf of the MySQL/Oracle Release Engineering Team,
Daniel Horecki

MariaDB 10.2.9, MariaDB 10.1.28 and MariaDB Connector/J Releases now available

Planet MySQL -

The MariaDB project is pleased to announce the availability of MariaDB 10.2.9, MariaDB 10.1.28, MariaDB Connector/J 2.1.2 and MariaDB Connector/J 1.6.5. See the release notes and changelogs for details.

Download MariaDB 10.2.9: Release Notes, Changelog, What is MariaDB 10.2?
MariaDB APT and YUM Repository Configuration Generator
Download MariaDB 10.1.28: Release Notes, Changelog, What is MariaDB […]

The post MariaDB 10.2.9, MariaDB 10.1.28 and MariaDB Connector/J Releases now available appeared first on MariaDB.org.

MariaDB Server 10.2.9, 10.1.28, and updated Connector/Java now available

Planet MySQL -

dbart, Wed, 09/27/2017 - 21:08

The MariaDB project is pleased to announce the immediate availability of MariaDB Server 10.2.9, MariaDB Server 10.1.28, MariaDB Connector/J 2.1.2, and MariaDB Connector/J 1.6.5. See the release notes and changelogs for details and visit mariadb.com/downloads to download.

Download MariaDB Server 10.2.9

Release Notes Changelog What is MariaDB Server 10.2?

Download MariaDB Server 10.1.28

Release Notes Changelog What is MariaDB Server 10.1?

Download MariaDB Connector/J 2.1.2

Release Notes Changelog About MariaDB Connector/J

Download MariaDB Connector/J 1.6.5

Release Notes Changelog About MariaDB Connector/J



Percona Live Europe Session Interview: Spatial Data in MySQL 8.0 with Norvald Ryeng (Oracle)

Planet MySQL -

Day one of the Percona Live Europe Conference was a huge hit. People spoke on many different open source database topics, and the talks were well-attended.

One such talk I got to sit in was on Spatial Data in MySQL 8.0, given by Norvald Ryeng of Oracle.

MySQL 8.0 is still in development, but we already see a lot of improvement in GIS support. The latest development release comes with support for around 5000 different spatial reference systems, improved standard compliance and a lot of new functionality. How does it all work, and how can it be used to build applications? 

This talk started with the basics of GIS and spatial data in MySQL: geometries, data types, functions, storage and indexes. Then Norvald walked through a demo of how all the parts fit together to support a GIS web application. We also got a sneak peek into the future, including what to do right now to prepare for the upgrade to MySQL 8.0.

Whether you’re currently storing or planning to store spatial data in MySQL, this talk was for you. It covered the topics in a way accessible to both novices and more advanced GIS users.

After the talk, I had a chance to interview Norvald, and here is the video:

Percona Live Europe 2017 Keynotes Day 2

Planet MySQL -

Black coffee was flowing this morning for the day two Percona Live Europe 2017 keynotes, after many of the delegates had spent a good few hours the night before enjoying Irish hospitality at the Community Dinner.

So today Laurie Coffin, Chief Marketing Officer for Percona, introduced proceedings for day two and later also took to the stage for a Q&A session with authors Laine Campbell and Charity Majors. More on that later…

State of the Dolphin

Geir Høydalsvik, Development Director for MySQL at Oracle, delivers his keynote “State of the Dolphin”

First up Geir Høydalsvik, Development Director for MySQL at Oracle, delivered juicy tidbits of what to expect in MySQL 8.0 (beyond what you see in the current Developer Milestone Releases). He gave a comprehensive overview of plans and current developments to what had become an almost full house – despite the night before’s revelries.

Many Faces of Continuent Tungsten

M C Brown, VP Products at Continuent, delivers his keynote “Many Faces of Continuent Tungsten”

MC Brown brought the conference up to date with the latest Tungsten developments, as well as some thoughts for the future. He described the wide-ranging deployments of Tungsten out in the field and his thoughts on how it might look going forward.

Database Reliability Engineering

Laine Campbell, Charity Majors are quizzed by Laurie Coffin

Laurie Coffin took to the stage to quiz Laine Campbell, Senior Director Production Engineering at OpsArtisan, and Charity Majors, CEO of Honeycomb, about the newly released O’Reilly title: Database Reliability Engineering. The book focuses on designing and operating resilient database systems and uses open-source engines such as MySQL, PostgreSQL, MongoDB, and Cassandra as examples throughout.

Database Performance in High Traffic Environments

Pavel Genov, Head of Software Development at Pepper, delivers his keynote “Database Performance in High Traffic Environments”

Pepper.com is purposely different than other platforms that list daily deals. Around the clock, the community seeks and finds the best offers in fashion, electronics, traveling and much more. Pavel described how Pepper optimizes their database performance to make sure their web applications remain responsive and meet users’ expectations.

Deploying AdonisJS Apps to Heroku

Planet MySQL -

The question on how to deploy AdonisJS apps keeps popping out within the AdonisJS community. The thing is that most people tend to forget that AdonisJS apps are Node.js applications and can be deployed the same way you would deploy any Node.js application.

In my last post, we built a task list app with AdonisJS. Today, I'll be showing you how to deploy the task list app to Heroku.

Create a Git Repository

If you followed along with the last tutorial, we first need to set up a git repository, since we’ll be using git for deployment. Head over to GitHub/GitLab/Bitbucket and create a new repository.

Then we initialize a git repository in the adonis-tasks directory:

cd adonis-tasks
git init
git remote add origin git@github.com:YOUR_USERNAME/adonis-tasks.git

Create a Heroku App

Login to your Heroku dashboard or signup if you don't have an account already. Then create a new app.

I named mine adonis-tasks (which might no longer be available for you).

Once the app is created, we'll be redirected to the Deploy page on the app dashboard where we can see different ways with which we can deploy our app. We'll be deploying using Heroku Git with the use of the Heroku CLI. So, we need to install the Heroku CLI on our computer. Go through the documentation on how to install the CLI depending on your operating system.

Once the CLI is installed, we need to login to Heroku with our account details.

heroku login

Next, we add Heroku remote to our task list app repository.

heroku git:remote -a adonis-tasks

Setting Up MySQL Database

The task list app uses MySQL as its database, which Heroku does not support out of the box. We therefore need a way to use MySQL on Heroku. Luckily for us, there is support for MySQL through what Heroku calls add-ons. There are numerous add-ons that add MySQL to an app, but for this tutorial we’ll be using ClearDB MySQL. So, we need to add ClearDB MySQL to our Heroku app. To do this, go to the Overview page of the Heroku app, where we’ll see an Installed add-ons section showing the add-ons we have added to the app (it will be empty for now since we haven’t added any).

Click on Configure Add-ons which will take us to a page where we can configure or add a new add-on. Start typing ClearDB and it will show up in the selection option, which we can then click to select.

Upon selecting ClearDB, a modal will appear for us to provision ClearDB to our app. At this point, we can choose a plan we want, but we'll be going with the free plan for this tutorial.

Clicking on Provision will add ClearDB to our app, and it will also add CLEARDB_DATABASE_URL to our app’s config variables.

Let’s add another config variable DB_CONNECTION which will tell AdonisJS the database connection our app is using. Go to Settings then click on Reveal Config Vars and add DB_CONNECTION as key and mysql as value.

Next, we need to modify our app to use ClearDB when deployed. The CLEARDB_DATABASE_URL is a URL string that contains our database details (host, username, password and database name), so we need to parse this URL and extract the individual detail. Let’s install an npm package to help us with that:

npm install url-parse

With that installed, open config/database.js and add the snippet below to the top of it just after where we declare Helpers:

// config/database.js

const Url = require('url-parse')
const CLEARDB_DATABASE_URL = new Url(Env.get('CLEARDB_DATABASE_URL'))

Still in config/database.js, replace mysql’s connection object with:

// config/database.js

connection: {
  host: Env.get('DB_HOST', CLEARDB_DATABASE_URL.host),
  port: Env.get('DB_PORT', ''),
  user: Env.get('DB_USER', CLEARDB_DATABASE_URL.username),
  password: Env.get('DB_PASSWORD', CLEARDB_DATABASE_URL.password),
  database: Env.get('DB_DATABASE', CLEARDB_DATABASE_URL.pathname.substr(1))
}

Depending on the environment our app is running in, the appropriate database settings will be used. The app first looks for the DB_ variables (which indicate development) and uses them if found; otherwise it falls back to the ClearDB settings (which indicate production).
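To make this concrete, here is a quick sketch of how a ClearDB-style URL decomposes into the individual connection fields. It uses Node’s built-in WHATWG URL class rather than the url-parse package (the fields line up the same way), and the host and credentials below are made up for illustration, not real values.

```javascript
// Sketch: decomposing a ClearDB-style database URL into connection fields.
// Uses Node's built-in URL class; the hostname/credentials are fictional.
const exampleUrl =
  'mysql://b1234abcd:5678efgh@us-cdbr-east.cleardb.com/heroku_db123?reconnect=true';
const parsed = new URL(exampleUrl);

const connection = {
  host: parsed.hostname,               // 'us-cdbr-east.cleardb.com'
  user: parsed.username,               // 'b1234abcd'
  password: parsed.password,           // '5678efgh'
  database: parsed.pathname.substr(1)  // 'heroku_db123' (leading '/' removed)
};
console.log(connection);
```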

Specifying Node.js Version

By default, Heroku will use the current stable version of Node.js (v6.11.3 as of this writing). AdonisJS v4 (which our app uses) requires Node.js v8.0 or greater, so we need to tell Heroku to use a specific Node.js version. We can do this by adding the snippet below to our app’s package.json:

// package.json

"engines": {
  "node": "8.5.0"
}

This forces Heroku to use Node.js v8.5.0 (the current version as of this writing).

Create a Procfile

A Procfile is used to explicitly declare what command should be executed to start your app. We can also add other commands we want executed, for instance a release phase, which enables us to run tasks before a new release of our app is deployed to production. Create a file named Procfile (without an extension) directly in the root of our app (that is, the adonis-tasks directory). Note that the P is uppercase. Add the code below to it:

release: ENV_SILENT=true node ace migration:run --force
web: ENV_SILENT=true npm start

Instead of running our app migrations manually with heroku run, we use the release phase to run them before the app is deployed to production. This is helpful because it is easy to forget to run migrations manually after deploying. We use the --force flag because we are running the migrations in production. The web command simply starts the app.

Notice that we prefixed both commands with ENV_SILENT=true. This prevents Env provider errors, because AdonisJS by default expects a .env file from which it pulls some config settings.

Now let’s commit the changes made to the app and push them to the remote:

git add --all
git commit -m "initial commit"
git push -u origin master

To deploy our application, we simply push to Heroku:

git push heroku master

This will start the deployment by installing Node.js and the other necessary dependencies. Once the deployment is done, we can use the Heroku CLI to open the application.

heroku open

This will open our app in a new browser window.

There you have it, our AdonisJS application is now running on https://adonis-tasks.herokuapp.com.

Conclusion

That's it guys, you have seen how easy it is to deploy an AdonisJS application to Heroku. Kindly drop your questions and comments below. In my next post, I will show you how to deploy an AdonisJS application to a VPS.
