Planet MySQL

One Week Until Percona Live Open Source Database Conference Europe 2018

It’s almost here! One week until the Percona Live Europe Open Source Database Conference 2018 in Frankfurt, Germany! Are you ready?

This year’s theme is “Connect. Accelerate. Innovate.” We want to live these words by making sure that the conference allows you to connect with others in the open source community, accelerate your ideas and solutions, and innovate when you get back to your projects and companies.

  • There is one day of tutorials (Monday) and two days of sessions (Tuesday and Wednesday). We have multiple tracks: MySQL 8.0, Using MySQL, MongoDB, PostgreSQL, Cloud, Database Security and Compliance, Monitoring and Ops, and Containers and Emerging Technologies. This year also includes a specialized “Business Track” aimed at how open source can solve critical enterprise issues.
  • Each of the session days begins with excellent keynote presentations in the main room by well-known people and players in the open source community. Don’t miss them!
  • Don’t forget to attend our Welcome Reception on Monday.
  • Want to meet with our Product Managers? Join them for Lunch on Wednesday, November 7, where you’ll have a chance to participate in the development of Percona Software!
  • On our community blog, we’ve been highlighting some of the sessions that will be occurring during the conference. You can check them out here.

The entire conference schedule is up and available here.

Percona Live Europe provides the community with an opportunity to discover and discuss the latest open source trends, technologies and innovations. The conference includes the best and brightest innovators and influencers in the open source database industry.

Our daily sessions, day-one tutorials, demonstrations, keynotes and events provide access to what is happening NOW in the world of open source databases. At the conference, you can mingle with all levels of the database community: DBAs, developers, C-level executives and the latest database technology trend-setters.

Network with peers and technology professionals and unite the open source database community! Share knowledge, experiences and use cases! Learn about how open source database technology can power your applications, improve your websites and solve your critical database issues.

Come to the conference.

Don’t miss out, buy your tickets here! Connect. Accelerate. Innovate.

With a lot of focus on the benefits of open source over proprietary models of software delivery, you surely can’t afford to miss this opportunity to connect with leading figures of the open source database world. On Monday, November 5 you can opt to accelerate your knowledge with our in-depth tutorials, or choose to attend our business track geared towards open source innovation and adoption.

Tuesday and Wednesday’s sessions across eight different tracks provide something for all levels of experience, and address a range of business challenges. See the full schedule.

Quickly Load JSON Data into The MySQL Document Store with util.importJson

With the new MySQL Shell 8.0.13 comes a way to load JSON data sets very quickly. In a past blog and in several talks I have shown how to use the shell in Python mode to pull in the data. But now there is a much faster way to load JSON.

Load JSON Quickly

Start a copy of the new shell with mysqlsh. Connect to your favorite server with \c dave@localhost and then create a new schema with session.createSchema('bulk'). Then point your session to the schema just created with \use bulk. Version 8.0.13 has a new utility function named importJson that does the work. The first argument is the path to the data set (here the MongoDB restaurants collection) and the second lets you designate the schema and collection where you wish to have the data stored. In this example the data set was in the downloads directory of my laptop and I wanted to put it in the newly created 'bulk' schema in a collection named 'restaurants'.

An example of using util.importJson to quickly load JSON data into the MySQL Document Store: it took just over 2 seconds to read in over 25,000 records, not bad.
And a quick check of the data shows that it loaded successfully!
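The whole flow can be sketched as a shell session (the file path and account here are illustrative, and util.importJson requires a session opened over the X Protocol):

```
$ mysqlsh
MySQL JS> \c dave@localhost
MySQL JS> session.createSchema('bulk')
MySQL JS> \use bulk
MySQL JS> util.importJson('/home/dave/Downloads/restaurants.json',
                          {schema: 'bulk', collection: 'restaurants'})
```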


2018-11-15: Announcing: Scaling MySQL with TiDB, Vitess and MySQL Cluster at Madrid MySQL Users Group

[ English – Spanish text below ] We’re pleased to announce the next Madrid MySQL Users Group meetup, which will take place on the 15th of November at 19:00.  Sign up details can be found here.  There’ll also be a similar meetup in Amsterdam on the 12th of November hosted by a colleague. Details here. … Continue reading 2018-11-15: Announcing: Scaling MySQL with TiDB, Vitess and MySQL Cluster at Madrid MySQL Users Group

Fun with Bugs #71 - On Some Public Bugs Fixed in MySQL 5.7.24

Oracle released many new MySQL versions back on Monday, but I had no time during this very busy week to check anything related (besides the fact that MySQL 8.0.13 can be compiled from source on my Fedora 27 box). I am sure you've read a lot about MySQL 8.0.13 elsewhere already; even the patches contributed by the Community are already presented in a separate post by Jesper Krogh.

I am still mostly interested in MySQL 5.7. So, here is my typical quick review of some selected bugs reported in public by MySQL Community users and fixed in MySQL 5.7.24.

My wife noticed this nice spider in the garden and reported it to me via this photo. A spider is formally not a bug, while in this post I discuss pure bugs... Let me start with fixes in Performance Schema (which is supposed to be mostly bug free):
  • Bug #90264 - "Some file operations in mf_iocache2.c are not instrumented". This bug, reported by Yura Sorokin from Percona, who also contributed patches, is fixed in all recent Oracle releases, from 5.5.62 to 8.0.13.
  • Bug #77519 - "Reported location of Innodb Merge Temp File is wrong". This bug was reported by Daniël van Eeden back in 2015. Let's hope files are properly reported in @@tmpdir now.
  • Bug #80777 - "order by on LAST_SEEN_TRANSACTION results in empty set". Yet another bug report from Daniël van Eeden got fixed.
Let's continue with InnoDB bugs:
  • Bug #91032 - "InnoDB 5.7 Primary key scan lack data". A really weird bug reported by Raolh Rao back in May.
  • Bug #95045 - The release notes refer to a public bug that does not exist! So, we have a bug in them. Related text:
    "It was possible to perform FLUSH TABLES FOR EXPORT on a partitioned table created with innodb_file_per_table=1 after discarding its tablespace. Attempting to do so now raises ER_TABLESPACE_DISCARDED. (Bug #95045, Bug #27903881)" It seems to refer to Bug #80669 - "Failing assert: fil_space_get(table->space) != __null in row0quiesce.cc line 724", reported by Ramesh Sivaraman from Percona. In the comment from Roel there we see that the actual bug was Bug #90545, which is, surprise, still private!
    Recently I found out (here) that some community members think that keeping crashing bugs private after the fixed version is released is still better than publishing test cases for them before all affected versions are fixed... I am not so sure.
What about replication (group replication aside, I have enough Galera problems to deal with in my life to even think about it)? There are some interesting bug fixes:
  • Bug #90551 - "[MySQL 8.0 GA Debug Build] Assertion `!thd->has_gtid_consistency_violation'". Good to know that Oracle engineers still pay attention to debug assertions, as in this report (with a nice simple test case involving XA transactions) from Roel Van de Paar of Percona.
  • Bug #89370 - "semi-sync replication doesn't work for minutes after restart replication". This bug was reported by Yan Huang, who had contributed a patch for it.
  • Bug #89143 - "Commit order deadlock + retry logic is not considering trx error cases". Nice bug report from Jean-François Gagné.
  • Bug #83232 - "replication breaks after bug #74145 happens in master". FLUSH SLOW LOGS that failed on master (because of file permission problem, for example) was still written to the binary log. Nice finding by Jericho Rivera from Percona.
There are interesting bugs fixed in other categories as well. For example:
  • Bug #91914 - "Mysql 5.7.23 cmake fail with 'Unknown CMake command "ADD_COMPILE_FLAGS".'" Now thanks to this report by Tomasz Kłoczko one can build MySQL 5.7 with gcc 8.
  • Bug #91080 - "sql_safe_updates behaves inconsistently between delete and select". The fix is described as follows:
    "For DELETE and UPDATE that produced an error due to sql_safe_updates being enabled, the error message was insufficiently informative. The message now indicates that data truncation occurred or the range_optimizer_max_mem_size value was exceeded.

    Additionally: (1) Using EXPLAIN for such statements does not produce an error, enabling users to see from EXPLAIN output why an index is not used; (2) For multiple-table deletes and updates, an error is produced with safe updates enabled only if the target table or tables use a table scan." I am NOT sure this is the fix the bug reporter, Nakoa Mccullough, was expecting. He asked for consistency with SELECT (which works). The bug is still closed :(
  • Bug #90624 - "Restore dump created with 5.7.22 on 8.0.11". It seems Emmanuel CARVIN asked for a working way to upgrade from 5.7.x to 8.0.x. The last comment seems to state that an upgrade from 5.7.24 to 8.0.13 is still not possible. I have not checked this.
  • Bug #90505 is private. Release notes say:
    "If flushing the error log failed due to a file permission error, the flush operation did not complete. (Bug #27891472, Bug #90505) References: This issue is a regression of: Bug #26447825" OK, we have a private regression bug, fixed. Nice.
  • Bug #90266 - "No warning when truncating a string with data loss". This happened when making BLOB/TEXT columns smaller. Nice finding by Carlos Tutte.
  • Bug #89537 - "Regression in FEDERATED storage engine after GCC 7 fixes". Yet another bug report with a patch contributed by Yura Sorokin.
  • Bug #88670 - "Subquery incorrectly shows duplicate values on subqueries.". A simple wrong-results bug in the optimizer affecting all versions starting from 5.6. Fixed now thanks to Mark El-Wakil.
That's all bugs I wanted to mention today. To summarize my feelings after reading the release notes:
  1. I'd surely consider an upgrade to 5.7.24 in any environment where replication is used. Some InnoDB fixes also matter.
  2. We still see not only private bugs (with questionable security impact) mentioned in the release notes, but this time also a typo in bug number that makes it harder to find out what was really fixed and why.
  3. I think it would be fair for Oracle to mention Percona as a major contributor to MySQL 5.7, in the same way that Facebook is mentioned in many places with regard to 8.0.13.
  4. It's good to know that some debug assertions related bugs are still fixed. More on this later...

Group Replication: coping with unreliable failure detection

Failure detection is a cornerstone of distributed systems as it determines which components are working properly or not, allowing it to tackle both network and host instabilities and failures. Like many other distributed systems, Group Replication (GR) requires a majority of correctly operating group members to work properly.…

Percona Live Europe Presents: pg_chameleon MySQL to PostgreSQL Replica Made Easy

What excites me is the possibility that this tool gives to other people. Also, the challenges I’ve faced and the new ideas for future releases are always a source of interest that keeps me focused on the project. So I’m looking forward to sharing this with the conference delegates.

pg_chameleon can achieve two tasks in a very simple way. It can set up a permanent replica between MySQL and PostgreSQL, giving the freedom to choose the right tool for the right job, or it can migrate multiple schemas to a PostgreSQL database.

Anybody who wants to extend their database experience, taking the best of the two worlds, or who is seeking a simple way to migrate data with minimal downtime, will find the presentation interesting.

What else am I looking forward to at Percona Live Europe?

I’m looking forward to Bruce Momjian’s Explaining the Postgres Query Optimizer, Bo Wang’s How we use and improve Percona XtraBackup at Alibaba Cloud, and Federico Razzoli’s MariaDB system-versioned tables.

Read the Percona blog about pg_chameleon 


The post Percona Live Europe Presents: pg_chameleon MySQL to PostgreSQL Replica Made Easy appeared first on Percona Community Blog.

Announcing Keynotes for Percona Live Europe!

There’s just over one week to go so it’s time to announce the keynote addresses for Percona Live Europe 2018! We’re excited to share our lineup of conference keynotes, featuring talks from Paddy Power Betfair, Amazon Web Services, Facebook, PingCap and more!

The speakers will address the current status of the key open source database projects MySQL®, PostgreSQL, MongoDB®, and MariaDB®. They’ll share how organizations are shifting from a single database to a polyglot strategy, thereby avoiding vendor lock-in and enabling business growth.

Without further ado, here’s the full keynote line-up for 2018!

Tuesday, November 6

Maximizing the Power and Value of Open Source Databases

Open source database adoption continues to grow in enterprise organizations, as companies look to scale for growth, maintain performance, keep up with changing technologies, control risks and contain costs. In today’s environment, a single database technology or platform is no longer an option, as organizations shift to a best-of-breed, polyglot strategy to avoid vendor lock-in, increase agility and enable business growth. Percona’s CEO Peter Zaitsev shares his perspective.

Following this keynote, there will be a round of lightning talks featuring the latest releases from PostgreSQL, MongoDB and MariaDB.

Technology Lightning Talks

PostgreSQL 11

PostgreSQL benefits from over 20 years of open source development, and has become the preferred open source relational database for developers. PostgreSQL 11 was released on October 18. It provides users with improvements to the overall performance of the database system, with specific enhancements associated with very large databases and high computational workloads.

MongoDB 4.0

Do you love MongoDB? With version 4.0 you have a reason to love it even more! MongoDB 4.0 adds support for multi-document ACID transactions, combining the document model with ACID guarantees. Through snapshot isolation, transactions provide a consistent view of data and enforce all-or-nothing execution to maintain data integrity. And not only transactions – MongoDB 4.0 has more exciting features like non-blocking secondary reads, improved sharding, security improvements, and more.

MariaDB 10.3

MariaDB benefits from a thriving community of contributors. The latest release, MariaDB 10.3, provides several new features not found anywhere else, as well as back-ported and reimplemented features from MySQL.

Paddy Power Betfair, Percona, and MySQL

This keynote highlights the collaborative journey Paddy Power Betfair and Percona have taken through the adoption of MySQL within the PPB enterprise. The keynote focuses on how Percona has assisted PPB in adopting MySQL, and how PPB has used this partnership to deliver a full DBaaS for a MySQL solution on OpenStack.

Wednesday, November 7

State of the Dolphin

Geir Høydalsvik (Oracle) will talk about the focus, strategy, investments, and innovations evolving MySQL to power next-generation web, mobile, cloud, and embedded applications. He will also discuss the latest and most significant MySQL database release in its history, MySQL 8.0.

Amazon Relational Database Services (RDS)

Amazon RDS is a fully managed database service that allows you to launch an optimally configured, secure, and highly available database with just a few clicks. It manages time-consuming database administration tasks, freeing you to focus on your applications and business. This keynote features the latest news and announcements from RDS, including the launches of Aurora Serverless, Parallel Query, Backtrack, RDS MySQL 8.0, PostgreSQL 10.0, Performance Insights, and several other recent innovations.

TiDB 2.1, MySQL Compatibility, and Multi-Cloud Deployment

This keynote talk from PingCap will provide an architectural overview of TiDB, how and why it’s MySQL compatible, the latest features and improvements in TiDB 2.1 GA release, and how its multi-cloud fully-managed solution works.

MyRocks in the Real World

In this keynote, Yoshinori Matsunobu, Facebook, will share interesting lessons learned from Facebook’s production deployment and operations of MyRocks and future MyRocks development roadmaps. Vadim Tkachenko, Percona’s CTO, will discuss MyRocks in Percona Server for MySQL and share performance benchmark results from on-premise and cloud deployments.

Don’t miss out, buy your tickets here! Connect. Accelerate. Innovate.

With a lot of focus on the benefits of open source over proprietary models of software delivery, you surely can’t afford to miss this opportunity to connect with leading figures of the open source database world. On Monday, November 5 you can opt to accelerate your knowledge with our in-depth tutorials, or choose to attend our business track geared towards open source innovation and adoption.

On Tuesday and Wednesday, with sessions across eight different tracks, there’s something for all levels of experience, addressing a range of business challenges. See the full schedule.

With thanks to our sponsors!

Platinum: AWS, Percona
Gold: Facebook
Silver: Altinity, PingCap, Shannon Systems, OlinData, MySQL
Startup: Silver Nines
Community: PostgreSQL, MariaDB Foundation
Contributing: Intel Optane, Idera, Studio3T

Media Sponsors: Datanami, Enterprise Tech, HPC Wire, ODBMS.org, Database Trends and Applications, Packt


Multi-arch Docker Images for MySQL Server

Since the new 8.0.13 release we publish docker images for a new architecture, aarch64, as part of our normal release process. This means that the mysql/mysql-server docker image will work on both amd64 and aarch64 architectures. The newest images are, as usual, available on dockerhub. On amd64 machines:

[user@amd64host]$ docker pull mysql/mysql-server:8.0.13
[user@aarch64host]$ docker run […]

MySQL Server 8.0.13: Thanks for the 10 Facebook and Community Contributions

MySQL 8.0.13 was released this week. There are several exciting changes, including functional indexes and using general expressions as the default value for your columns. So, I recommend that you get MySQL 8.0.13 installed and try out the new features. You can read about the changes in the release notes section of the MySQL documentation and in Geir’s release blog.

However, what I would like to focus on in this blog are the external contributions that have been included in this release. There are five patches contributed by Facebook as well as five contributions from other MySQL users.

The patches contributed by Facebook are:

  • The MySQL client library now returns better error messages for OpenSSL errors. (Bug #27855668, Bug #90418)
  • The optimizer now supports a Skip Scan access method that enables range access to be used in previously inapplicable situations to improve query performance. For more information, see Skip Scan Range Access Method. (Bug #26976512, Bug #88103)
  • A new Performance Schema stage, waiting for handler commit, is available to detect threads going through transaction commit. (Bug #27855592, Bug #90417)
  • For mysqldump --tables output, file names now always include a .txt or .sql suffix, even for file names that already contain a dot. Thanks to Facebook for the contribution. (Bug #28380961, Bug #91745)
  • Failure to create a temporary table during a MyISAM query could cause a server exit. Thanks to Facebook for the patch. (Bug #27724519, Bug #90145)

Other contributions are:

  • Previously, file I/O performed in the I/O cache in the mysys library was not instrumented, affecting in particular file I/O statistics reported by the Performance Schema about the binary log index file. Now, this I/O is instrumented and Performance Schema statistics are accurate. Thanks to Yura Sorokin for the contribution. (Bug #27788907, Bug #90264)
  • Performance for locating user account entries in the in-memory privilege structures has been improved. Thanks to Eric Herman for the contribution. (Bug #27772506, Bug #90244)
  • InnoDB: A helper class was introduced to improve performance associated with reading from secondary keys when there are multiple versions of the same row. Thanks to Domas Mituzas for the contribution. (Bug #25540277, Bug #84958)
  • Replication: When the binlog_group_commit_sync_delay system variable is set to a wait time to delay synchronization of transactions to disk, and the binlog_group_commit_sync_no_delay_count system variable is also set to a number of transactions, the MySQL server exits the wait procedure if the specified number of transactions is reached before the specified wait time is reached. The server manages this process by checking on the transaction count after a delta of one tenth of the time specified by binlog_group_commit_sync_delay has elapsed, then subtracting that interval from the remaining wait time. If rounding during calculation of the delta meant that the wait time was not a multiple of the delta, the final subtraction of the delta from the remaining wait time would cause the value to be negative, and therefore to wrap to the maximum wait time, making the commit hang. The data type for the remaining wait time has now been changed so that the value does not wrap in this situation, and the commit can proceed when the original wait time has elapsed. Thanks to Yan Huang for the contribution. (Bug #28091735, Bug #91055)
  • Replication: In code for replication slave reporting, a rare error situation raised an assertion in debug builds, but in release builds, returned leaving a mutex locked. The mutex is now unlocked before returning in this situation. Thanks to Zsolt Parragi for the patch. (Bug #27448019, Bug #89421)
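The wrap-around in the group-commit delay fix above is easy to model. The sketch below is a hypothetical illustration (not the actual server code): the remaining wait time is reduced by a delta of one tenth of the configured delay, and if that value is stored as an unsigned integer, the final subtraction can wrap to a huge value instead of going negative.

```python
MASK64 = (1 << 64) - 1  # model a 64-bit unsigned integer

def countdown_steps(total_wait, unsigned, max_steps=1000):
    """Count down the remaining wait in steps of delta = total_wait / 10
    (rounded), returning how many iterations the loop ran."""
    delta = max(1, round(total_wait / 10))
    remaining = total_wait
    steps = 0
    while remaining > 0 and steps < max_steps:
        remaining -= delta
        if unsigned:
            remaining &= MASK64  # a negative value wraps to a huge one
        steps += 1
    return steps

# 15 is not a multiple of delta = 2, so the unsigned countdown wraps at
# 1 - 2 and keeps spinning until the safety cap (the "hanging commit");
# the signed countdown exits normally once the wait time has elapsed.
print(countdown_steps(15, unsigned=True))   # hits the max_steps cap: 1000
print(countdown_steps(15, unsigned=False))  # 8
```

The fix in 8.0.13 corresponds to the signed case: with a data type that can go negative, the loop terminates once the original wait time has elapsed.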

Thanks a lot for the contributions.

MySQL Shell: API Command Line Integration for DevOps

MySQL Shell is a command-line shell for MySQL Server that has the capability for
interactive and batch code execution.  It also offers a wealth of APIs that make it easier and more efficient to work with and manage MySQL servers. In 8.0.13, we made an effort to make those APIs easily accessible straight from the command line.…

The Future Of The Application Stack

Containers are eating the world. If you have built and deployed an application in production over the last few years, the odds are that you have deployed your code in containers. You might have created and deployed individual containers (Docker, Linux LXC, etc.) directly in the beginning, but quickly switched over to a container orchestration technology like Kubernetes (K8s) or Swarm when you needed to coordinate multi-node deployments and high availability (HA). In this container-driven world, what will the future of the application stack look like? Let’s start with what we need from this “future” application stack.

What Do We Need From This Future Application Stack?
  1. Cloud Agnostic

    We want to be cloud agnostic with the ability to deploy to any cloud of our choice. Ideally, we can even mix in various providers in a single deployment.

  2. On-Premise

    We need to be able to run our application stack on-premise with our own custom hardware, private cloud, and internally managed datacenters.

  3. Language Agnostic

    It almost goes without saying, but I’ll add it in for completeness. The future open stack needs to support all of the popular programming languages.

The Future Application Stack

The future application stack will be composed of a triad of technologies – K8s, Platform-as-a-Service (PaaS), and Database-as-a-Service (DBaaS):

K8s

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, and you can deploy your applications directly to K8s containers. For customers with existing applications, it makes perfect sense to package and deploy your existing applications to K8s directly.


All of the public cloud vendors provide strong native support for K8s, and you can run your own K8s cluster on-premise as well. Docker has also now jumped on the K8s bandwagon – so you have full flexibility. There’s no doubt that K8s is king of the hill today. In a few years, it might get beat out by another solution, but container orchestration technology is here to stay.

PaaS Solutions

If you’re creating a new application from scratch, Platform-as-a-Service solutions like Cloud Foundry and OpenShift offer compelling advantages that you can leverage to speed up your application development lifecycle. Is a PaaS a must? Definitely not, but I think it’s worth considering if you’re creating a new application.

In some cases, the PaaS solutions might run on K8s or sit beside it – from an application perspective, it doesn’t matter. If your IT organization is deploying the PaaS solution, they might like it if it just runs on their existing K8s clusters. The PaaS solutions mentioned above are also available on all the public clouds. So again, you have the full flexibility you need from your platform.

DBaaS Solutions

Running and managing a production database system is not for the faint of heart. If you think you’re going to install your production database on three containers and it’s going to run happily ever after, you have another thing coming. A Database-as-a-Service solution handles all the operational aspects of your database management, so you’re prepared for any and all of the unexpected.

Your DBaaS may or may not run on K8s. Maybe some portions of the DBaaS run on K8s, but the odds are that it doesn’t. Why is that?

  1. Outside of the popular public clouds, there isn’t a great solution for storage/volumes that is up to Amazon Elastic Block Store (EBS) quality. Vendors like Portworx and OpenEBS are working on it, but it’s not there yet. Without a good storage solution, it’s practically impossible to put data into your containers.
  2. If you’re running a large, several-TB production database server, it doesn’t make sense to run it in containers – you are going to have large dedicated machines with fancy SSDs.
  3. Too much dynamism – yes, you can have too much of a good thing. Sometimes, when things fail, you want them to stay failed so you can take a look and see what is going on. StatefulSets in Kubernetes are a great step in the right direction to solve this problem.

At ScaleGrid, our vision is to deliver on the DBaaS part of this future application stack. We are multi-cloud today, and can also run on-premise or in your own private datacenter. Additionally, our platform is a polyglot system that supports multiple databases, including MongoDB, Redis, MySQL, and PostgreSQL.

For the sake of simplicity, I have excluded some other parts of the application stacks like object storage, file system storage, etc. In principle, I expect these components will be similar to the DBaaS component. This blog post was inspired by a Silicon Valley PostgreSQL meetup I attended a few weeks ago, and shout out to Dave Nielsen (@davenielsen) from RedisLabs for starting the discussion on this topic.

On-demand primary election

Having a server acting as a primary with multiple secondaries is the most common setup when using Group Replication. Until now though, there was no way to change the current primary member while the group was running without causing some sort of disruption.…

Continuent Clustering versus AWS RDS/MySQL

Enterprises require high availability for their business-critical applications. Even the smallest unplanned outage, or even a planned maintenance operation, can cause lost sales, lost productivity, and eroded customer confidence. Additionally, updating and retrieving data needs to be robust to keep up with user demand.

Let’s take a look at how Continuent Clustering helps enterprises keep their data available and globally scalable, and compare it to Amazon’s RDS running MySQL (RDS/MySQL).

Replicas and Failover

What does RDS do?

Having multiple copies of a database is ideal for high availability. RDS/MySQL approaches this with “Multi-AZ” deployments. The term “Multi-AZ” here is a bit confusing, as enabling this simply means a single “failover replica” will be created in a different availability zone from the primary database instance. Only one failover replica can be created, and thus we have just one failover candidate with a copy of the database in a “Multi”-AZ deployment. The failover replica has only one purpose – to be used as a failover target, and cannot be used for other purposes – but more about this later.

The failover process for RDS happens automatically and takes between one and two minutes. It also updates the DNS record for the database to point to the failover replica. As a result, we have the following consequences:

  • Application downtime of one to two minutes
  • Application must reconnect to database, which, depending on the application, may report cryptic errors to the user or even crash
  • You may need to reconfigure your JVM environment to handle DNS caching in this case
  • You now do not have any other failover candidates until another is brought online
How does Continuent Clustering handle failover?

Using Continuent Clustering, you set up your cluster to have a primary (master), and 2 or more replicas (slaves). Each slave in a cluster is a candidate for failover, and since this is a true cluster, the application simply connects to the cluster with no modification, and any changes to the cluster happen behind the scenes to the application. This is made possible using the Connector, which is an intelligent proxy that speaks the MySQL protocol.

During failover, the cluster selects a slave to promote to master. The Connector temporarily holds connections from the applications until the failover process is complete. When the slave has been promoted to master, the Connector resumes the connections, but now to the new master. The advantages here are:

  • Fast failover time, often within 10 seconds!
  • Applications do not get disconnected. No errors reported to the users.
  • Applications do not need to be aware of a new master
  • After a failover in a 3 node cluster, there is still yet another slave that can handle a subsequent failover. A 5 node cluster with a failed master would still have 3 slaves online!
  • The failed master can in many cases be repaired and added back into the cluster, saving reprovisioning time.
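The failover flow above can be illustrated with a toy model (hypothetical illustration code, not Continuent's actual implementation): a slave is promoted to master, and the remaining slaves stay available for reads and for any subsequent failover.

```python
class ToyCluster:
    """Toy model of master promotion in a cluster: on failover, one
    slave becomes the new master and the rest remain candidates."""

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = list(slaves)

    def failover(self):
        # Promote the first available slave; a real Connector would hold
        # application connections open while this happens.
        self.master = self.slaves.pop(0)
        return self.master

cluster = ToyCluster("db1", ["db2", "db3"])
cluster.failover()
print(cluster.master)  # db2
print(cluster.slaves)  # ['db3'] - still a candidate for the next failover
```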
Performance and Scalability

RDS Style

RDS/MySQL provides “read replicas,” which, although not automatic failover candidates, are replicas of the primary instance and can be used for reading data, offloading some traffic from the master. Note that a read replica can be manually promoted to a master. A read replica will have a different IP address, thus to take advantage of using it for reads, your application must be designed to send reads to the read replica, and writes to the master.

Coming back to the “failover replica,” note that the failover replica uses synchronous replication. This means that EACH write to the primary database will block until the write has been committed and acknowledged on the failover replica. This introduces latency for your applications, and it can be significant for systems with a lot of writes.

Clustering Style

In a Continuent Cluster, a slave is not only a failover candidate, but can be used for reads as well. That 3 node cluster mentioned above already has 3 nodes available for reading, and once again, using the power of the Connector, reads can be automatically directed to slaves without modifying our application! Since the Connector is a true proxy and router, there are quite a few algorithms available for splitting reads and writes. If your application is already read/write aware, great! We can leverage your existing logic. If not, the Connector offers read/write algorithms for you to use.

Also note that by adding more slaves to a Continuent Cluster, you scale out the number of nodes available for reads without impacting your application.
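To illustrate the idea of read/write splitting (the Connector does this transparently at the proxy layer; the actual Tungsten algorithms are product-specific, so this is only an application-level sketch of one common strategy, round-robin):

```python
import itertools

class ReadBalancer:
    """Sketch of read/write splitting: writes go to the master,
    reads rotate round-robin across the available slaves."""
    def __init__(self, master, slaves):
        self.master = master
        self._cycle = itertools.cycle(slaves)

    def endpoint_for(self, sql):
        if sql.lstrip().lower().startswith("select"):
            return next(self._cycle)   # reads rotate across slaves
        return self.master             # writes always hit the master

lb = ReadBalancer("master-host", ["slave1", "slave2", "slave3"])
reads = [lb.endpoint_for("SELECT 1") for _ in range(4)]
# rotates: slave1, slave2, slave3, then wraps back to slave1
```

Adding a slave to the `slaves` list immediately adds read capacity, which is the scaling property the paragraph above describes.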

Maintenance

With maintenance tasks, you are in control with Continuent Clustering. Plan your maintenance when you want, and perform many maintenance tasks, like OS patches and MySQL upgrades, with no downtime. Imagine upgrading from MySQL 5.6 to MySQL 5.7 with NO downtime!

RDS/MySQL requires a maintenance window, and during that window, your instances may be restarted. This of course translates to application downtime.

Benefits of a True Cluster

There are many benefits of using Continuent Clustering. Some of the benefits we discussed are: Automatic failover with no application disconnect, read scaling, read/write splitting, and ease of maintenance. However, there are many more benefits – multi-master (deploy replicated clusters across the continent or across the globe), cloud compatibility (think running a cluster in AWS and being able to failover to Google Cloud!), replication to other databases and data warehouses, and support from engineers with 20-30 years of experience each in databases and clustering!

Click here to talk with us and sign up for a free proof-of-concept!


Full Stack Python Developer: 9 Things to Know

I often get questions on my Bangla YouTube channel about how to become a Full Stack Python Web Developer.

We can divide web development into two sections:

  1. Backend
  2. Frontend

Backend Web Developer:

Backend developers are those who write the application code that runs on the server. How the application runs on the server, how database connectivity happens, and related concerns are all part of a backend developer's job. To be a Backend Python Web Developer, one has to learn at least one of the popular Python-based web frameworks:

  1. Django or
  2. Flask
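Both Django and Flask are built on Python's WSGI interface, so it helps to see what they abstract away. A minimal raw-WSGI application, using only the standard library (no framework installed):

```python
# A minimal WSGI application -- the interface Django and Flask build on.
# wsgiref is in the standard library, so this runs with no extra installs.
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """Return a plain-text greeting for whatever path was requested."""
    body = f"Hello from {environ['PATH_INFO']}".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app without a network socket, using a synthetic request:
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/hello"
status_holder = {}
def start_response(status, headers):
    status_holder["status"] = status
result = b"".join(app(environ, start_response))
print(status_holder["status"], result)  # 200 OK b'Hello from /hello'
```

Frameworks add routing, templates, ORM access and much more on top of this callable, which is why learning Django or Flask is the practical path.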

For the database management system, one has to know either:

  1. PostgreSQL or
  2. MySQL
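Both PostgreSQL (via psycopg2) and MySQL drivers follow Python's DB-API 2.0 specification, so the code pattern is the same. The sketch below uses the standard library's sqlite3, purely so it runs without a database server; with PostgreSQL or MySQL only the `connect()` call and the parameter placeholder (`%s` instead of `?`) change:

```python
import sqlite3

# The DB-API 2.0 pattern shared by psycopg2 (PostgreSQL) and MySQL drivers;
# sqlite3 is used here only to avoid needing a running database server.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Always pass values as parameters -- never format them into the SQL string.
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

cur.execute("SELECT name FROM users WHERE id = ?", (1,))
print(cur.fetchone())  # ('alice',)
```

Learning this connect/cursor/execute/fetch cycle once covers both databases listed above.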

Besides these technologies, one should know a version control system such as Subversion or Git. Nowadays, Git is far more popular than Subversion.

Frontend Web Developer:

People who design web pages with HTML, CSS and JavaScript are generally considered frontend web developers. Nowadays, one also has to know at least one of the popular JavaScript-based frontend frameworks or libraries, for example:

  1. Angular
  2. ReactJs
  3. Vue.js

Besides these JavaScript frameworks, one also has to know:

  1. HTML
  2. CSS
  3. Git 

The Bootstrap framework is also very popular for creating frontend web designs.

So, to be a Full Stack Python Developer, one has to know all of these backend and frontend technologies.

To learn Python programming for free, you can check out my free course on Udemy.

To learn Python-based web scraping, I also have a course on Udemy.

Percona Live Europe 2018: Our Sponsors

Without our sponsors, it would be almost impossible to deliver a conference of the size and format that everyone has come to expect from Percona Live. As well as financial support, our sponsors contribute massively by supporting their teams in presenting at the conference, adding to the quality and atmosphere of the event. Their support means we can present excellent, in-depth technical content for the tutorials and talks, which is highly valued by conference delegates. This year, too, Amazon Web Services (AWS) sponsors the cloud track on day two, with a superb line-up of cloud content.

Here’s a shout out to our sponsors; you’ll find more information on the Percona Live sponsors page:

Platinum

 

For over 12 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. https://aws.amazon.com/

Gold

Facebook offers a fantastic contribution to open source databases with MyRocks and is greatly appreciated for its ongoing support of Percona Live.
https://www.facebook.com

Silver

Altinity is the leading service provider for ClickHouse.
https://www.altinity.com/


PingCAP is the company and core team building TiDB, a popular open-source MySQL-compatible NewSQL hybrid database.
https://www.pingcap.com/en/

Shannon Systems is a global leader in providing enterprise-grade Flash storage devices and system solutions.
http://en.shannon-sys.com/

OlinData is an open source infrastructure management company providing services to help companies from small to large with their infrastructure.
https://www.olindata.com/en

MySQL is the world’s most popular open source database, delivered by Oracle.
https://www.mysql.com/

Start Up


SeveralNines provides automation and management software for MySQL, MariaDB and MongoDB clusters.
https://severalnines.com/

Community Sponsors


PostgreSQL is a powerful, open source object-relational database system.
https://www.postgresql.org/

MariaDB Server is one of the most popular database servers in the world.
https://mariadb.org

Branding


Intel is the world’s leading technology company, powering the cloud and billions of smart, connected computing devices.
https://www.intel.com


IDERA designs powerful software with one goal in mind – to solve customers’ most complex challenges with elegant solutions.
https://www.idera.com/

Studio 3T is a GUI and IDE for developers and data engineers who work with MongoDB.
https://studio3t.com/

Media
  • datanami online portal for data science, AI and advanced analytics
  • Enterprise Tech online portal addressing high performance computing technologies at scale
  • HPC Wire covering the fastest computers in the world and the people who run them
  • odbms.org a resource portal for big data, new data management technologies, data science and AI
  • Packt online technical publications and videos
Thanks again to all – appreciated!

Effective Monitoring of MySQL Replication with SCUMM Dashboards - Part 2

In our previous blog on SCUMM dashboards, we looked at the MySQL Overview dashboard. The new version of ClusterControl (ver. 1.7) offers a number of high resolution graphs of useful metrics, and we went through the meaning of each of the metrics and how they help you troubleshoot your database. In this blog, we will look at the MySQL Replication dashboard. Let’s go through the details of this dashboard and what it has to offer.

MySQL Replication Dashboard

The MySQL Replication Dashboard offers a straightforward set of graphs that makes it easier to monitor your MySQL master and replica(s). Starting from the top, it shows the most important variables and information for determining the health of the replica(s), or even of the master. This dashboard is particularly useful when inspecting the health of the slaves, or of a master in a master-master setup. You can also use it to check the master’s binary log creation and determine the overall volume of data generated over a given period of time.

The first thing this dashboard presents is the most important information about the health of your replica. See the graph below:

Basically, it shows you the state of the slave’s IO_Thread and SQL_Thread, any replication error, and whether the read_only variable is enabled. In the sample screenshot above, all the information shows that my slave 192.168.70.20 is healthy and running normally.

Additionally, ClusterControl gathers further information if you go to Cluster -> Overview. Scroll down and you can see the graph below:

Another place to view the replication setup is the Topology View, accessible at Cluster -> Topology. It gives, at a quick glance, a view of the different nodes in the setup, their roles, replication lag, retrieved GTID and more. See the graph below:

In addition to this, the Topology View also shows all the different nodes that form part of your database cluster, whether they are database nodes, load balancers (ProxySQL/MaxScale/HAProxy) or arbitrators (garbd), as well as the connections between them. The nodes, connections, and their statuses are discovered by ClusterControl. Since ClusterControl is continuously monitoring the nodes and keeps state information, any changes in the topology are reflected in the web interface. In case node failures are reported, you can use this view along with the SCUMM dashboards to see what impact they might have caused.

The Topology View has some similarities with Orchestrator: you can manage the nodes, change masters by dragging and dropping an object onto the desired master, restart nodes and synchronize data. To learn more about our Topology View, we suggest you read our previous blog, “Visualizing your Cluster Topology in ClusterControl”.

Let’s now proceed with the graphs.

  • MySQL Replication Delay
    This graph is very familiar to anybody managing MySQL, especially those working on a daily basis with a master-slave setup. It shows the trends of all the lags recorded for the time range specified in this dashboard. Whenever we want to check the periodic lag our replica has experienced, this graph is a good place to look. There are certain occasions when a replica could lag for odd reasons: your RAID has a degraded BBU that needs replacement, a table is missing a unique key that exists on the master, an unwanted full table scan or full index scan, or a bad query left running by a developer. If this graph shows that slave lag is a key issue, you may want to take advantage of parallel replication.

  • Binlog Size
    This graph and the binlog graphs that follow are related to each other. The Binlog Size graph shows how your node generates the binary log and helps determine its size over the period of time you are examining.

  • Binlog Data Written Hourly
    The Binlog Data Written Hourly graph compares the current day against the previous day's recorded data. This is useful for identifying how much write traffic your node is accepting, which you can later use for capacity planning.

  • Binlogs Count
    Let’s say you expect high traffic in a given week and want to compare how many writes are going through your master and slaves with the previous week. This graph is very useful for that kind of situation: determining how large the generated binary logs were on the master itself, or even on the slaves if the log_slave_updates variable is enabled. You can also use this indicator to compare the binary log data generated on the master versus the slaves, especially if you are filtering some tables or schemas on your slaves (replicate_ignore_db, replicate_ignore_table, replicate_wild_do_table) while log_slave_updates is enabled.

  • Binlogs Created Hourly
    This graph is a quick overview comparing your hourly binlog creation between yesterday and today.

  • Relay Log Space
    This graph shows the relay logs generated on your replica. Used along with the MySQL Replication Delay graph, it helps determine how many relay logs are being generated, which the administrator has to weigh against the disk space available on the replica. A rigorously lagging slave generates large numbers of relay logs, and these can consume disk space quickly: under a high write rate from the master, a badly lagging replica can accumulate enough logs to cause serious problems on that node. This graph can also help the ops team when talking to their management about capacity planning.

  • Relay Log Written Hourly
    The same as Relay Log Space, but adds a quick overview comparing the relay logs written yesterday and today.
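The hourly graphs above lend themselves to simple capacity arithmetic. As an illustrative sketch (the figures below are made up, not taken from any dashboard), you can turn an hourly write rate read off the Binlog Data Written Hourly or Relay Log Written Hourly graph into a disk-exhaustion estimate:

```python
def days_until_disk_full(free_bytes, hourly_log_bytes):
    """Rough capacity planning: how long until binary or relay logs alone
    consume the remaining disk space, at the observed hourly write rate."""
    if hourly_log_bytes <= 0:
        return float("inf")
    return free_bytes / (hourly_log_bytes * 24)

# Example: 500 GB free, writing roughly 2 GB of binlog per hour:
days = days_until_disk_full(500 * 2**30, 2 * 2**30)
print(round(days, 1))  # about 10.4 days
```

Real planning should also account for log rotation and expire_logs_days-style purging, but even this crude estimate is enough to flag a replica that is about to run out of space.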

Conclusion

You have seen that using SCUMM to monitor your MySQL replication adds productivity and efficiency for the operations team. Combining the features from previous versions with the graphs provided by SCUMM is like going to the gym and seeing massive improvements in your productivity. This is what SCUMM can offer: monitoring on steroids! (We are not, of course, advocating that you take steroids when going to the gym!)

In Part 3 of this blog, I will discuss the InnoDB Metrics and MySQL Performance Schema Dashboards.

Tags:  MySQL MariaDB replication monitoring

MySQL 8.0.13 — What’s New With Connectors

Welcome to MySQL 8.0.13. If you haven’t already read about all the great new updates, please take a minute and read the announcement. Along with all the new updates in the server, we have a full complement of connectors that are now available, and I wanted to take a moment to share our updates.

First I want to go over the changes related to the X DevAPI, our new API that merges the great relational power of MySQL with our new Document Store. Unless specified otherwise, all of our MySQL 8.0 series connectors include all the DevAPI updates.

DevAPI

Connection Pooling – One of the most expensive operations a connector performs is simply connecting to the server. This is especially true if you are using SSL/TLS (and you should be!). So we wanted to make sure all of our connectors implement a connection pool, where idle connections are kept open and available for reuse by future operations. This is done using the new getClient operation like this:

// default pooling options
var client = mysqlx.getClient('mysqlx://root@localhost:33060')
client.getSession().then(session => {})

// overriding pooling options
var client = mysqlx.getClient('mysqlx://root@localhost:33060', {
  pooling: { maxSize: 20, maxIdleTime: 1000, queueTimeout: 1500 }
})
client.getSession()

Connect Timeout – A simple control over how long to wait for a connection to be established. Not very exciting, but certainly important!

Non-DevAPI

The following are some of the key enhancements coming in each of our connectors that are not related to the DevAPI.

  • Connector/J
    • Protobuf library updated to 3.6.1
    • A new sslMode connection property has been introduced to replace the connection properties useSSL, requireSSL, and verifyServerCertificate, which are now deprecated
  • Connector/Net
    • Entity Framework 2.1 support
  • Connector/Python
    • Python 3.7 support
  • Connector/C++
    • Improve JSON parsing speed and efficiency
    • Enable building on Solaris
    • Implement Windows MSI packaging
  • Connector/ODBC
    • Enable support of Solaris
    • Add support for dynamic linking of libmysqlclient
  • PHP
    • Improve building by treating warnings as errors
    • Improve runtime error handling

Each of these products has its own announcement blog post giving more detail on the bugs fixed and features implemented, and I would encourage you to read those. Those links are:

As always, thank you for using MySQL products.  Please let us know what we are doing right and doing wrong and stay tuned for more exciting things coming in 8.0.14!

 

 

JSFoo, October 26-27, 2018 - with MySQL!

As we already announced in another blog posted on Oct 16, 2018, MySQL will have a talk at JSFoo this year. This is the first time we are attending JSFoo in Bangalore, India, and we hope our attendance will be well received! Don’t miss the opportunity to come and listen to the MySQL talk:

  • "MySQL 8 loves JavaScript", given by Sanjay Manwani, MySQL India Sr Director & Developer Evangelism at Oracle. The talk is scheduled for Saturday, Oct 27, 2018 @ 13:40-14:10.

More information and logistics can be found here.

We are looking forward to meeting & talking to you at JSFoo 2018!

Forum PHP, October 25-26, 2018 - with MySQL!

As already announced in another blog posted on Oct 16, 2018, MySQL is a Bronze sponsor of the Forum PHP show this year.

We are going to have a MySQL talk, "MySQL 8.0: What's new", given by Olivier Dasini, Principal Sales Consultant. His talk is scheduled for October 26th @ 9:30-10:10am. You are very welcome to come and listen to Olivier's talk & discuss MySQL topics with our staff at Forum PHP in Paris, France.

More information & logistics are available on the Forum PHP website.

We are looking forward to talking to you there!

     

How to Install Sentrifugo HRM on Ubuntu 18.04 LTS

Sentrifugo is a powerful Human Resource Management System (HRM) written in PHP that uses MySQL/MariaDB as its database system. In this tutorial, we explain how to install Sentrifugo on an Ubuntu 18.04 LTS server.
