Planet MySQL

MySQL Cluster 7.6.8 performance jump of up to 240%

In February I added a new feature to the Sysbench version that I use in
my MySQL Cluster testing. This feature adds a new column, called filter,
to the sysbench table; it contains the same value as the primary key.

With this new column I can easily change the range scan queries in
sysbench from returning 100 rows to instead scanning 100 rows and
returning 1 row. This means that sysbench can benchmark the filtering
performance of the underlying database engine.
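
To make this concrete, the modified range scan looks roughly like the following (a sketch only; the real sysbench query template and table name may differ):

SELECT c FROM sbtest1 WHERE id BETWEEN 1000 AND 1099;                     -- returns 100 rows
SELECT c FROM sbtest1 WHERE id BETWEEN 1000 AND 1099 AND filter = 1050;   -- scans 100 rows, returns 1 row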

Next I ran tests where I set the number of rows in the range to
10,000 rows. This new test was a perfect vehicle to improve performance
in NDB for scan filtering.

Filtering one row in 7.6.7 in this sysbench test costs about 750 ns.

When I started out optimising these 750 ns I didn't expect so much
improvement, but using perf it was possible to pinpoint the wasted
CPU cycles in a very fine-grained manner. One interesting finding was
a bitmask whose constructor zeroed the bitmask; it turned out that this
constructor was called twice while filtering a row and neither call was
required. Fixing this simple thing removed about 20 ns of CPU usage,
in this case a 3-4% performance improvement.

As you can see, these are micro-optimisations, and for those perf is a
splendid tool.
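
For reference, the perf workflow for this kind of pinpointing looks roughly like this (illustrative commands; the data node process name and the exact event names depend on your setup and CPU):

$ perf record -g -p $(pidof ndbmtd) -- sleep 20     # sample the NDB data node for 20 seconds
$ perf report                                       # browse the annotated hot spots
$ perf stat -e L1-icache-load-misses -p $(pidof ndbmtd) -- sleep 20   # count instruction cache misses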

One of the biggest reasons for bad performance in modern software
applications is instruction cache misses. Most modern software
is packed with features and this requires a lot of code to handle.
The compiler has a hard time knowing which code is the common
path and which path is the error handling path.

In the MySQL code we have two macros, likely and unlikely, that
can hint to the compiler which code path to optimise for.
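
As an illustration, such hints are typically built on GCC's __builtin_expect and used to keep error handling off the hot path (a minimal sketch; the real macro definitions in the MySQL source may differ, and Row, apply_filter and handle_missing_row are hypothetical names):

#define likely(x)   __builtin_expect((x), 1)
#define unlikely(x) __builtin_expect((x), 0)

int filter_row(const Row *row)
{
  if (unlikely(row == nullptr))
  {
    return handle_missing_row();   /* cold path: error handling */
  }
  return apply_filter(row);        /* hot path: the common case */
}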

In this code path I was optimising I found that I had roughly 1 billion
instruction cache misses over a short period (20 seconds if I remember
correctly). I managed with numerous changes to decrease the number
of instruction cache misses to 100 million in the same amount of time.

I also found some simple fixes that cut away a third of the processing time.

In the end the cost was brought down to around 250 ns. Comparing the
performance of this scan filtering with 7.5.10, we have improved this
particular code path by 240%.

During the development of these scan filtering improvements, I discovered
that some of the optimisations could also be applied to searching in our
ordered indexes. The impact of this is that the index rebuild phase of a restart
will go faster; I haven't measured the exact impact yet. It also means
that any application using ordered indexes will go a lot faster.

For example, the performance of a standard Sysbench OLTP RW benchmark
with one added ordered index column improves by 70% in 7.6.8
compared to earlier versions of 7.6 and 7.5.

Adding Support in MySQL 8 for openSUSE and SLE 15

With the release of MySQL 8.0.13 yesterday, we added support in MySQL 8 for SUSE Linux Enterprise (SLE) 15, the brand new version of one of the key enterprise Linux distros out there. At the same time, we also introduced support for SLE 15’s community twin, openSUSE Leap 15. So as of yesterday, all MySQL […]

React 16.6: React.memo() for Functional Components Rendering Control

React 16.6.0 is released! With it comes a host of new features including the two big ones:

  • React.memo()
  • React.lazy(): Code-splitting and lazy-loading with React Suspense

We'll focus on React.memo() for this article and React.lazy() and Suspense in an upcoming larger article.

What is React.memo()?

React.memo() is similar to PureComponent in that it will help us control when our components rerender.

Components will only rerender if their props have changed! Normally all of our React components in our tree will go through a render when changes are made. With PureComponent and React.memo(), we can have only some components render.

const ToTheMoonComponent = React.memo(function MyComponent(props) {
  // only renders if props have changed
});

This is a performance boost since only the things that need to be rendered are rendered.

PureComponent works with classes. React.memo() works with functional components.

import React from 'react';

const MyScotchyComponent = React.memo(function MyComponent(props) {
  // only renders if props have changed!
});

// can also be an es6 arrow function
const OtherScotchy = React.memo(props => {
  return <div>my memoized component</div>;
});

// and even shorter with implicit return
const ImplicitScotchy = React.memo(props => (
  <div>implicit memoized component</div>
));

Wrapping an Existing Component

Since React.memo() is a higher order component, you can use it to wrap a functional component you already have.

const RocketComponent = props => <div>my rocket component. {props.fuel}!</div>;

// create a version that only renders on prop changes
const MemoizedRocketComponent = React.memo(RocketComponent);

A Quick Demo

I tried creating a quick demo to show the render happen and also not happen if a component hasn't changed. Unfortunately, the React Developer Tools haven't fully implemented the React.memo() stuff yet.

If you look at components, it shows TODO_NOT_IMPLEMENTED_YET:

Once DevTools is updated, we'll be able to see which components are being rendered. The memoized component should not trigger a render if its props haven't changed!

And here's the demo app:

  • has a counter just to trigger app renders
  • has an input for showing messages
  • has a normal version and memoized version of component to show messages

https://codesandbox.io/s/53wj3rr3nn?runonclick=1&codemirror=1

Why is it called memo?

Per Wikipedia:

In computing, memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.

This makes sense since that's exactly what React.memo() does! Check to see if an upcoming render will be different than the previous render. If they are the same, keep the previous one.
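
Outside of React, a classic memoization helper looks something like this (a generic sketch, not how React implements it internally):

const memoize = fn => {
  const cache = new Map();
  return arg => {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg)); // the expensive call happens only once per input
    }
    return cache.get(arg);     // same input -> cached result
  };
};

const slowSquare = n => n * n;          // imagine something expensive here
const fastSquare = memoize(slowSquare);

fastSquare(4); // computed
fastSquare(4); // served from the cache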

This question was asked on Twitter also and Dan explained why it was called memo and not pure like PureComponent.

Conclusion

This is a great addition to React as I've always written things in the class form just to take advantage of PureComponent. Now we can have our cake (functional components) and eat it too (render only on changes) with React.memo()!

No-Downtime Cluster Software Upgrades

One important way to protect your data is to keep your Continuent Clustering software up-to-date.

A standard cluster deployment uses three nodes, which allows for no-downtime upgrades along with the ability to have a fully available cluster during maintenance.

Please note that with only two database cluster nodes, there is a window of vulnerability created by leaving zero failover candidates available when the lone slave is taken down for service.

The Best Practices: Staging

Performing a No-Downtime Upgrade for a Staging Deployment

When upgrading a Staging-style deployment, all nodes are upgraded at once in parallel via the tools/tpm update command run from inside the staging directory on the staging host.

No Master switch happens, and all layers are restarted to use the new code. This could introduce an outage for the applications depending on the age and feature-set of the old version. For that reason the --no-connectors option is used to prevent the restart of the Connector processes until you are ready to do so.

By default, an update/upgrade process will restart all services, including the connector. Adding this option prevents the connectors from being restarted. For example:

shell> tools/tpm update --no-connectors

If this option is used, the connectors must be manually updated to the new version after being drained from your load balancer pool or during a quieter period of traffic. This can be achieved by running a promote on each Connector node:

shell> tpm promote-connector

This will result in a short period of downtime (a couple of seconds) on the single host concerned, while the other connectors in your deployment keep running. During the upgrade, the Connector is restarted using the updated software and/or configuration.

Click here to read more about “Upgrading using the Staging Method” in our online docs…

The Best Practices: INI

Performing a No-Downtime Upgrade for an INI-based Deployment

In many ways, upgrading an INI-based deployment is similar to a Staging upgrade, except that the tools/tpm update command is executed individually on all cluster and database nodes from the locally-extracted staging directory.

Use of the --no-connectors option is the same.

The biggest difference is due to the fact that each node is done separately. This introduces the possibility of upgrading all the slaves first, then doing a switch, then upgrading the final node.

To Switch or Not to Switch, THAT is the Question

We recommend only using the No-Switch method of INI upgrades. Performing a switch in the middle of an upgrade can lead to various possible mismatches on multiple layers.

Click here to read more about “Upgrading when using INI-based configuration”.

We have documented both approaches for those customers who feel they must perform a switch in the middle of an upgrade.

√ No Switch
To use the No-Switch method of upgrading (docs here):

  1. Place the cluster into maintenance mode (see the cctrl sketch after this list)
  2. Upgrade the slaves in the dataservice. Be sure to shun and welcome each slave.
  3. Upgrade the master node. (Important: Replication traffic to the slaves will be delayed while the replicator restarts. The delays will increase if there are a large number of stored events in the THL. Old THL may be removed to decrease the delay. Do NOT delete THL that has not been received on all slave nodes or events will be lost.)
  4. Upgrade the connectors in the dataservice one-by-one. (Important: Application traffic to the nodes will be disconnected when each connector restarts.)
  5. Place the cluster into automatic mode
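
For reference, the "maintenance mode", "shun"/"welcome" and "automatic mode" steps above are typically issued from the cctrl console. This is a rough sketch only (host names are examples; check the Continuent documentation for the exact sequence in your release):

shell> cctrl
cctrl> set policy maintenance
cctrl> datasource db2 shun
(upgrade db2 here, then bring it back)
cctrl> datasource db2 welcome
cctrl> set policy automatic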

· Switch (Not Recommended)
To use the Switch method of upgrading (docs here):

  1. Upgrade the slaves in the dataservice. Be sure to shun and welcome each slave.
  2. Switch the current master to one of the upgraded slaves. (Important: Application and replication traffic will be delayed while the switch occurs.)
  3. Upgrade the original master node which is now a slave. Be sure to shun and welcome it.
  4. Upgrade the connectors in the dataservice one-by-one. (Important: Application traffic to the nodes will be disconnected when the connector restarts.)

Continuent Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us.

Percona Live Europe Presents: MariaDB 10.4 Reverse Privileges (DENY)

One of the most common questions about privileges in MySQL and MariaDB is how a user would revoke access to a particular table, in a large database with hundreds or thousands of tables, while keeping the rest available. Currently, there is no easy solution: you just grant access to everything else, individually. Not only does this reduce server performance, but it is also a nightmare to maintain. Reverse privileges solve this and more. And they are simple to explain to new admins too! So I look forward to sharing the knowledge during my presentation at PLE18.
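
To illustrate the pain point, today the workaround is one GRANT per remaining table, whereas a reverse privilege would express the exception directly. The DENY statement below is purely illustrative, since the syntax was still under development at the time:

-- current workaround: one GRANT per table the user may still read
GRANT SELECT ON shop.orders TO 'app'@'%';
GRANT SELECT ON shop.customers TO 'app'@'%';
-- ...repeated for every remaining table in the schema

-- hypothetical reverse privilege: allow the schema, then deny a single table
GRANT SELECT ON shop.* TO 'app'@'%';
DENY SELECT ON shop.payment_cards FROM 'app'@'%';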

DBAs would benefit from this talk the most. As it is a feature still under development, we are open to input from the community. Tell us what you think we should do to make this feature the best it can be.

What I’m looking forward to…

It will be quite interesting to see what challenges people have faced with MySQL and MariaDB and how they were overcome. As a database developer, it’s always important to understand how your users make use of the product. It is only through this that we can make it better.

The post Percona Live Europe Presents: MariaDB 10.4 Reverse Privileges (DENY) appeared first on Percona Community Blog.

Webinar: Backup Management for MySQL, MariaDB, PostgreSQL & MongoDB with ClusterControl

Are you frustrated with traditional, labour-intensive backup and archive practices for your MySQL, MariaDB, MongoDB and PostgreSQL databases?

What if you could have one backup management solution for all your business data? What if you could ensure integrity of all your backups? And what if you could leverage the competitive pricing and almost limitless capacity of cloud-based backup while meeting cost, manageability, and compliance requirements from the business?

Welcome to our webinar on Backup Management with ClusterControl on November 13th 2018.

Whether you are looking at rebuilding your existing backup infrastructure, or updating it, this webinar is for you.

ClusterControl’s centralized backup management for open source databases provides you with hot backups of large datasets, point in time recovery in a couple of clicks, at-rest and in-transit data encryption, data integrity via automatic restore verification, cloud backups (AWS, Google and Azure) for Disaster Recovery, retention policies to ensure compliance, and automated alerts and reporting.

Date, Time & Registration

Europe/MEA/APAC

Tuesday, November 13th at 09:00 GMT / 10:00 CET (Germany, France, Sweden)

Register Now

North America/LatAm

Tuesday, November 13th at 09:00 PST (US) / 12:00 EST (US)

Register Now

Agenda
  • Backup and recovery management of local or remote databases
    • Logical or physical backups
    • Full or Incremental backups
    • Position or time-based Point in Time Recovery (for MySQL and PostgreSQL)
    • Upload to the cloud (Amazon S3, Google Cloud Storage, Azure Storage)
    • Encryption of backup data
    • Compression of backup data
  • One centralized backup system for your open source databases (Demo)
    • Schedule, manage and operate backups
    • Define backup policies, retention, history
    • Validation - Automatic restore verification
    • Backup reporting
Speaker

Bartlomiej Oles is a MySQL and Oracle DBA, with over 15 years experience in managing highly available production systems at IBM, Nordea Bank, Acxiom, Lufthansa, and other Fortune 500 companies. In the past five years, his focus has been on building and applying automation tools to manage multi-datacenter database environments.

Tags:  backup database backup MySQL MongoDB MariaDB PostgreSQL

Arg: relay_log_space_limit is (still) not dynamic !

This will be a short rant: why is relay_log_space_limit still not dynamic? This is obviously a rhetorical question. I see no reason why this MySQL System Variable is not (yet) dynamic. Obviously, changing the value of this variable could need replication to be stopped (like for slave_parallel_type, slave_parallel_workers and slave_preserve_commit_order), but at least it would not need a MySQL restart.
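
For anyone who has not hit it, trying to change the variable at runtime fails roughly like this (the generic read-only-variable error):

mysql> SET GLOBAL relay_log_space_limit = 10737418240;
ERROR 1238 (HY000): Variable 'relay_log_space_limit' is a read only variable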

MySQL InnoDB Cluster – What’s new in the 8.0.13 (GA) release

The MySQL Development Team is very happy to announce the second 8.0 Maintenance Release of InnoDB Cluster!

In addition to bug fixes, 8.0.13 brings some new exciting features:

  • Defining the next primary instance “in line”
  • Defining the behavior of a server whenever it drops out of the cluster
  • Improved command line integration for DevOps type usage

Here are the highlights of this release!…
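
Reading between the lines of the bullets above, these most likely surface in the AdminAPI and the shell command line roughly as follows (my own sketch, not taken from the announcement; option and command names such as memberWeight, exitStateAction and the "mysqlsh --" syntax are assumptions to check against the 8.0.13 documentation):

mysql-js> var cluster = dba.getCluster('myCluster')
mysql-js> cluster.addInstance('clusteradmin@ic-3:3306', {memberWeight: 80, exitStateAction: 'ABORT_SERVER'})

# improved command line (DevOps) integration, callable from scripts:
$ mysqlsh root@localhost:3306 -- cluster status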

Golang’s Database/SQL in Action

Google’s Go language is now very popular for systems programming. It’s efficient and fun to program in Go, and it produces high-performance programs. Go is well suited for working with SQL databases, and has excellent SQL drivers. In this talk Baron will show how to use Go’s database/sql package with databases like MySQL and PostgreSQL. He’ll discuss idiomatic database/sql code, available drivers for MySQL and PostgreSQL, and share tips and tricks that will save you time and frustration.

Baron Schwartz, @xaprb, talks shop about how to use Go’s database/sql package with databases like MySQL and PostgreSQL. He also discusses tips and tricks that will save you time and frustration. @AllThingsOpen #opensource #AllThingsOpen pic.twitter.com/nvZOBHG8Y8

— Jonathan Corbett (@joncorbett12) October 23, 2018
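
As a small taste of the idiomatic pattern the talk covers, here is a rough sketch of database/sql usage with the go-sql-driver/mysql driver (my own illustration, not code from the talk; the DSN and table are made up):

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/go-sql-driver/mysql" // driver registers itself with database/sql
)

func main() {
    db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/mydb")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    rows, err := db.Query("SELECT id, name FROM users WHERE active = ?", true)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    for rows.Next() {
        var id int64
        var name string
        if err := rows.Scan(&id, &name); err != nil {
            log.Fatal(err)
        }
        fmt.Println(id, name)
    }
    if err := rows.Err(); err != nil { // always check for errors hit during iteration
        log.Fatal(err)
    }
}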

How to Monitor Your Database

The first time I tried to monitor a database, I was overwhelmed. There were hundreds of variables to monitor, and there were lots of Nagios check scripts that had dozens of checks. I wasn’t sure what alerts I should set up, so I set up too many and got a bunch of noise as a result. A couple of years later, I returned to that company and found all those alerts still in place, still spamming everyone — but they’d just filtered every alert to the trash.

In this talk, I’ll share how I learned to do this better, so you won’t make the same mistakes I made! Any sophisticated system like a database has many more instrumentation points than you should actively monitor. The trick is approaching it with a sound monitoring framework in mind. This talk explains the framework I’ve developed over many years, which breaks monitoring into a holistic approach that’s easy to understand and makes it obvious what kinds of data are useful for what purposes.

You’ll learn the 7 golden signals (yes, seven and not four), how workload and resource performance are complementary and necessary for a complete understanding of database health and performance, and how to monitor technology-specific “sharp edges.” I’ll also cover some of those for a few popular databases: MySQL, PostgreSQL, and MongoDB.

Presentation : Evolution of DBA’s in the Cloud Era

The Mydbops team presented at the 15th edition of the Open Source India Conference, held in Bangalore on 12-10-2018. Manosh Malai (Senior DevOps/DB Consultant) and Kabilesh P R (Co-Founder, Mydbops) gave a presentation on “Evolution of DBA in the Cloud Era”. They shared the value addition that DBAs can bring to any organisation that has adopted the cloud (DBaaS), and also shared a case study from their consulting experience.

MySQL on Docker: Running ProxySQL as a Helper Container on Kubernetes

ProxySQL commonly sits between the application and database tiers, in the so-called reverse-proxy tier. When your application containers are orchestrated and managed by Kubernetes, you might want to use ProxySQL in front of your database servers.

In this post, we’ll show you how to run ProxySQL on Kubernetes as a helper container in a pod. We are going to use Wordpress as an example application. The data service is provided by our two-node MySQL Replication, deployed using ClusterControl and sitting outside of the Kubernetes network on a bare-metal infrastructure, as illustrated in the following diagram:

ProxySQL Docker Image

In this example, we are going to use the ProxySQL Docker image maintained by Severalnines, a general public image built for multi-purpose usage. The image comes with no entrypoint script and supports Galera Cluster (in addition to built-in support for MySQL Replication), where an extra script is required for health check purposes.

Basically, to run a ProxySQL container, simply execute the following command:

$ docker run -d -v /path/to/proxysql.cnf:/etc/proxysql.cnf severalnines/proxysql

With this image, it is recommended to bind a ProxySQL configuration file to the mount point /etc/proxysql.cnf, although you can skip this and configure it later using the ProxySQL Admin console. Example configurations are provided on the Docker Hub page or the Github page.

ProxySQL on Kubernetes

Designing the ProxySQL architecture is a subjective topic and highly dependent on the placement of the application and database containers as well as the role of ProxySQL itself. ProxySQL does not only route queries; it can also be used to rewrite and cache queries. Efficient cache hits might require a custom configuration tailored specifically for the application database workload.

Ideally, we can configure ProxySQL to be managed by Kubernetes with two configurations:

  1. ProxySQL as a Kubernetes service (centralized deployment).
  2. ProxySQL as a helper container in a pod (distributed deployment).

The first option is pretty straightforward, where we create a ProxySQL pod and attach a Kubernetes service to it. Applications will then connect to the ProxySQL service via networking on the configured ports; the defaults are 6033 for the MySQL load-balanced port and 6032 for the ProxySQL administration port. This deployment will be covered in an upcoming blog post.

The second option is a bit different. Kubernetes has a concept called a "pod". You can have one or more containers per pod, and these are relatively tightly coupled. A pod's contents are always co-located and co-scheduled, and run in a shared context. A pod is the smallest manageable container unit in Kubernetes.

Both deployments can be distinguished easily by looking at the following diagram:

The primary reason that pods can have multiple containers is to support helper applications that assist a primary application. Typical examples of helper applications are data pullers, data pushers, and proxies. Helper and primary applications often need to communicate with each other. Typically this is done through a shared filesystem, as shown in this exercise, or through the loopback network interface, localhost. An example of this pattern is a web server along with a helper program that polls a Git repository for new updates.

This blog post will cover the second configuration - running ProxySQL as a helper container in a pod.

ProxySQL as Helper in a Pod

In this setup, we run ProxySQL as a helper container to our Wordpress container. The following diagram illustrates our high-level architecture:

In this setup, the ProxySQL container is tightly coupled with the Wordpress container, and we named the pod "blog". If rescheduling happens, e.g. the Kubernetes worker node goes down, these two containers will always be rescheduled together as one logical unit on the next available host. To keep the application containers' content persistent across multiple nodes, we have to use a clustered or remote file system, which in this case is NFS.

ProxySQL's role is to provide a database abstraction layer to the application container. Since we are running a two-node MySQL Replication setup as the backend database service, read-write splitting is vital to maximize resource utilization on both MySQL servers. ProxySQL excels at this and requires minimal to no changes to the application.

There are a number of other benefits running ProxySQL in this setup:

  • Bring query caching capability closest to the application layer running in Kubernetes.
  • Secure implementation by connecting through ProxySQL UNIX socket file. It is like a pipe that the server and the clients can use to connect and exchange requests and data.
  • Distributed reverse proxy tier with shared nothing architecture.
  • Less network overhead due to "skip-networking" implementation.
  • Stateless deployment approach by utilizing Kubernetes ConfigMaps.
Preparing the Database

Create the wordpress database and user on the master and assign with correct privilege:

mysql-master> CREATE DATABASE wordpress;
mysql-master> CREATE USER wordpress@'%' IDENTIFIED BY 'passw0rd';
mysql-master> GRANT ALL PRIVILEGES ON wordpress.* TO wordpress@'%';

Also, create the ProxySQL monitoring user:

mysql-master> CREATE USER proxysql@'%' IDENTIFIED BY 'proxysqlpassw0rd';

Then, reload the grant table:

mysql-master> FLUSH PRIVILEGES;

Preparing the Pod

Now, copy paste the following lines into a file called blog-deployment.yml on the host where kubectl is configured:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  labels:
    app: blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog
      tier: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: blog
        tier: frontend
    spec:
      restartPolicy: Always
      containers:
      - image: wordpress:4.9-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: localhost:/tmp/proxysql.sock
        - name: WORDPRESS_DB_USER
          value: wordpress
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
        - name: shared-data
          mountPath: /tmp
      - image: severalnines/proxysql
        name: proxysql
        volumeMounts:
        - name: proxysql-config
          mountPath: /etc/proxysql.cnf
          subPath: proxysql.cnf
        - name: shared-data
          mountPath: /tmp
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
      - name: proxysql-config
        configMap:
          name: proxysql-configmap
      - name: shared-data
        emptyDir: {}

The YAML file has many lines, so let's look at the interesting parts only. The first section:

apiVersion: apps/v1
kind: Deployment

The first line is the apiVersion. Our Kubernetes cluster is running on v1.12, so we should refer to the Kubernetes v1.12 API documentation and follow the resource declaration according to this API. The next one is the kind, which tells Kubernetes what type of resource we want to deploy. Deployment, Service, ReplicaSet, DaemonSet and PersistentVolume are some examples.

The next important section is the "containers" section. Here we define all containers that we would like to run together in this pod. The first part is the Wordpress container:

- image: wordpress:4.9-apache
  name: wordpress
  env:
  - name: WORDPRESS_DB_HOST
    value: localhost:/tmp/proxysql.sock
  - name: WORDPRESS_DB_USER
    value: wordpress
  - name: WORDPRESS_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-pass
        key: password
  ports:
  - containerPort: 80
    name: wordpress
  volumeMounts:
  - name: wordpress-persistent-storage
    mountPath: /var/www/html
  - name: shared-data
    mountPath: /tmp

In this section, we are telling Kubernetes to deploy Wordpress 4.9 using Apache web server and we gave the container the name "wordpress". We also want Kubernetes to pass a number of environment variables:

  • WORDPRESS_DB_HOST - The database host. Since our ProxySQL container resides in the same Pod with the Wordpress container, it's more secure to use a ProxySQL socket file instead. The format to use socket file in Wordpress is "localhost:{path to the socket file}". By default, it's located under /tmp directory of the ProxySQL container. This /tmp path is shared between Wordpress and ProxySQL containers by using "shared-data" volumeMounts as shown further down. Both containers have to mount this volume to share the same content under /tmp directory.
  • WORDPRESS_DB_USER - Specify the wordpress database user.
  • WORDPRESS_DB_PASSWORD - The password for WORDPRESS_DB_USER. Since we do not want to expose the password in this file, we can hide it using Kubernetes Secrets. Here we instruct Kubernetes to read the "mysql-pass" Secret resource instead. The Secret has to be created in advance of the pod deployment, as explained further down.

We also want to publish port 80 of the container for the end user. The Wordpress content stored inside /var/www/html in the container will be mounted into our persistent storage running on NFS.

Next, we define the ProxySQL container:

- image: severalnines/proxysql:1.4.12
  name: proxysql
  volumeMounts:
  - name: proxysql-config
    mountPath: /etc/proxysql.cnf
    subPath: proxysql.cnf
  - name: shared-data
    mountPath: /tmp
  ports:
  - containerPort: 6033
    name: proxysql

In the above section, we are telling Kubernetes to deploy ProxySQL using the severalnines/proxysql image, version 1.4.12. We also want Kubernetes to mount our custom, pre-configured configuration file and map it to /etc/proxysql.cnf inside the container. There will be a volume called "shared-data" which maps to the /tmp directory, shared with the Wordpress image - a temporary directory that shares a pod's lifetime. This allows the ProxySQL socket file (/tmp/proxysql.sock) to be used by the Wordpress container when connecting to the database, bypassing TCP/IP networking.

The last part is the "volumes" section:

volumes:
- name: wordpress-persistent-storage
  persistentVolumeClaim:
    claimName: wp-pv-claim
- name: proxysql-config
  configMap:
    name: proxysql-configmap
- name: shared-data
  emptyDir: {}

Kubernetes will have to create three volumes for this pod:

  • wordpress-persistent-storage - Use the PersistentVolumeClaim resource to map NFS export into the container for persistent data storage for Wordpress content.
  • proxysql-config - Use the ConfigMap resource to map the ProxySQL configuration file.
  • shared-data - Use the emptyDir resource to mount a shared directory for our containers inside the Pod. emptyDir resource is a temporary directory that shares a pod's lifetime.

Therefore, based on our YAML definition above, we have to prepare a number of Kubernetes resources before we can begin to deploy the "blog" pod:

  1. PersistentVolume and PersistentVolumeClaim - To store the web contents of our Wordpress application, so when the pod is rescheduled to another worker node, we won't lose the last changes.
  2. Secrets - To hide the Wordpress database user password inside the YAML file.
  3. ConfigMap - To map the configuration file to the ProxySQL container, so when it's rescheduled to another node, Kubernetes can automatically remount it again.
PersistentVolume and PersistentVolumeClaim

A good persistent storage for Kubernetes should be accessible by all Kubernetes nodes in the cluster. For the sake of this blog post, we used NFS as the PersistentVolume (PV) provider because it's easy and supported out-of-the-box. The NFS server is located somewhere outside of our Kubernetes network and we have configured it to allow all Kubernetes nodes with the following line inside /etc/exports:

/nfs 192.168.55.*(rw,sync,no_root_squash,no_all_squash)

Take note that the NFS client package must be installed on all Kubernetes nodes. Otherwise, Kubernetes won't be able to mount the NFS share correctly. On all nodes:

$ sudo apt install nfs-common   # Ubuntu/Debian
$ yum install nfs-utils         # RHEL/CentOS

Also, make sure on the NFS server, the target directory exists:

(nfs-server)$ mkdir /nfs/kubernetes/wordpress

Then, create a file called wordpress-pv-pvc.yml and add the following lines:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp-pv
  labels:
    app: blog
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 3Gi
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfs/kubernetes/wordpress
    server: 192.168.55.200
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wp-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  selector:
    matchLabels:
      app: blog
      tier: frontend

In the above definition, we would like Kubernetes to allocate 3GB of volume space on the NFS server for our Wordpress container. Take note that for production usage, NFS should be configured with an automatic provisioner and a storage class.

Create the PV and PVC resources:

$ kubectl create -f wordpress-pv-pvc.yml

Verify that those resources are created; the status must be "Bound":

$ kubectl get pv,pvc
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
persistentvolume/wp-pv   3Gi        RWO            Recycle          Bound    default/wp-pvc                           22h

NAME                           STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/wp-pvc   Bound    wp-pv    3Gi        RWO                           22h

Secrets

The first step is to create a Secret to be used by the Wordpress container for the WORDPRESS_DB_PASSWORD environment variable, simply because we don't want to expose the password in clear text inside the YAML file.

Create a secret resource called mysql-pass and pass the password accordingly:

$ kubectl create secret generic mysql-pass --from-literal=password=passw0rd

Verify that our secret is created:

$ kubectl get secrets mysql-pass
NAME         TYPE     DATA   AGE
mysql-pass   Opaque   1      7h12m

ConfigMap

We also need to create a ConfigMap resource for our ProxySQL container. A Kubernetes ConfigMap file holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data. ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.

Since our database server is already running on bare-metal servers with a static hostname and IP address plus a static monitoring username and password, in this use case the ConfigMap will store the pre-built ProxySQL configuration that we want to use.

First create a text file called proxysql.cnf and add the following lines:

datadir="/var/lib/proxysql" admin_variables= { admin_credentials="admin:adminpassw0rd" mysql_ifaces="0.0.0.0:6032" refresh_interval=2000 } mysql_variables= { threads=4 max_connections=2048 default_query_delay=0 default_query_timeout=36000000 have_compress=true poll_timeout=2000 interfaces="0.0.0.0:6033;/tmp/proxysql.sock" default_schema="information_schema" stacksize=1048576 server_version="5.1.30" connect_timeout_server=10000 monitor_history=60000 monitor_connect_interval=200000 monitor_ping_interval=200000 ping_interval_server_msec=10000 ping_timeout_server=200 commands_stats=true sessions_sort=true monitor_username="proxysql" monitor_password="proxysqlpassw0rd" } mysql_servers = ( { address="192.168.55.171" , port=3306 , hostgroup=10, max_connections=100 }, { address="192.168.55.172" , port=3306 , hostgroup=10, max_connections=100 }, { address="192.168.55.171" , port=3306 , hostgroup=20, max_connections=100 }, { address="192.168.55.172" , port=3306 , hostgroup=20, max_connections=100 } ) mysql_users = ( { username = "wordpress" , password = "passw0rd" , default_hostgroup = 10 , active = 1 } ) mysql_query_rules = ( { rule_id=100 active=1 match_pattern="^SELECT .* FOR UPDATE" destination_hostgroup=10 apply=1 }, { rule_id=200 active=1 match_pattern="^SELECT .*" destination_hostgroup=20 apply=1 }, { rule_id=300 active=1 match_pattern=".*" destination_hostgroup=10 apply=1 } ) mysql_replication_hostgroups = ( { writer_hostgroup=10, reader_hostgroup=20, comment="MySQL Replication 5.7" } )

Pay extra attention to the "mysql_servers" and "mysql_users" sections, where you might need to modify the values to suit your database cluster setup. In this case, we have two database servers running in MySQL Replication as summarized in the following Topology screenshot taken from ClusterControl:

All writes should go to the master node while reads are forwarded to hostgroup 20, as defined under the "mysql_query_rules" section. That's the basics of read/write splitting, and we want to make full use of it.

Then, import the configuration file into ConfigMap:

$ kubectl create configmap proxysql-configmap --from-file=proxysql.cnf
configmap/proxysql-configmap created

Verify if the ConfigMap is loaded into Kubernetes:

$ kubectl get configmap
NAME                 DATA   AGE
proxysql-configmap   1      45s

Deploying the Pod

Now we should be good to deploy the blog pod. Send the deployment job to Kubernetes:

$ kubectl create -f blog-deployment.yml

Verify the pod status:

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
blog-54755cbcb5-t4cb7   2/2     Running   0          100s

It must show 2/2 under the READY column, indicating there are two containers running inside the pod. Use the -c flag to check the logs of the Wordpress and ProxySQL containers inside the blog pod:

$ kubectl logs blog-54755cbcb5-t4cb7 -c wordpress
$ kubectl logs blog-54755cbcb5-t4cb7 -c proxysql

From the ProxySQL container log, you should see the following lines:

2018-10-20 08:57:14 [INFO] Dumping current MySQL Servers structures for hostgroup ALL
HID: 10 , address: 192.168.55.171 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 100 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 10 , address: 192.168.55.172 , port: 3306 , weight: 1 , status: OFFLINE_HARD , max_connections: 100 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 20 , address: 192.168.55.171 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 100 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:
HID: 20 , address: 192.168.55.172 , port: 3306 , weight: 1 , status: ONLINE , max_connections: 100 , max_replication_lag: 0 , use_ssl: 0 , max_latency_ms: 0 , comment:

HID 10 (writer hostgroup) must have only one ONLINE node (indicating a single master) and the other host must be at least in OFFLINE_HARD status. For HID 20, it's expected to be ONLINE for all nodes (indicating multiple read replicas).

To get a summary of the deployment, use kubectl describe:

$ kubectl describe deployments blog

Our blog is now running; however, we can't access it from outside of the Kubernetes network without configuring the service, as explained in the next section.

Creating the Blog Service

The last step is to attach a service to our pod, to ensure that our Wordpress blog pod is accessible from the outside world. Create a file called blog-svc.yml and paste the following lines:

apiVersion: v1
kind: Service
metadata:
  name: blog
  labels:
    app: blog
    tier: frontend
spec:
  type: NodePort
  ports:
    - name: blog
      nodePort: 30080
      port: 80
  selector:
    app: blog
    tier: frontend

Create the service:

$ kubectl create -f blog-svc.yml

Verify if the service is created correctly:

root@kube1:~/proxysql-blog# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
blog         NodePort    10.96.140.37   <none>        80:30080/TCP   26s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        43h

Port 80 published by the blog pod is now mapped to the outside world via port 30080. We can access our blog at http://{any_kubernetes_host}:30080/ and should be redirected to the Wordpress installation page. If we proceed with the installation, it would skip the database connection part and directly show this page:

This indicates that our MySQL and ProxySQL settings are correctly configured inside the wp-config.php file. Otherwise, you would be redirected to the database configuration page.

Our deployment is now complete.

Managing ProxySQL Container inside a Pod

Failover and recovery are expected to be handled automatically by Kubernetes. For example, if a Kubernetes worker node goes down, the pod will be recreated on the next available node after --pod-eviction-timeout (default: 5 minutes). If the container crashes or is killed, Kubernetes will replace it almost instantly.

Some common management tasks are expected to be different when running within Kubernetes, as shown in the next sections.

Scaling Up and Down

In the above configuration, we were deploying one replica in our deployment. To scale up, simply change the spec.replicas value accordingly by using kubectl edit command:

$ kubectl edit deployment blog

It will open up the deployment definition in a default text editor; simply change the spec.replicas value to something higher, for example, "replicas: 3". Then, save the file and immediately check the rollout status by using the following command:

$ kubectl rollout status deployment blog
Waiting for deployment "blog" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "blog" rollout to finish: 2 of 3 updated replicas are available...
deployment "blog" successfully rolled out

At this point, we have three blog pods (Wordpress + ProxySQL) running simultaneously in Kubernetes:

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
blog-54755cbcb5-6fnqn   2/2     Running   0          11m
blog-54755cbcb5-cwpdj   2/2     Running   0          11m
blog-54755cbcb5-jxtvc   2/2     Running   0          22m

At this point, our architecture is looking something like this:

Take note that it might require more customization than our current configuration to run Wordpress smoothly in a horizontal-scaled production environment (think about static contents, session management and others). Those are actually beyond the scope of this blog post.

Scaling down procedures are similar.
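
As a side note, the same scale-up (or scale-down) can be done without opening an editor by using kubectl's scale command against the deployment, for example:

$ kubectl scale deployment blog --replicas=3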

Configuration Management

Configuration management is important in ProxySQL. This is where the magic happens: you can define your own set of query rules to do query caching, firewalling and rewriting. Contrary to common practice, where ProxySQL would be configured via the Admin console and pushed into persistence by using "SAVE .. TO DISK", we will stick with configuration files only to make things more portable in Kubernetes. That's the reason we are using ConfigMaps.

Since we are relying on our centralized configuration stored by Kubernetes ConfigMaps, there are a number of ways to perform configuration changes. Firstly, by using the kubectl edit command:

$ kubectl edit configmap proxysql-configmap

It will open up the configuration in a default text editor, and you can directly make changes to it and save the text file once done. Otherwise, recreating the ConfigMap should also do:

$ vi proxysql.cnf # edit the configuration first
$ kubectl delete configmap proxysql-configmap
$ kubectl create configmap proxysql-configmap --from-file=proxysql.cnf

After the configuration is pushed into ConfigMap, restart the pod or container as shown in the Service Control section. Configuring the container via ProxySQL admin interface (port 6032) won't make it persistent after pod rescheduling by Kubernetes.

Service Control

Since the two containers inside a pod are tightly coupled, the best way to apply the ProxySQL configuration changes is to force Kubernetes to do pod replacement. Suppose we now have three blog pods after scaling up:

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
blog-54755cbcb5-6fnqn   2/2     Running   0          31m
blog-54755cbcb5-cwpdj   2/2     Running   0          31m
blog-54755cbcb5-jxtvc   2/2     Running   1          22m

Use the following command to replace one pod at a time:

$ kubectl get pod blog-54755cbcb5-6fnqn -n default -o yaml | kubectl replace --force -f -
pod "blog-54755cbcb5-6fnqn" deleted
pod/blog-54755cbcb5-6fnqn

Then, verify with the following:

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
blog-54755cbcb5-6fnqn   2/2     Running   0          31m
blog-54755cbcb5-cwpdj   2/2     Running   0          31m
blog-54755cbcb5-qs6jm   2/2     Running   1          2m26s

You will notice that the most recent pod has been restarted by looking at the AGE and RESTARTS columns; it came up with a different pod name. Repeat the same steps for the remaining pods. Otherwise, you can also use the "docker kill" command to kill the ProxySQL container manually inside the Kubernetes worker node. For example:

(kube-worker)$ docker kill $(docker ps | grep -i proxysql_blog | awk {'print $1'})

Kubernetes will then replace the killed ProxySQL container with a new one.

Monitoring

Use the kubectl exec command to execute SQL statements via the mysql client. For example, to monitor query digests:

$ kubectl exec -it blog-54755cbcb5-29hqt -c proxysql -- mysql -uadmin -p -h127.0.0.1 -P6032
mysql> SELECT * FROM stats_mysql_query_digest;

Or with a one-liner:

$ kubectl exec -it blog-54755cbcb5-29hqt -c proxysql -- mysql -uadmin -p -h127.0.0.1 -P6032 -e 'SELECT * FROM stats_mysql_query_digest'

By changing the SQL statement, you can monitor other ProxySQL components or perform any administration tasks via this Admin console. Again, it will only persist during the ProxySQL container lifetime and won't get persisted if the pod is rescheduled.

Final Thoughts

ProxySQL holds a key role if you want to scale your application containers and have an intelligent way to access a distributed database backend. There are a number of ways to deploy ProxySQL on Kubernetes to support application growth when running at scale. This blog post only covers one of them.

In an upcoming blog post, we are going to look at how to run ProxySQL in a centralized approach by using it as a Kubernetes service.

Tags:  proxysql docker kubernetes replication MySQL MariaDB

MySQL Connector/NET 8.0.13 has been released

Dear MySQL users,

MySQL Connector/NET 8.0.13 is the first version to support
Entity Framework Core 2.1 and the third general availability release
of MySQL Connector/NET to add support for the new X DevAPI, which
enables application developers to write code that combines the
strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/index.html. For more
information about how the X DevAPI is implemented in Connector/NET, see
http://dev.mysql.com/doc/dev/connector-net.

NuGet packages provide functionality at a project level. To get the
full set of features available in Connector/NET such as availability
in the GAC, integration with Visual Studio’s Entity Framework Designer
and integration with MySQL for Visual Studio, installation through
the MySQL Installer or the stand-alone MSI is required.

For general documentation about how to get started using MySQL
as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

To download MySQL Connector/NET 8.0.13, see
http://dev.mysql.com/downloads/connector/net/

Installation instructions can be found at
https://dev.mysql.com/doc/connector-net/en/connector-net-installation.html

Changes in MySQL Connector/NET 8.0.13 (2018-10-22, General Availability)

   * Important Changes
   * Functionality Added or Changed
   * Bugs Fixed

Important Changes

   * The default value for the SslMode connection option now differs based on the protocol used to make the connection. The Preferred mode has been reintroduced in this release (see Options for Both Classic MySQL Protocol and X Protocol, http://dev.mysql.com/doc/connector-net/en/connector-net-8-0-connection-options.html#connector-net-8-0-connection-options-classic-xprotocol). To summarize the default SslMode values in the Connector/NET 8.0 (and 7.0) release series:

     Connector/NET 8.0.13: Preferred mode is the default for classic MySQL protocol connections only. Required mode is the default for X Protocol connections only (Preferred mode is not available for use with X Protocol).

     Connector/NET 8.0.8 to 8.0.12: Preferred mode is not supported for any connections. Required mode is the default for both classic MySQL protocol and X Protocol connections.

     Connector/NET 7.0.0 to 7.0.7: Preferred mode is the default for both classic MySQL protocol and X Protocol connections. (Bug #28687769)

Functionality Added or Changed

   * Document Store: An incremental improvement was made to the performance of session creation with a connection string. (Bug #28343655)

   * Support for EF Core 2.1 was added to Connector/NET 8.0.13 and support for EF Core 2.0 was discontinued in the same connector version. Other versions of Connector/NET continue to support EF Core 2.0 (see Entity Framework Core Support, http://dev.mysql.com/doc/connector-net/en/connector-net-entityframework-core.html).

   * The ConnectionTimeout connection option and property were reimplemented as the Connect-Timeout option (and the ConnectTimeout property) for X Protocol operations. Some aspects of the timeout behavior were changed (see Options for X Protocol Only, http://dev.mysql.com/doc/connector-net/en/connector-net-8-0-connection-options.html#connector-net-8-0-connection-options-xprotocol). The new ConnectTimeout property was added to the MySqlX.XDevAPI.MySqlXConnectionStringBuilder class and the existing ConnectionTimeout property was removed. No modifications were made to the existing implementation of the ConnectionTimeout option (or property) for classic MySQL operations.

   * Connector/NET now provides connection pooling for applications using the X Protocol. The implementation includes the new static MySQLX.GetClient method that takes two parameters: connectionData (connection string or URI) and connectionOptions (JSON-formatted string or anonymous object containing the pooling options). Method overloading enables the following type combinations:

     MySQLX.GetClient(Object, Object)
     MySQLX.GetClient(Object, String)
     MySQLX.GetClient(String, Object)
     MySQLX.GetClient(String, String)

     GetClient returns the new Client object, which retrieves an existing and currently unused network connection from the pool, resets it, and uses it. Closing a session marks the underlying connection as unused and returns it to the pool. Connection options are configured as follows:

     // Connection options of type String
     Client client = MySQLX.GetClient(ConnectionData, "{ \"pooling\": { \"maxSize\": 5, \"queueTimeout\": 5000 } }");

     // Connection options of type Object (anonymous object)
     Client client = MySQLX.GetClient(ConnectionData, new { pooling = new { maxSize = 5, queueTimeout = 5000 } });

     The new keyword must be used twice when the connection options are configured using an anonymous object. Connection options are: enabled, maxSize, maxIdleTime, and queueTimeout. For more information, see Connection and Session Concepts (http://dev.mysql.com/doc/x-devapi-userguide/en/devapi-connection-concepts.html).

Bugs Fixed

   * Document Store: Connector-side validation performed when the Collection.CreateIndex method was called duplicated some of the checks already made by the server. The connector now only validates that the indexDefinition value passed to the method is a valid JSON document with the correct structure. (Bug #28343828)

   * EF Core: An invalid syntax error was generated when a new property (defined as numeric, has a default value, and is not a primary key) was added to an entity that already contained a primary-key column with the AUTO_INCREMENT attribute. This fix validates that the entity property (column) is a primary key first before adding the attribute. (Bug #28293927)

   * Document Store: A mixed alphanumeric value such as "1address" when used with the Set and Unset methods caused the operations to throw an exception. This fix resolves the way mixed values are converted into tokens to generate only one token as an identifier. (Bug #28094094)

   * EF Core: The implementation of some methods required to scaffold an existing database were incomplete. (Bug #27898343, Bug #90368)

   * Attempts to create a new foreign key from within an application resulted in an exception when the key was generated by a server in the MySQL 8.0 release series. (Bug #27715069)

   * A variable of type POINT when used properly within an application targeting MySQL 8.0 generated an SQL syntax error. (Bug #27715007)

   * The case-sensitive lookup of field ordinals was initialized using case-insensitive comparison logic. This fix removes the original case-sensitive lookup. (Bug #27285641, Bug #88950)

   * The MySql.Data.Types.MySqlGeometry constructor called with an array of bytes representing an empty geometry collection generated an ArgumentOutOfRangeException exception, rather than creating the type as expected. Thanks to Peet Whittaker for the patch. (Bug #26421346, Bug #86974)

   * Slow connections made to MySQL were improved by reducing the frequency and scope of operating system details required by the server to establish and maintain a connection. (Bug #22580399, Bug #80030)

On Behalf of MySQL Release Engineering team,
Surabhi Bhat

MySQL Connector/C++ 8.0.13 has been released

Dear MySQL users,

MySQL Connector/C++ 8.0.13 is a new release version of the MySQL
Connector/C++ 8.0 series.

Connector/C++ 8.0 can be used to access MySQL implementing Document Store or in a traditional way, using SQL queries. It allows writing both C++ and plain C applications using X DevAPI and X DevAPI for C. It also supports the legacy API of Connector/C++ 1.1 based on JDBC4.

To learn more about how to write applications using X DevAPI, see “X DevAPI User Guide”

https://dev.mysql.com/doc/x-devapi-userguide/en/

See also “X DevAPI Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/devapi_ref.html

and “X DevAPI for C Reference” at

https://dev.mysql.com/doc/dev/connector-cpp/xapi_ref.html

For generic information on using Connector/C++ 8.0, see

https://dev.mysql.com/doc/dev/connector-cpp/

For general documentation about how to get started using MySQL
as a document store, see

http://dev.mysql.com/doc/refman/8.0/en/document-store.html

To download MySQL Connector/C++ 8.0.13, see the “Generally Available (GA)
Releases” tab at

https://dev.mysql.com/downloads/connector/cpp/

Changes in MySQL Connector/C++ 8.0.13 (2018-10-22, General Availability)

   * Character Set Support
   * Packaging Notes
   * X DevAPI Notes
   * Functionality Added or Changed
   * Bugs Fixed

Character Set Support

   * For connections to the server made using the legacy JDBC API (that is, not made using X DevAPI or X DevAPI for C), the default connection character set is now utf8mb4 rather than utf8. Connections to the server made using X DevAPI or X DevAPI for C continue to use the connection character set determined by the server. (Bug #28204677)

Packaging Notes

   * Connector/C++ 32-bit MSI packages are now available for Windows. These 32-bit builds enable use of the legacy JDBC connector.

   * Connector/C++ compressed tar file packages are now available for Solaris. It is also possible to build Connector/C++ from source on Solaris. For platform-specific build notes, see Building Connector/C++ Applications: Platform-Specific Considerations (http://dev.mysql.com/doc/connector-cpp/8.0/en/connector-cpp-apps-platform-considerations.html).

X DevAPI Notes

   * Connector/C++ now provides connection pooling for applications using X Protocol. This capability is based on client objects, a new type of X DevAPI object. A client can be used to create sessions, which take connections from a pool managed by that client. For a complete description, see Connecting to a Single MySQL Server Using Connection Pooling (http://dev.mysql.com/doc/x-devapi-userguide/en/connecting-connection-pool.html).

     X DevAPI example:

     using namespace mysqlx;

     Client cli("user:password@host_name/db_name", ClientOption::POOL_MAX_SIZE, 7);
     Session sess = cli.getSession();

     // use sess as before

     cli.close();  // close session sess

     X DevAPI for C example:

     char error_buf[255];
     int  error_code;

     mysqlx_client_t *cli = mysqlx_get_client_from_url(
       "user:password@host_name/db_name", "{ \"maxSize\": 7 }", error_buf, &error_code);

     mysqlx_session_t *sess = mysqlx_get_session_from_client(cli);

     // use sess as before

     mysqlx_close_client(cli);  // close session sess

   * For X DevAPI, a new connect-timeout option can be specified in connection strings or URIs to indicate a connection timeout in milliseconds. The SessionSettings::Options object supports a new CONNECT_TIMEOUT option. For X DevAPI for C, the mysqlx_opt_type_t constant is MYSQLX_OPT_CONNECT_TIMEOUT together with the OPT_CONNECT_TIMEOUT() macro. If no timeout option is specified, the default is 10000 (10 seconds). A value of 0 disables the timeout. The following examples set the connection timeout to 10 milliseconds:

     X DevAPI examples:

     Session sess("user@host/db?connect-timeout=10");

     Session sess(..., SessionOption::CONNECT_TIMEOUT, 10, ...);

     Session sess(...,
                  SessionOption::CONNECT_TIMEOUT, std::chrono::milliseconds(10),
                  ...);

     X DevAPI for C example:

     mysqlx_session_options_t *opt = mysqlx_session_options_new();
     mysqlx_session_option_set(opt, ..., OPT_CONNECT_TIMEOUT(10), ...);

Functionality Added or Changed

   * JSON: Connector/C++ now uses RapidJSON for improved performance of operations that involve parsing JSON strings. There are no user-visible API changes for X DevAPI or X DevAPI for C.

Bugs Fixed

   * On SLES 15, Connector/C++ installation failed if libmysqlcppcon7 was already installed. (Bug #28658120)

   * Applications that were statically linked to the legacy JDBC connector could encounter a read access violation at exit time due to nondeterministic global destruction order. (Bug #28525266, Bug #91820)

   * Configuring with -DCMAKE_BUILD_TYPE=Release did not work on Linux. (Bug #28045274)

   * Field references in .having() expressions could be interpreted incorrectly and produce errors. (Bug #26310713)

Enjoy and thanks for the support!

On Behalf of Oracle/MySQL Release Engineering Team,
Balasubramanian Kandasamy

MySQL Connector/Node.js 8.0.13 has been released

Dear MySQL users,

MySQL Connector/Node.js is a new Node.js driver for use with the X
DevAPI. This release, v8.0.13, is a maintenance release of the
MySQL Connector/Node.js 8.0 series.

The X DevAPI enables application developers to write code that combines
the strengths of the relational and document models using a modern,
NoSQL-like syntax that does not assume previous experience writing
traditional SQL.

MySQL Connector/Node.js can be downloaded through npm (see
https://www.npmjs.com/package/@mysql/xdevapi for details) or from
https://dev.mysql.com/downloads/connector/nodejs/.

To learn more about how to write applications using the X DevAPI, see
http://dev.mysql.com/doc/x-devapi-userguide/en/. For more information
about how the X DevAPI is implemented in MySQL Connector/Node.js, and
its usage, see http://dev.mysql.com/doc/dev/connector-nodejs/.

Please note that the X DevAPI requires MySQL Server version 8.0
or higher with the X Plugin enabled. For general documentation
about how to get started using MySQL as a document store, see
http://dev.mysql.com/doc/refman/8.0/en/document-store.html.

Changes in MySQL Connector/Node.js 8.0.13 (2018-10-22, General availability)

Functionality Added or Changed

* To go with the existing asynchronous
mysqlx.getSession(conn_str) method, a new synchronous
mysqlx.getClient(conn_str, options) method was added. It
creates a connection pool handler that provides an
asynchronous getSession() method to create and retrieve
connections from the pool. The connection pooling options
are:

+ enabled: enables or disables connection pooling;
boolean and defaults to true.

+ maxSize: maximum number of connections available in
the pool; positive integer and defaults to 25.

+ maxIdleTime: maximum number of milliseconds a
connection can be idle in the queue before being
closed; integer >= 0 and defaults to 0 (infinite).

+ queueTimeout: maximum number of milliseconds a
request will wait for a connection to become
available; integer >= 0 and defaults to 0
(infinite).
This is different from connectTimeout, which is used for
non-pooling sessions. In a pooling scenario there might
already be connections in the pool, and queueTimeout
controls how long to wait for a connection from the
pool.
Example usage:
var mysqlx = require('@mysql/xdevapi')
var client = mysqlx.getClient(
  { user: 'root', host: 'localhost', port: 33060 },
  { pooling: { enabled: true, maxIdleTime: 5000, maxSize: 25, queueTimeout: 20000 } }
);

client.getSession()
  .then(session => {
    console.log(session.inspect())
    return session.close() // the connection becomes idle in the client pool
  })
  .then(() => {
    return client.getSession()
  })
  .then(session => {
    console.log(session.inspect())
    return client.close() // closes all connections and destroys the pool
  })

Closing a session attached to the pool makes the
connection available in the pool for subsequent
getSession() calls, while closing (destroying) the pool
effectively closes all server connections.
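
As a further sketch of how maxSize and queueTimeout interact
(connection parameters are placeholders, and the exact wording of
the timeout error is an assumption, not taken from the release
notes):

const mysqlx = require('@mysql/xdevapi');

// A pool of at most one connection; a second request waits at
// most 2 seconds for that connection to return to the pool.
const client = mysqlx.getClient(
  { user: 'root', host: 'localhost', port: 33060 },
  { pooling: { maxSize: 1, queueTimeout: 2000, maxIdleTime: 5000 } }
);

client.getSession()
  .then(first => {
    // The pool is exhausted, so this call waits up to queueTimeout
    // and rejects if no connection becomes available in time.
    return client.getSession()
      .catch(err => {
        console.log(err.message); // pool-wait timeout (wording may differ)
        return first.close();
      });
  })
  .then(() => client.close()); // destroy the pool when done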

* Added a connection timeout query parameter. This defines
the length of time (in milliseconds) the client waits for a
MySQL server to become available at the given network
addresses. It was added to both the mysqlx.getSession()
(non-pooling sessions) and mysqlx.getClient() (pooling
sessions) interfaces. This option defaults to 10000 (10
seconds). The value 0 disables the timeout, so the client
will wait until the underlying socket (platform
dependent) times out.
Similar to other option formatting rules, this option is
defined as connect-timeout (kebab-case) for URI
definitions and connectTimeout (camelCase) for plain
JavaScript configuration objects.
Example usage:
const mysqlx = require('@mysql/xdevapi');
var client = mysqlx.getClient('root@localhost?connect-timeout=5000')
client.getSession()
  .catch(err => {
    console.log(err.message) // "Connection attempt to the server was aborted. Timeout of 5000 ms was exceeded."
  })

// Or

const mysqlx = require('@mysql/xdevapi');
var client = mysqlx.getClient('mysqlx://root:passwd@[localhost:33060, 127.0.0.1:33060]?connect-timeout=5000')
client.getSession()
  .catch(err => {
    // connection could not be established after 10 seconds (5 seconds for each server)
    console.log(err.message); // "All server connection attempts were aborted. Timeout of 5000 ms was exceeded for each selected server."
  });

In a multi-host scenario, the connect-timeout value
applies to each individual host.
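
The URI examples above cover the kebab-case form; a minimal
sketch of the plain configuration object, assuming the camelCase
property name connectTimeout and placeholder connection
parameters:

const mysqlx = require('@mysql/xdevapi');

// connectTimeout is given in milliseconds and applies per host
const client = mysqlx.getClient(
  { user: 'root', host: 'localhost', port: 33060, connectTimeout: 5000 },
  { pooling: { enabled: true } }
);

client.getSession()
  .then(session => session.close())
  .then(() => client.close())
  .catch(err => console.log(err.message));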

Bugs Fixed

* Improved the handling of X Protocol global notices by
properly logging and then ignoring non-fatal errors, and
making the connection unusable for subsequent operations
in the case of a fatal error. (Bug #28653781)

* Calling getCollationName() on non-textual fields, such as
INT, threw the following error “TypeError: Cannot read
property ‘collation’ of undefined”. (Bug #28608923)

* The fields() method did not function with valid
expressions generated by the expr() method. (Bug
#28409639)

* The object returned by Session.inspect() now includes a
‘user’ property in addition to the ‘dbUser’ property; both
contain the same value. (Bug #28362115)
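
A minimal sketch of the last fix (connection parameters are
placeholders): both properties should now be present on the
inspected session and carry the same value.

const mysqlx = require('@mysql/xdevapi');

mysqlx.getSession({ user: 'root', host: 'localhost', port: 33060 })
  .then(session => {
    const info = session.inspect();
    // 'user' and 'dbUser' now both report the authenticated user
    console.log(info.user === info.dbUser); // true
    return session.close();
  });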

On Behalf of the MySQL/Oracle Release Engineering Team,
Hery Ramilison

MySQL Connector/ODBC 8.0.13 has been released

Dear MySQL users,

MySQL Connector/ODBC 8.0.13 is a new version in the MySQL Connector/ODBC 8.0 series, the ODBC driver for the MySQL Server.

The available downloads include both a Unicode driver and an ANSI driver based on the same modern codebase. Please select the driver type you need based on the type of your application: Unicode or ANSI. Server-side prepared statements are enabled by default. The driver is suitable for use with any MySQL server version from 5.5 onwards.

This release of the MySQL ODBC driver conforms to the ODBC 3.8 specification. It contains implementations of key 3.8 features, including self-identification as an ODBC 3.8 driver, streaming of output parameters (supported for binary types only), and support for the SQL_ATTR_RESET_CONNECTION connection attribute (Unicode driver only).

The release is now available in source and binary form for a number of platforms from our download pages at

https://dev.mysql.com/downloads/connector/odbc/

For information on installing, please see the documentation at

https://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation.html

Changes in MySQL Connector/ODBC 8.0.13 (2018-10-22, General Availability)

Functionality Added or Changed

  • Added dynamic libmysql linking support via the -DMYSQLCLIENT_STATIC_LINKING:BOOL=TRUE|FALSE option; defaults to FALSE to enable dynamic linking.

Bugs Fixed

  • Fixed column metadata handling with Microsoft Access.
    (Bug #28670725, Bug #91856)
  • The following obsolete options were removed: NO_SCHEMA (use NO_CATALOG instead), DISABLE_SSL_DEFAULT (use SSLMODE instead), and SSL_ENFORCE (use SSLMODE instead). (Bug #28407520)
  • The ODBC driver returned 0 for the SQL_MAX_SCHEMA_NAME_LEN attribute; it now returns 64, the maximum length of a MySQL schema name.
    (Bug #28385722)
  • Because the MySQL ODBC driver ignored the SQL_RD_OFF value for the SQL_ATTR_RETRIEVE_DATA attribute, it kept writing into the data buffers even when the application had explicitly requested that no data be written there. This led to write access violation errors.
    (Bug #28098219, Bug #91060)

On Behalf of Oracle/MySQL Release Engineering Team,
Kent Boortz

Announcing MySQL Server 8.0.13, 5.7.24, 5.6.42, and 5.5.62

MySQL Server 8.0.13, 5.7.24, 5.6.42, and 5.5.62, new versions of the popular Open Source Database Management System, have been released. These releases are recommended for use on production systems. For an overview of what’s new, please see
http://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.5/en/mysql-nutshell.html
For information on installing the release on new servers, please see the MySQL installation […]
