Planet MySQL

LaraconEU 2016 - Overview

The last time I went to LaraconEU was in 2014, as a speaker in the community track. It was my first time speaking in English at an international event, and I had the jitters.

A lot of friends were made, and it was nice to see all of them again this year.

What I also like about LaraconEU is that it is a Laravel conference that doesn’t always talk about Laravel, being a bit less insular than one would imagine.

Shawn McCool and his team did a great job this year, and I was very honoured to be in Track #1 (the Blue Track).

The first round of applause when I was introducing myself and saying this was my second Laracon made me quite emotional and triggered something in my mind:

These people are there to soak in the knowledge you have to share, so give your best, and I confess I tried exactly that.

The venue was amazing, I don’t have a single complaint to make regarding the staff and the organisers, they made everything possible to make people comfortable and I could see them working hard behind the scenes.

The problem with speaking on the second day of a two-day conference is that you don’t get the most out of the talks that come before yours: insecurity, anxiety and fear all play a role there.

The talks I saw were really well delivered, and you could tell they were given by people who “knew their shit”, as we joked. There were also a lot of cheesy jokes to break the ice.

My only regret is not having seen more of the speakers’ talks, which I will definitely rectify as soon as the videos come out. Making new friends this year was also awesome; keep in touch!

Joind.in: Making the most out of MySQL

Slides:

Making the most out of MySQL from Gabriela D'Ávila

Google Cloud SQL Second Generation is available

Recently Google announced that the second generation of Cloud SQL has left beta and is generally available. I decided to take a look, because the last time I checked it, it looked good, but I couldn’t take it seriously because of the nonexistent SLA. I have a few databases running on Amazon RDS, but I don’t […]

Develop By Example – Working with custom queries using Node.js

In all the previous blog posts we covered many examples of how to perform actions on a MySQL server set up as a document store. In all the code examples we used the XSession to communicate with the MySQL server.

The XSession is a logical session that abstracts our connection. With it, we can be connected to one or more servers, and it also gives us the support required to work with collections and relational tables. But there is something that the XSession does not provide: full SQL language support.

When you need to execute a query that is not supported by an XSession, use a NodeSession instead. The NodeSession object can be useful, for example, when you want to connect to a specific MySQL server and do operations specifically on this server, or when you want to execute a specific function or create a stored procedure.

The following example demonstrates how to use a NodeSession object.

var mysqlx = require('mysqlx');

mysqlx.getNodeSession({
  host: 'host',
  port: '33060',
  dbUser: 'root',
  dbPassword: 'my pass'
}).then(function (nodesession) {
  nodesession.executeSql("USE mySchema")
    .execute()
    .then(result => {
      nodesession.executeSql("SELECT * FROM myColl LIMIT 1")
        .execute(function (result) {
          console.log(result[0]);
          nodesession.close();
        });
    }).catch(function (err) {
      console.log(err.message);
      console.log(err.stack);
    });
});

In the previous example, we use the method getNodeSession to get the NodeSession object (nodesession). Then we call the nodesession object’s executeSql method to set the schema that we are going to use. To execute the query, we need to call the execute method. When the query that sets the current database completes, it returns a Result object (result) that contains information about the execution of the query. In this instance, the result object does not contain any useful information other than reporting the success or failure of the operation.

Then, we call the executeSql method again, setting a query that may return one record. Once the query is executed, a result object is received, but in this case it contains a row returned from the query, represented as an array object. Each item in the array represents a column of the row. We access the array object by index to get the information we require; in the code we get the value of column 0 (index 0) and write it to the console.

Of course, the previous example can be accomplished without using a NodeSession object. For that reason, in the next example we are going to create a more complex query that cannot be handled by an XSession object.

Suppose that we have a collection where we store the reviews for movies, and each review contains a rating. We want to get the 10 best-rated movies based on the average rating each movie has. Currently, there is no support for calculating the average using an XSession object, so for this case we need a NodeSession.

The next example demonstrates how to accomplish what we want.

var mysqlx = require('mysqlx');

mysqlx.getNodeSession({
  host: 'host',
  port: '33060',
  dbUser: 'root',
  dbPassword: 'my pass'
}).then(function (nodesession) {
  var query = 'select movie_name from' +
    ' (select JSON_EXTRACT(doc, "$.movie_name")' +
    ' as movie_name, round(avg(JSON_EXTRACT(doc,' +
    ' "$.rate")), 2) as average from mySchema.reviews' +
    ' group by movie_name) as bestrated' +
    ' order by average desc limit 10;';
  const execSql = nodesession.executeSql(query);
  execSql.execute(function (result) {
    console.log(result[0]);
  })
  .catch(function (err) {
    console.log(err.message);
    console.log(err.stack);
  });
  nodesession.close();
}).catch(function (err) {
  console.log(err.message);
  console.log(err.stack);
});

In the previous example we get the NodeSession object (nodesession). Then we call the nodesession object’s executeSql method to define the query we want to execute. As you can see, in this example we don’t set the schema we want to use, but specify it in the query (mySchema.reviews). Once the query has executed, we log the returned results to the console, and at the end the node session is closed.

In the query you may notice that we are using a function called “JSON_EXTRACT”, which is one of the many functions introduced in MySQL Server 5.7 that enable you to create and query JSON documents. For more information about these functions, see the MySQL Reference Guide articles JSON Functions and Manipulation JSON Data.
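If you want to try JSON_EXTRACT outside of the Node.js code, a quick sketch from any ordinary SQL client looks like this (a collection is backed by a table with a JSON column named doc, which is why the queries above reference doc):

-- Pull two document fields out of the reviews collection as relational columns.
SELECT JSON_EXTRACT(doc, '$.movie_name') AS movie_name,
       JSON_EXTRACT(doc, '$.rate')       AS rate
FROM mySchema.reviews
LIMIT 3;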

See you in the next blog post.



MariaDB 10.0.27 now available

The MariaDB project is pleased to announce the immediate availability of MariaDB 10.0.27. See the release notes and changelog for details on this release.

  • Download MariaDB 10.0.27
  • Release Notes
  • Changelog
  • What is MariaDB 10.0?
  • MariaDB APT and YUM Repository Configuration Generator

Thanks, and enjoy MariaDB!

The post MariaDB 10.0.27 now available appeared first on MariaDB.org.



Percona Live Europe featured talk with Krzysztof Książek — MySQL Load Balancers – MaxScale, ProxySQL, HAProxy, MySQL Router & nginx

Welcome to the first Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!

In this Percona Live Europe featured talk, we’ll meet Krzysztof Książek, Senior Support Engineer at Severalnines AB. His talk will be on MySQL Load Balancers – MaxScale, ProxySQL, HAProxy, MySQL Router & nginx: a close up look. Load balancing MySQL connections and queries using HAProxy has been popular in the past years. However, the recent arrival of MaxScale, MySQL Router, ProxySQL and now also Nginx as a reverse proxy have changed the game. Which use cases are best for which solution, and how well do they integrate into your environment?

I had a chance to speak with Krzysztof and learn a bit more about these questions:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it?

Krzysztof: I was working as a system administrator in a hosting company in Poland. They had a need for a dedicated MySQL DBA. So I volunteered for the job. Later, I decided it was time to move on and joined Laine Campbell’s PalominoDB. I had a great time there, working with large MySQL deployments. At the beginning of 2015, I joined Severalnines as Senior Support Engineer. It was a no-brainer for me as I was always interested in building and managing scalable clusters based on MySQL — this is exactly what Severalnines helps its customers with.

Percona: Your talk is called “MySQL Load Balancers: MaxScale, ProxySQL, HAProxy, MySQL Router & nginx – a close up look.” Why are more load balancing solutions becoming available? What problems does load balancing solve for database environments?

Krzysztof: Load balancers are a must in highly scalable environments that are usually distributed across multiple servers or data centers. Large MySQL setups can quickly become very complex — many clusters, each containing numerous nodes and using different and interconnected technologies: MySQL replication, Galera Cluster. Load balancers not only help to maintain availability of the database tier by routing traffic to available nodes, but they also hide the complexity of the database tier from the application.

Percona: You call out three general groups of load balancers: application connectors, TCP reverse proxies, and SQL-aware load balancers. What workloads do these three groups generally address best?

Krzysztof: I wouldn’t say “workloads” — I’d say more like “use cases.” Each of those groups will handle all types of workloads but they do it differently. TCP reverse proxies like HAProxy or nginx will just route packets: fast and robust. They won’t understand the state of MySQL backends, though. For that you need to use external scripts like Percona’s clustercheck or Severalnines’ clustercheck-iptables.

On the other hand, should you want to build your application to be more database-aware, you can use mysqlnd and manage complex HA topologies from your application. Finally, SQL-aware load balancers like ProxySQL or MaxScale can be used to move complexity away from the application and, for example, perform read-write split in the proxy layer. They detect the MySQL state and can make necessary changes in routing — such as moving writes to a newly promoted master. They can also empower the DBA by allowing him to (for example) rewrite queries as they pass the proxy.
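To illustrate, a read/write split of the kind Krzysztof describes can be sketched in ProxySQL’s SQL-like admin interface; the hostgroup numbers here (0 = writer, 1 = readers) are assumptions for the example, not a recommendation:

-- Locking reads go to the writer hostgroup; other SELECTs go to the readers.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (10, 1, '^SELECT .* FOR UPDATE$', 0, 1),
       (20, 1, '^SELECT', 1, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;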

Percona: Where do you see load balancing technologies heading in order to deal with some of the database trends that keep you awake at night?

Krzysztof: Personally, I love to see the “empowerment” of DBAs. For example, ProxySQL not only routes packets and helps to maintain high availability (although this is still the main role of a proxy), it is also a flexible tool that can help a DBA tackle many day-to-day problems. An offending query? You can cache it in the proxy or you can rewrite it on the fly. Do you need to test your system before an upgrade, using real-world queries? You can configure ProxySQL to mirror the production traffic on a test system. You can use it to build a sharded environment. These things, in the past, typically weren’t possible for a DBA to do — the application had to be modified and new code had to be deployed. Activities like those take time, time that is very precious when the ops staff is dealing with databases on fire from a high load. Now I can do all that just through reconfiguring a proxy. Isn’t it great?
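For instance, an on-the-fly rewrite rule of the kind mentioned above might be sketched like this in ProxySQL’s admin interface (the table name and pattern are invented for this example):

-- Rewrite a known offending query as it passes the proxy, capping its result set.
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, replace_pattern, apply)
VALUES (30, 1, '^SELECT \* FROM big_table$', 'SELECT * FROM big_table LIMIT 1000', 1);
LOAD MYSQL QUERY RULES TO RUNTIME;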

Percona: What are you looking forward to the most at Percona Live Europe this year?

Krzysztof: The Percona Live Europe agenda looks great and, as always, it’s a hard choice to decide which talks to attend. I’d love to learn more about the upcoming MySQL 8.0: there are quite a few talks covering both performance improvements and different features of 8.0. There’s also a new Galera version in the works with great features like non-blocking DDLs, so it would be great to see what’s happening there. We’re also excited to run the “Become a MySQL DBA” tutorial again (our blog series on the same topic has been very popular).

Additionally, I’ve been working within the MySQL community for a while and I have many friends who, unfortunately, I don’t see very often. Percona Live Europe is an event that brings us together and where we can catch up. I’m definitely looking forward to this.

You can read more of Krzysztof’s thoughts on load balancers at the Severalnines blog.

Want to find out more about Krzysztof, load balancers and Severalnines? Register for Percona Live Europe 2016, and come see his talk MySQL Load Balancers – MaxScale, ProxySQL, HAProxy, MySQL Router & nginx: a close up look.

Use the code FeaturedTalk and receive €25 off the current registration price!

Percona Live Europe 2016: Amsterdam is the premier event for the diverse and active open source database community. The conferences have a technical focus with an emphasis on the core topics of MySQL, MongoDB, and other open source databases. Percona Live tackles subjects such as analytics, architecture and design, security, operations, scalability and performance. It also provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs. This conference is an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience. All of these people help you learn how to tackle your open source database challenges in a whole new way.

This conference has something for everyone!

Percona Live Europe 2016: Amsterdam is October 3-5 at the Mövenpick Hotel Amsterdam City Centre.



PostgreSQL Day at Percona Live Amsterdam 2016

Introducing PostgreSQL Day at Percona Live Europe, Amsterdam 2016.

As modern open source database deployments change, often including more than just a single open source database, Percona Live has also changed. We changed our model from being a purely MySQL-focused conference (with variants) to include a significant amount of MongoDB content. We’ve also expanded our overview of the open source database landscape and included introductory talks on many other technologies. These include practices we commonly see used in the world, and new, up-and-coming solutions we think show promise.

In getting Percona Live Europe 2016 ready, something unexpected happened: we noticed the PostgreSQL community come together and submit many interesting talks about this great open source database technology. This effort on their part pushed us to go further than we initially planned this year, and we’ve put together a full day of PostgreSQL talks. At Percona Live Europe this year, we will be running our first ever PostgreSQL Day on October 4th!

Some folks have been questioning this decision: do we really need so much PostgreSQL content? Isn’t there some tension between the MySQL and PostgreSQL communities? (Here is a link to a very recent example.)  

While it might be true (and I think it is) that some contention exists between these groups, I don’t think isolation and indifference are the answers to improving cooperation. They certainly aren’t the best plan for the open source database community at large, because there is too much we can learn from each other — especially when it comes to promoting open source databases as a real alternative to commercial ones.

Every open source community has its own set of “zealots” (or maybe just “strict adherents”). But our dedication to one particular technology shouldn’t blind us to the value of others. The MySQL and PostgreSQL communities have both gained support through substantial large-scale deployments. There are more and more engineers joining those communities, looking to find better solutions for the problems they face and to learn from others’ technologies.

Through the years I have held very productive discussions with people like Josh Berkus, Bruce Momjian, Oleg Bartunov,  Ilya Kosmodemiansky and Robert Treat (to name just a few) about how things are done in MySQL versus PostgreSQL — and what could be done better in both.

At PGDay this year, I was glad to see Alexey Kopytov speaking about what MySQL does better; it got some very constructive conversations going. I was also pleased that my keynote on Migration to the Open Source Databases at the same conference was well attended and also sparked some good conversations.

I want this trend to continue to grow. This is why I think running a PostgreSQL Day as part of Percona Live Europe, Amsterdam is an excellent development. It provides an outstanding opportunity for people interested in PostgreSQL to further their knowledge through exposure to  MySQL, MongoDB and other open source technologies. This holds true for folks attending the conference mainly as MySQL and MongoDB users: they get exposed to the state of PostgreSQL in 2016.

Even more, I hope that this new track will spark productive conversations in the hallways, at lunches and other events between the speakers themselves. It’s really the best way to see what we can learn from each other. In the end, it benefits all technologies.

I believe the whole conference is worth attending, but for people who only wish to attend our new  PostgreSQL Day on October 4th, you can register for a single day conference pass using the PostgreSQLRocks discount code (€200, plus VAT).  

I’m looking forward to meeting and speaking with members of the PostgreSQL community at Percona Live!



Planets9s - Join us next Tuesday for part 1 of our MySQL Query Tuning Trilogy

Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source database infrastructures.

Join us next Tuesday for part 1 of our MySQL Query Tuning Trilogy

Remember to join us next Tuesday, August 30th for the first part of our upcoming webinar trilogy on MySQL Query Tuning. In this first webinar we will discuss building, collecting, analysing, tuning and testing processes as well as the main tools involved, tcpdump and pt-query-digest. If you haven’t done so yet, sign up below to join us and get your questions answered around MySQL query tuning.

Register for the webinar

Deploying ClusterControl and MySQL-based systems on AWS using Ansible

We recently made a number of enhancements to the ClusterControl Ansible Role, so it now also supports automatic deployment of MySQL-based systems (MySQL Replication, Galera Cluster, NDB Cluster). The updated role uses the awesome ClusterControl RPC interface to automate deployments. It is available at Ansible Galaxy and Github.

Read the blog

Become a MongoDB DBA: backing up your data

In this blog post we describe what tools are available for making backups in MongoDB and what strategies to use. In previous posts of our MongoDB DBA series, we have covered Deployment, Configuration and Monitoring. The next step now is ensuring your data gets backed up safely. Find out how in this latest installment of our Become a MongoDB DBA blog series.

Read the blog

That’s it for this week! Feel free to share these resources with your colleagues and follow us on our social media channels.

Have a good end of the week,

Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB


Data Streaming with MariaDB MaxScale

Thu, 2016-08-25 08:31 by Massimiliano Pinto

While traditional analytics databases exist, Apache Hadoop is becoming the de facto data store for big data. It’s an open-source software framework for distributed storage and distributed processing of very large data sets, and there is a need for the ability to transfer data from a MariaDB/MySQL operational data store into Hadoop. While tools such as Apache Sqoop exist to export data out of MariaDB/MySQL into Hadoop, their performance is not suitable for streaming or real-time data transfer, as they operate as batch applications.

To address this need, the MariaDB MaxScale team has designed a modular solution with MariaDB MaxScale that streams binlog events coming from the master database to the data lake via messaging systems such as Kafka’s distributed broker. The binlog events for inserts, updates and deletes are converted into AVRO or JSON format before they are forwarded to the data lake. Kafka is used as a data ingestion pipeline for distributed data processing environments. MariaDB MaxScale is the Kafka producer, whereas big data platforms such as Hadoop, Cassandra, Spark or any other analytic database are the consumer applications consuming the data through the Kafka broker.

MariaDB MaxScale Plugins for Data Streaming

The current MariaDB MaxScale binlog router provides change data capture and flow from the MariaDB master database towards MariaDB slave databases, while caching binlog events on the MaxScale server itself. By extending this approach, two new plugins are introduced in MariaDB MaxScale:

  • Avro Router: To convert the change data events from binlog events to AVRO and JSON events.
  • Change Data Protocol Plugin: To publish AVRO or JSON change data events to registered clients via CDC Client API.


The avrorouter is a new MaxScale component that has been added to convert MySQL binary log events into AVRO records: it’s basically a MariaDB 10.0/10.1-compatible binary log to AVRO file converter. It consumes binary logs from a local directory and transforms them into a set of AVRO files. These files can then be queried by clients for various purposes.

This router is intended to be used in tandem with the Binlog Server. The Binlog Server can connect to a master server and request binlog records. These records can then be consumed by the “avrorouter” directly from the binlog cache of the Binlog Server. This allows MariaDB MaxScale to automatically transform binlog events on the master to local Avro format files.

The converted AVRO files can be requested any time with the new CDC protocol plugin. This protocol should be used to communicate with the avrorouter. The clients can request either AVRO or JSON format data streams from a database table.

AVRO

AVRO is a binary Object Container File that consists of a file header and one or more file data blocks. The header contains the JSON version of the schema.

Note: Each AVRO file contains data related to only ONE table.

AVRO relies on schemas: the schema is stored together with the data when writing, and is used again when the data is read. AVRO schemas are defined with JSON. In the context of the MariaDB MaxScale binlog-to-AVRO conversion, each AVRO file contains data related to one table. For each master database table, there is a corresponding AVRO schema file on MariaDB MaxScale. A cdc-schema utility is provided to generate the AVRO schemas in MariaDB MaxScale from the MariaDB database tables.
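As an illustration (a hand-written sketch, not literal cdc-schema output), an AVRO schema for a table with an integer id column and a nullable string column might look like this; note the ["null", "string"] union, which is how AVRO expresses a nullable field:

{
  "type": "record",
  "name": "mytable",
  "fields": [
    { "name": "id",   "type": "int" },
    { "name": "data", "type": ["null", "string"] }
  ]
}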


Next up, we’ll have blogs on how to use MariaDB MaxScale for data streaming, including:

  • MariaDB MaxScale 2.0 Configuring MariaDB Master and MariaDB MaxScale for Data Streaming Service
  • How to Stream Change Data through MariaDB MaxScale using CDC API
  • Real-time Data Streaming to Kafka with MaxScale CDC


Documentation Links

MariaDB MaxScale 2.0:

Change Data Capture (CDC) Protocol

Avro Router

Avro Route Tutorial

AVRO:

Apache AVRO

Tags: Big Data, MaxScale, Proxy, Replication

About the Author

Massimiliano is a Senior Software Solutions Engineer working mainly on MaxScale. Massimiliano has worked for almost 15 years in web companies, playing the roles of technical leader and software engineer. Prior to joining MariaDB he worked at Banzai Group and Matrix S.p.A., big players in the Italian web industry. He is still a guy who is very fond of the terminal window on his Mac, and his skills include Apache modules and PHP extensions.



Shinguz: Beware of large MySQL max_sort_length parameter

Today we saw a very interesting phenomenon at a customer site. The customer complained that MySQL kept producing errors of the following type:

[ERROR] mysqld: Sort aborted: Error writing file '/tmp/MYGbBrpA' (Errcode: 28 - No space left on device)

After a first investigation we found that df -h /tmp showed a full disk from time to time, but we could not see any files with ls -la /tmp/MY*.

After some more investigation we even found, in the Slow Query Log, the query that was producing the problem. It looked similar to this:

SELECT * FROM test ORDER BY field5, field4, field3, field2, field1;

Now we were able to simulate the problem at will with the following table:

CREATE TABLE `test` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `data` varchar(64) DEFAULT NULL,
  `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `field1` varchar(16) DEFAULT NULL,
  `field2` varchar(16) DEFAULT NULL,
  `field3` varchar(255) DEFAULT NULL,
  `field4` varchar(255) DEFAULT NULL,
  `field5` varchar(32) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=8912746 DEFAULT CHARSET=utf8;

And we saw the query in SHOW PROCESSLIST:

| Query | 26 | Creating sort index | select * from test order by field5, field4, field3, field2, field1 |

But we still could not see who, or rather how, the hell mysqld was filling our disk!

I further remembered that I had seen some strange settings in the my.cnf earlier, when we reviewed the database configuration, but I had somehow ignored them.

[mysqld]
max_sort_length  = 8M
sort_buffer_size = 20M

Now I remembered those settings. We changed max_sort_length back to the default of 1k, and suddenly our space problems disappeared!
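For reference, this is roughly how the change can be checked and applied at runtime (new connections pick up the new global value); a server restart with a corrected my.cnf achieves the same:

SHOW GLOBAL VARIABLES LIKE 'max_sort_length';  -- shows 8388608 (8M) before the change
SET GLOBAL max_sort_length = 1024;             -- back to the 1k default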

We played a bit around with different values of max_sort_length and got the following execution times for our query:

max_sort_length   execution time [s]   comment
 64                 8.8 s
128                 8.2 s
256                 9.3 s
512                11.8 s
 1k                14.9 s
 2k                20.0 s
 8k               129.0 s
 8M                75.0 s              disk full (50 G)
Conclusion

We set max_sort_length back to its default. Our problems disappeared, and we got working and much faster SELECT queries.

Do not needlessly change default values of MySQL without proving the impact. It can become worse than before!!!

The default value of max_sort_length is a good compromise between performance and an appropriate sort length.

Addendum

What I really did not like about this solution was that I did not understand how the problem occurred. So I did some more investigation. We discussed back and forth whether this could be because of XFS, because of sparse files, or because of some kind of memory-mapped files (see also man mmap).

In the end I had the idea of looking at the lsof output while my query was running:

mysql> SELECT * FROM test ORDER BY field5, field4, field3, field2, field1;
ERROR 3 (HY000): Error writing file '/tmp/MYBuWcXP' (Errcode: 28 - No space left on device)

shell> lsof -p 14733
COMMAND   PID   USER  FD   TYPE DEVICE SIZE/OFF   NODE     NAME
mysqld    14733 mysql 32u  REG  8,18   9705619456 30147474 /tmp/MYck8vf4 (deleted)
mysqld    14733 mysql 49u  REG  8,18   749797376  30147596 /tmp/MYBuWcXP (deleted)

So it looks like there were some deleted files that were still growing!

Further information from the IRC channel led me to the libc temporary files (see also man 3 tmpfile).

And some hints from MadMerlin|work pointed me to:

shell> ls /proc/<pid>/fd

Where you can also see those temporary files.

Thanks to MadMerlin|work for the hints!


MySQL Group Replication Docker Images

The MySQL development team recently made a new labs release of the Group Replication plugin for MySQL Server. This is a preview of a new plugin that adds virtually synchronous replication with support for multi-master or active/active update-anywhere replication groups to the already strong lineup of native replication options in MySQL. Docker is a great […]

HowTo: Starting with MySQL EF Core provider and Connector/Net 7.0.4

This article shows the essential parts of the configuration for a quick console application using the .NET Core command-line (CLI) tools. The application shows how to create an EF Core model and perform some basic operations, and it can easily be done on Windows, Linux or OS X.

1. Requirements

Install the SDK, framework and tools:

Windows:
Install
– .NET Core SDK for Windows direct link here
– Visual Studio 2015 Update 3; here’s the publication page with more information about the update.
– .NET Core 1.0 tooling for Visual Studio direct link here

Linux:
An official detailed guide can be followed here.

OS X:
Follow the instructions and download the official installer here.

2. Create the console application

This tutorial shows instructions for developing the application on Windows, but the same application can be built by simply changing the commands to their equivalents on the corresponding platform.

1. Create a folder where the new project will be saved.

On a command prompt type:
> mkdir efcore

2. Create the project
> cd efcore
> dotnet new

3. Create an appsettings.json file to hold your connection string information. Example:

{ "ConnectionStrings": { "SampleConnection": "server=localhost;userid=root;pwd=;port=3305;database=sakila;sslmode=none;" } }

Note: Adjust these settings according to your MySQL server configuration and user.

Notice the sslmode key in the connection string. This tells the connector to use a plain connection instead of a secured one. If you would like to use SSL, change this value to Preferred.

4. Modify project.json to add the EntityFrameworkCore dependencies, add the MySQL references, and specify that the appsettings.json file must be copied to the output (the buildOptions section) so it becomes available to the application when it is built.

The result should look like the following:

{ "version": "1.0.0-*", "buildOptions": { "debugType": "portable", "emitEntryPoint": true, "copyToOutput": { "include": "appsettings.json" } }, "dependencies": { "Microsoft.Extensions.Configuration": "1.0.0", "Microsoft.Extensions.Configuration.Json": "1.0.0", "Microsoft.EntityFrameworkCore": "1.0.0", "MySql.Data.Core": "7.0.4-IR-191", "MySql.Data.EntityFrameworkCore": "7.0.4-IR-191" }, "frameworks": { "netcoreapp1.0": { "dependencies": { "Microsoft.NETCore.App": { "type": "platform", "version": "1.0.0" } }, "imports": [ "dnxcore50", "portable-net452+win81"] } } }

5. After modifying project.json, a restore of the application’s packages is mandatory. This command downloads all the needed dependencies. In the console window, type the following:

> dotnet restore

6. Create a database context for Entity Framework
Inside a text editor, or the IDE you prefer to use, create an EmployeesContext.cs file.
Here is an example of a context with some entities.

namespace ConsoleApplication
{
    using System.ComponentModel.DataAnnotations;  // needed for [MaxLength]
    using Microsoft.EntityFrameworkCore;

    /// <summary>
    /// The Entity Framework context with an Employees DbSet
    /// </summary>
    public class EmployeesContext : DbContext
    {
        public EmployeesContext(DbContextOptions<EmployeesContext> options)
            : base(options)
        { }

        public DbSet<Employee> Employees { get; set; }
    }

    /// <summary>
    /// Factory class for EmployeesContext
    /// </summary>
    public static class EmployeesContextFactory
    {
        public static EmployeesContext Create(string connectionString)
        {
            var optionsBuilder = new DbContextOptionsBuilder<EmployeesContext>();
            optionsBuilder.UseMySQL(connectionString);

            // Ensure database creation
            var context = new EmployeesContext(optionsBuilder.Options);
            context.Database.EnsureCreated();
            return context;
        }
    }

    /// <summary>
    /// A basic class for an Employee
    /// </summary>
    public class Employee
    {
        public Employee() { }

        public int Id { get; set; }

        [MaxLength(30)]
        public string Name { get; set; }

        [MaxLength(50)]
        public string LastName { get; set; }
    }
}

7. Replace the contents of the Program.cs file with the following code:

namespace ConsoleApplication
{
    using System;
    using Microsoft.Extensions.Configuration;

    public class Program
    {
        public static void Main(string[] args)
        {
            var builder = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
            var configuration = builder.Build();
            string connectionString = configuration.GetConnectionString("SampleConnection");

            // Create an employee instance and save the entity to the database
            var entry = new Employee() { Name = "John", LastName = "Winston" };
            using (var context = EmployeesContextFactory.Create(connectionString))
            {
                context.Add(entry);
                context.SaveChanges();
            }
            Console.WriteLine($"Employee was saved in the database with id: {entry.Id}");
        }
    }
}

8. Build the application with the following command:

> dotnet build

9. Run the application

> dotnet run

The output from this console application is:

Employee was saved in the database with id: 1

Conclusion

Entity Framework is a well-known technology for data access in .NET applications. With this new Core version, developers can create applications for Windows, Linux and OS X without changing the application code. The MySQL provider and EF Core are a great combination for applications that target the new .NET Core version.

We love to hear your thoughts and any comments you have about our product. Please send us your feedback at our forums, file a bug at our community site, or leave us a comment on our social media channels on Twitter or Facebook.

Enjoy and thanks for the support!

On behalf of the MySQL Connector/Net team



Thoughts on MaxScale new license

MaxScale has been open source until now, just like all MariaDB projects. But the 2.0 version is released under a new license called BSL, which basically makes the covered work non-free until the Change Date (in this case 2019-01-01), when the license will be converted to GPL.

It looks open source friendly, after all. The license will be GPL, just be patient. And the code is available. Right?

No. Completely wrong. For plenty of reasons.

Some reasons

It is a lock-in. No matter how many times Monty repeats that there is no lock-in, we have brains. If you don’t allow anyone except yourself to fix bugs, it is a lock-in. If you force your users to buy your support, they won’t buy your competitors’ support.

MariaDB’s business moves to a non-free product. Yes, 1.4 is free and this won’t change. And yes, when 3.0 comes out, 2.0 will be free. But why should they maintain a free version if the money comes from non-free versions? Monty says that open source religion doesn’t put bread on the table. I suppose that maintaining free branches also doesn’t put bread on the table.

I wasn’t able to find any official EOL date for any MaxScale version – if there is one, please comment below.

MariaDB moves innovation to the non-free world. New features are non-free; when they grow old, they will be free. Monty also stated that this is the correct way to make money for a lot of projects, and he seems to advise this model to start-ups that use his venture capital, OpenOcean. Suddenly, BSL seems to be the only way for projects to survive. Is he protecting other projects’ interests, or using them for his own marketing?

MariaDB has accused Oracle several times. When Oracle implemented a couple of features and only distributed them in a non-GPL edition (the thread pool, PAM authentication), MariaDB pointed out that it had the same features as open source. Which was great. Except that… now MySQL Router is open source, and MaxScale 2.0 is not. Monty has several justifications for this, but I fail to understand why open core is evil and BSL is good.

I have mentioned Monty many times. Is this an attack against Monty? Definitely not, but all the articles I could find express Monty’s opinion, not MariaDB Corporation’s or anyone else’s. I cannot answer the silence.


What is the MariaDB Foundation?

MariaDB Corporation has the legal right to make MaxScale non-free. They own it. They sometimes call it MariaDB MaxScale; they can, as they also own the MariaDB trademark.

So, what’s the role of MariaDB Foundation?

They claim they safeguard MariaDB. They don’t mention the ecosystem, the community, or other tools. They don’t mention, of course, MaxScale. Which is quite strange: they have claimed that their model is the Apache Foundation, which supports an entire ecosystem in many ways and owns the trademarks.

Also, the board of directors has 6 members, 3 of whom are from MariaDB Corporation. In this situation, they cannot have an independent opinion on MariaDB Corporation’s actions.

A curious aspect is that they declare they follow the Ubuntu Code of Conduct. Please read its last paragraph and draw your own conclusions.

My position on MariaDB and MaxScale

I am still grateful to MariaDB Corporation for creating and maintaining MariaDB (and to some of their engineers for creating MySQL).

From a technical point of view, they have many interesting features that are not in MySQL. Some of them come from the community, for example the CONNECT engine and their implementation of encryption. And the reason is that MariaDB is very open to the community.

Which brings us to a less technical point of view: MariaDB’s openness. Their JIRA account allows us to see the bugs (including their current status…). You can also see who is working on what, when the next versions will be released, and what they will contain. The team is active on the mailing lists and IRC. The documentation is a wiki and its license is free.

I have been a MariaDB supporter for years. I wrote Mastering MariaDB and I am one of their Community Ambassadors, chosen by Colin Charles (who recently left MariaDB). Will my position on the MariaDB project change? I don’t know; it’s too early to answer. For sure, I won’t deny that its openness is amazing and should be a model for everyone. (And I hope this won’t change.)

Has my position on MaxScale changed? Of course it has. I wouldn’t use it for personal projects. I could still provide support for it but, given the license change, that seems unlikely to me. There are free alternatives: ProxySQL, MySQL Router, HAProxy. ProxySQL is by far the most interesting, if you ask me.

Has my position changed forever? The answer depends on another question: will MariaDB admit its big mistake? I have no logical reason to be optimistic, but I still hope it will. In the past they have apparently been open to criticism. After a complaint on this blog, they made the MaxScale binaries freely available, and I wrote a thank-you post. What I couldn’t know is that they were preparing to close MaxScale’s next versions.





Fired for supporting open source

I have been fired for speaking out about the GPL and MariaDB actions that have caused great harm to our ecosystem.

It has been pointed out that I have a non-compete agreement. None of my tools compete with MariaDB and I have no non-public knowledge of MariaDB technology. GPLScale remains free software under the GNU GPL license and it is my right to fork a github repo. I am not paid to work on GPLScale and I don't intend to get paid to maintain it by anyone. All my projects are labors of love.


Who wants to hire me? I'm dedicated, honest, open, and I have integrity. I'm willing to risk everything for what I believe in.

Email me at:
greenlion at gmail dot com


----


I have been a proponent of GPL for a long time, and I don't need publicity.
http://swanhart.livejournal.com/128586r.html

https://libraries.io/github/greenlion


https://blog.jcole.us/2008/07/23/on-mysql-forks-and-mysqls-non-open-source-documentation/
Justin Swanhart on July 23, 2008 at 12:09
Brian,

The “thing is” that we shouldn’t have to write it. MySQL may be ‘open source’, but it isn’t ‘open software’. With all the talk recently of proprietary extensions, the documentation licensing issues, the whole ‘enterprise’ vs ‘community’ debacle, well, MySQL has totally shown that they are not ‘Open Software’ company.

WHEN I WAS SEVENTEEN I BROKE THE LAW AND DROVE FROM PITTSBURGH PA TO BOSTON MA FOR THE FIRST FREE SOFTWARE FOUNDATION CONFERENCE. I sat down and had breakfast with STALLMAN, TORVALDS, AND RAYMOND! My whole life is open source. I've made my living off it and I intend to make sure others have the same opportunities that I have.


Posted via m.livejournal.com.



A look at Unicode with bash on Windows

When I wrote this blog about “bash on Windows” a few days ago, I omitted one issue that I already knew about, because it needs some elaboration that did not ‘fit in’ the previous blog. So I will do it here.

It is about Unicode. Unicode has always been a pain in “cmd”, and with the arrival of “bash” on Windows this has become more significant and important. Actually, on any recent *nix platform the user does not need to do anything to make Unicode work ‘out of the box’ in the console and display all or almost all scripts (though I have noticed that the completeness of the ‘monospace’ font mostly used in the Linux console varies between Linux distros – with some distros you will not get all the scripts shown here in the console).

But it is not so in “cmd”, and thus also not in bash on Windows. I will illustrate this using the ‘mysql’ command-line client in “bash” with a result set that requires Unicode for display. You may download an SQL dump of the simple MySQL table I use here.

See this MySQL result set in SQLyog:

In a Linux console it works fine – and without any special setting other than “SET NAMES UTF8;” in the ‘mysql’ client, in case the MySQL/MariaDB server runs with another default character set. This is the XFCE terminal in OpenSuSE 42.1 (LEAP) with MariaDB’s ‘mysql’ flavor connecting to the same server. Only one imperfection is seen here: Arabic is not written from right to left as it should be (you may check with Wikipedia or Google Translate if you are in doubt about who is right here: SQLyog or the Linux console). But OK – we are talking about a console and not a word processor.

In “cmd” (whether using “bash” or not) it does not:

Now, this is expected, actually. You will need to specify “cp 65001” for “cmd” to make it use UTF8. You will also need to replace the default “Consolas” font with a Unicode font (from the settings of the “cmd” window). “Lucida console” is normally recommended. But it does not work with non-latin/cyrillic scripts. The font is incomplete and/or “cmd” does not understand how to use it with non-latin/cyrillic scripts:

Now, you may actually use a TrueType font in “cmd”. Monospace TrueType fonts installed on the system are available from the console settings. Let’s try “Courier New” (which I used with SQLyog, where it worked perfectly). It makes little difference – only Arabic now comes to the console (and here, too, the characters are printed incorrectly from left to right).

There is no solution, because no font will display non-latin/cyrillic scripts properly even when “cmd” uses “cp 65001” and understands (or at least should understand) the characters as UTF8-encoded Unicode characters. Actually, it surprises me that accented latin strings from different ANSI codepages (Portuguese, Latvian, Czech and Turkish all belong to different ANSI codepages) and also Russian are printed correctly without specifying “cp 65001”. I think this may be a recent improvement in “cmd”. Now, “cmd” understands the characters correctly (in ‘UTF8 mode’); it just does not display them. This becomes clear if you copy from the console into any program/interface that handles UTF8 properly – a text editor, a browser form or whatever. See below in Notepad. And here I actually used the “Lucida console” font too, so the font is basically OK, it seems.

It very much looks like “cmd” was designed for ANSI and “Windows Unicode”/UTF16LE (where a single character is at most 2 bytes long) only, and fails with UTF8 characters 3 (or more) bytes long because of some internal truncation taking place. But even if so, this still does not explain that Arabic does not display with “Lucida console” but does with “Courier New”. So there is more to it with right-to-left writing systems.


I think it would be nice if Microsoft:

1) made UTF8 work in “cmd” with all (or almost all) scripts.
2) made “cmd” switch to the “Lucida console” font and to “cp 65001” automatically when “bash” is invoked. You may easily forget to specify “cp 65001” (because you don’t need to in other environments where bash runs), and then you will have to ‘exit’ from bash and start what you were doing all over again.

The post A look at Unicode with bash on Windows appeared first on Webyog Blog.



Binary Serializers

DBMS client applications need to store SQL query results in local memory or local files. The format is flat and the fields are ordered -- that's "serialization". The most important serializer formats use human-readable markup, like

<field>value</field>

and the important ones in the MySQL/MariaDB world are CSV (what you get with SELECT ... INTO OUTFILE or LOAD DATA INFILE), XML (what you get with --xml or LOAD XML), and JSON (for which there are various solutions if you don't use MySQL 5.7).
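For example, the CSV flavor can be produced server-side like this (table name, file path and delimiters are arbitrary choices for this sketch):

SELECT id, data
INTO OUTFILE '/tmp/result.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM t;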

The less important serializer format uses length, like
[length of value] [value]
and this, although it has the silly name "binary serialization", is what I want to talk about.

The length alone isn't enough; we also need to know the type, so we can decode it correctly. With CSV there are hints such as "is the value enclosed in quotes", but with binary serializers the value contains no hints. There has to be an indicator that says what the type is. There might be a single list of types for all of the records, in which case the format is said to "have a schema". Or there might be a type attached to each record, like
[type] [length of value] [value]
in which case the format is often called "TLV" (type-length-value).

Binary serializers are better than markup serializers if you need "traversability" -- the ability to skip to field number 2 without having to read every byte in field number 1. Binary TLV serializers are better than binary with-schema serializers if you need "flexibility" -- when not every record has the same number of fields and not every field has the same type. But of course TLV serializers might require slightly more space.

A "good" binary serializer will have two Characteristics:
#1 It is well known, preferably a standard with a clear specification, but otherwise a commonly-used format with a big sponsor. Otherwise you have to write your own library and you will find out all the gotchas by re-inventing a wheel. Also, if you want to ship your file for import by another application, it would be nice if the other application knew how to import it.
#2 It can store anything that comes out of MySQL or MariaDB.

Unfortunately, as we'll see, Characteristic #1 and Characteristic #2 are contradictory. The well-known serializers usually were made with the objective of storing anything that comes out of XML or JSON, or of handling quirky situations when shipping over a wire. So they're ready for things that MySQL and MariaDB don't generate (such as structured arrays) but not ready for things that MySQL and MariaDB might generate (such as ... well, we'll see as I look at each serializer).

To decide "what is well known" I used the Wikipedia article Comparison of data serialization formats. It's missing some formats (for example sereal) but it's the biggest list I know of, from a source that's sometimes neutral. I selected the binary serializers that fit Characteristic #1. I evaluated them according to Characteristic #2.

I'll look at each serializer. Then I'll show a chart. Then you'll draw a conclusion.

Avro

Has schemas. Not standard but sponsored by Apache.

I have a gripe. Look at these two logos. The first one is for the defunct British/Canadian airplane maker A.V.Roe (from Wikipedia). The second one is for the binary serializer format Apache Avro (from their site).

Although I guess that the Apache folks somehow have avoided breaking laws, I think that taking A.V.Roe's trademark is like wearing medals that somebody else won. But putting my gripe aside, let's look at a technical matter.

The set of primitive type names is:
null: no value

Well, of course, in SQL NULL is not a type, it is a value. This is not a showstopper, because I can declare a union of a null type and a string type if I want to allow nulls and strings in the same field. Um, okay. But then comes the encoding rule:

null is written as zero bytes.

I can't read that except as "we're like Oracle 12c, we think empty strings are NULLs".

ASN.1

TLV. Standard.

ASN means "abstract syntax notation" but there are rules for encoding too, and ASN.1 has a huge advantage: it's been around for over twenty years. So whenever any "why re-invent the wheel?" argument starts up on any forum, somebody is bound to ask why all these whippersnapper TLVs are proposed considering ASN.1 was good enough for grand-pappy, eh?

Kidding aside, it's a spec that's been updated as recently as 2015. As usual with official standards, it's hard to find a free-and-legitimate copy, but here it is: the link to a download of "X.690 (08/2015) ITU-T X.690 | ISO/IEC 8825-1 ISO/IEC 8825-1:2015 Information technology -- ASN.1 encoding rules: Specification of Basic Encoding Rules (BER), Canonical Encoding Rules (CER) and Distinguished Encoding Rules (DER)" from the International Telecommunication Union site: http://www.itu.int/rec/T-REC-X.690-201508-I/en.

It actually specifies how to handle exotic situations, such as
** If it is a "raw" string of bits, are there unused bits in the final byte?
** If the string length is greater than 2**32, is there a way to store it?
** Can I have a choice between BMP (like MySQL UCS2) and UTF-8 and other character sets?
** Can an integer value be greater than 2**63?
... You don't always see all these things specified except in ASN.1.

Unfortunately, if you try to think of everything, your spec will be large and your overhead will be large, so competitors will appear saying they have something "simpler" and "more compact". Have a look at trends.google.com to see how ASN.1 once did bestride the narrow world like a colossus, but nowadays is not more popular than all the others.

BSON

TLV. Sponsored by MongoDB.

Although BSON is "used mainly as a data storage and network transfer format in the MongoDB [DBMS]", anybody can use it. There's a non-Mongo site which refers to independent libraries and discussion groups.

BSON is supposed to make you think "binary JSON" but in fact all the binary serializers that I'm discussing (and a few that I'm not discussing, such as UBJSON) can do a fair job of representing JSON-marked-up-text in binary format. Some people even claim that MessagePack does a better job of that than BSON does.

There is a "date" but it is milliseconds since the epoch, so it might be an okay analogue for MySQL/MariaDB TIMESTAMP but not for DATETIME.

CBOR

TLV. Proposed standard.

CBOR is not well known but there's an IETF Internet Standards Document for it (RFC 7049 Concise Binary Object Representation), so I reckoned it's worth looking at. I don't give that document much weight, though -- it has been in the proposal phase since 2013.

The project site page mentions JSON data model, schemalessness, raw binary strings, and concise encoding -- but I wanted to see distinguishing features. There are a few.

I was kind of surprised that there are two "integer" types: one type is positive integers, the other type is negative integers.
In other words -5 is
[type = negative number] [length] [value = 5]
rather than the Two's Complement style
[type = signed number] [length] [value = -5]
but that's just an oddness rather than a problem.

There was an acknowledgment in the IETF document that "CBOR is inspired by MessagePack". But one of MessagePack's defects (the lack of a raw string type) has been fixed now. That takes away one of the reasons that I'd have for regarding CBOR as a successor to MessagePack.

Fast Infoset

TLV. Uses a standard.

After seeing so much JSON, it's nice to run into an international standard that "specifies a binary encoding format for the XML Information Set (XML Infoset) as an alternative to the XML document format". Okay, they get points for variety.

However, it's using ASN.1's underlying encoding methods, so I won't count it as a separate product.

MessagePack

TLV. Not standard but widely used.

MessagePack, also called MsgPack, is popular and is actually used as a data storage format for Pinterest and Tarantool.

It's got a following among people who care a lot about saving bytes; for example see this Uber survey where MessagePack beat out some of the other formats that I'm looking at here.

One of the flaws of MessagePack, from my point of view, is its poor handling for character sets other than UTF-8. But I'll admit: when MessagePack's original author is named Sadayuki Furuhashi, I'm wary about arguing that back in Japan UTF-8 is not enough. For some of the arguing that happened about supporting other character sets with MessagePack, see this thread. Still, I think my "The UTF-8 world is not enough" post is valid for the purposes I'm discussing.

And the maximum length of a string is 2**32-1 bytes, so you can forget about dumping a LONGBLOB. I'd have the same trouble with BSON but BSON allows null-terminated strings.

OPC-UA

TLV. Sort of a standard for a particular industry group.

Open Platform Communications - Unified Architecture has a Binary Encoding format.

Most of the expected types are there: boolean, integer, float, double, string, raw string, and datetime. The datetime description is a bit weird though: number of 100 nanosecond intervals since January 1, 1601 (UTC). I've seen strange cutover dates in my time, but this is a new one for me.

For strings, there's a way to indicate NULLs (hurrah).

I have the impression that OPC is an organization for special purposes (field devices, control systems, etc.) and I'm interested in general-purpose formats, so didn't look hard at this.

Protocol Buffers

Has schemas. Not standard but sponsored by Google.

Like Avro, Google's Protocol Buffers have a schema for the type and so they are schema + LV rather than TLV. But MariaDB uses them for its Dynamic Columns feature, so everybody should know about them.

Numbers and strings can be long, but there's very little differentiation -- essentially you have integers, double-precision floating point numbers, and strings. So, since I was objecting earlier when I saw that other serialization formats didn't distinguish (say) character sets, I have to be fair and say: this is worse. When the same "type" tag can be used for multiple different types, it's not specific enough.

Supposedly the makers of Protocol Buffers were asked why they didn't use ASN.1 and they answered "We never heard of it before". That's from a totally unreliable biased source but I did stop and ask myself: is that really so unbelievable? In this benighted age?

Thrift

Can be TLV but depends on protocol. Not standard but sponsored by Apache, used a lot by Facebook.

I looked in vain for what one might call a "specification" of Thrift's binary serialization, and finally found an old stackoverflow discussion that said: er, there isn't any. There's a "Thrift Missing Guide" that tells me the base types, and a Java class describer for one of the protocols to help me guess the size limits.

Thrift's big advantage is that it's language neutral, which is why it's popular and there are many libraries and high-level tutorials. That makes it great as a communication format, which is what it's supposed to be. However, the number of options is small and the specification is so vague that I can't call it "good" according to the criteria I stated earlier.

The Chart

I depend on each serializer's specification, I didn't try anything out, I could easily have made some mistakes.

For the "NULL is a value" row, I say No (and could have added "Alackaday!") for all the formats that say NULL is a data type. Really the only way to handle NULL is with a flag so this would be best:
[type] [length] [flag] [value]
and in fact, if I was worried about dynamic schemas, I'd be partial to Codd's "two kinds of NULLs" arguments, in case some application wanted to make a distinction between not-applicable-value and missing-value.

For most of the data-type rows, I say Yes for all the formats that have explicit, defined support. This does not mean that it's impossible to store the value otherwise -- for example, it's easy to store a BOOLEAN in an integer or with a user-defined extension -- but then you're not using the format specification, so some of its advantages are lost.

For dates (including DATETIME, TIMESTAMP, DATE, etc.), I did not worry whether the precision and range were less than what MySQL or MariaDB can handle. But for DECIMAL, I say No if the maximum number of digits is 18 or if there are no post-decimal digits.

For LONGBLOB, I say No if the maximum number of bytes is less than 2**32.

For VARCHAR, I say Yes if there's any way to store encoded characters (rather than just bytes, which is what BINARY and BLOB hold). In the "VARCHAR+" row I say Yes if there is more than one character set, although this doesn't mean much -- the extra character sets don't match MySQL/MariaDB's variety.

I'll say again that specifications allow for "extensions" -- for example, with ASN.1 you can define your own tags -- but I'm only looking at what's explicit in the specification.

                 Avro  ASN.1  BSON  CBOR  MessagePack  OPC UA  Protocol Buffers  Thrift
NULL is a value  no    no     no    no    no           YES     no                no
BOOLEAN          YES   YES    YES   YES   YES          YES     no                YES
INTEGER          YES   YES    YES   YES   YES          YES     YES               YES
BIGINT           YES   YES    YES   YES   YES          YES     YES               YES
FLOAT            YES   YES    YES   YES   YES          YES     no                no
DOUBLE           YES   YES    YES   YES   YES          YES     YES               YES
BINARY / BLOB    YES   YES    YES   YES   YES          YES     YES               YES
VARCHAR          YES   YES    YES   YES   YES          YES     no                YES
Dates            no    YES    YES   YES   no           YES     no                no
LONGBLOB         YES   YES    no    YES   no           no      YES               no
DECIMAL          no    YES    no    YES   no           no      no                no
VARCHAR+         no    YES    no    no    no           YES     no                no
BIT              no    YES    no    no    no           no      no                no

Your Conclusion

You have multiple choices:

(1) Peter Gulutzan is obsessed with standards and exactness;
(2) well, we might as well use one of these despite its defects;
(3) we really need yet another binary serializer format.

ocelotgui news

Recently there were some changes to the ocelot.ca site to give more prominence to the ocelotgui manual, and a minor release -- ocelotgui version 1.02 -- happened on August 15.



How to stop offending queries with ProxySQL

This blog discusses how to find and address badly written queries using ProxySQL.

All of us are very good at writing good queries. We know this to always be true!

But sometimes a bad query escapes our control and hits our database. There is the new guy, the probie, who just joined the company and is writing all his code with SELECT * and no WHERE clause. We've told him "STOP" millions of times, but he refuses to listen. Or there is a new piece of code injected into production, and it will take the developers some time to isolate and fix the part that is sending killer queries to our database.

The above are true stories: things that happen every day in at least a few environments.

Isolating the bad query isn't the main problem: we can do that very fast. The issue is identifying the code that is generating the query, and disabling that code without killing the whole application.

That part can take days.

ProxySQL allows us to act fast and stop any offending query in seconds. I will show you how.

Let us say our offending query does this:

SELECT * from history;

where history is a two-terabyte table in our DWH, partitioned by year.

That query will definitely create some issues on the database. It's easy to identify it as badly designed.

Unfortunately, it was inserted into an ETL process that uses a multi-threaded, auto-recovering approach: each time you kill the query, the process restarts it. It takes the developers some time to stop that code, and in the meantime the reporting system serving your company in real time is painfully slow (or down).

With ProxySQL, you can stop that query in one second:

-- match_pattern is a regular expression, so the asterisk must be escaped:
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, error_msg, apply)
VALUES (89, 1, '^SELECT \* from history$', 'Query not allowed', 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

Done: your database never receives that query again, and the application gets a message saying that the query is not allowed.

And look, it’s possible to do things even better:

-- Rule 89 tags matching queries with flagOUT=100 instead of deciding immediately (apply=0):
INSERT INTO mysql_query_rules (rule_id, active, match_digest, flagOUT, apply)
VALUES (89, 1, '^SELECT \* FROM history', 100, 0);

-- Rule 1001: tagged queries that contain a WHERE clause go to hostgroup 502.
INSERT INTO mysql_query_rules (rule_id, active, flagIN, match_digest, destination_hostgroup, apply)
VALUES (1001, 1, 100, 'WHERE', 502, 1);

-- Rule 1002: any other tagged query is blocked with an error message.
INSERT INTO mysql_query_rules (rule_id, active, flagIN, error_msg, apply)
VALUES (1002, 1, 100, 'Query not allowed', 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

In this case, ProxySQL checks every query matching SELECT * FROM history and, instead of deciding immediately, tags it with flagOUT = 100. The tagged query is then evaluated against the flagIN = 100 rules: if it contains a WHERE clause, it is redirected to the server for execution; if it does not, ProxySQL stops the query and sends an error message to the application.
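If you want to confirm that the rules are actually matching, ProxySQL's admin interface speaks the MySQL protocol and keeps per-rule hit counters in the stats_mysql_query_rules table. Here is a quick check from Node.js -- a sketch that assumes the mysql npm package and ProxySQL's default admin credentials (admin/admin on port 6032):

var mysql = require('mysql');
// Connect to ProxySQL's admin interface, not to MySQL itself.
var admin = mysql.createConnection({ host: '127.0.0.1', port: 6032, user: 'admin', password: 'admin' });
admin.query('SELECT rule_id, hits FROM stats_mysql_query_rules', function (err, rows) {
  if (err) throw err;
  console.log(rows); // how many times each rule has matched
  admin.end();
});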

Conclusion

This is a very basic example of an offending query, but I think it makes clear how ProxySQL helps any DBA stop one quickly in an emergency.
This gives the DBAs and the developers time to coordinate a better plan of action to permanently fix the issue.

References

https://github.com/sysown/proxysql
http://www.proxysql.com/2015/09/proxysql-tutorial-setup-in-mysql.html
https://github.com/sysown/proxysql/blob/v1.2.2/doc/configuration_howto.md
https://github.com/sysown/proxysql/blob/v1.2.2/INSTALL.md



Percona Server 5.7.14-7 is now available

Percona announces the GA release of Percona Server 5.7.14-7 on August 23, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Based on MySQL 5.7.14, including all the bug fixes in it, Percona Server 5.7.14-7 is the current GA release in the Percona Server 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.14-7 milestone at Launchpad.

Bugs Fixed:
  • Fixed potential cardinality 0 issue for TokuDB tables if ANALYZE TABLE finds only deleted rows and no actual logical rows before it times out. Bug fixed #1607300 (#1006, #732).
  • TokuDB database.table.index names longer than 256 characters could cause a server crash if background analyze table status was checked while running. Bug fixed #1005.
  • PAM Authentication Plugin would abort authentication while checking UNIX user group membership if there were more than a thousand members. Bug fixed #1608902.
  • If DROP DATABASE failed to delete some of the tables in the database, the partially-executed command was logged in the binlog as DROP TABLE t1, t2, ... for the tables that were dropped successfully. A slave could fail to replicate such a DROP TABLE statement if the dropped tables had foreign key relationships and the slave's schema differed from the master's. The fix makes the master check whether any of the tables in the database being dropped participate in a foreign key relationship, and fail the DROP DATABASE statement immediately if so. Bug fixed #1525407 (upstream #79610).
  • PAM Authentication Plugin didn’t support spaces in the UNIX user group names. Bug fixed #1544443.
  • Due to security reasons ld_preload libraries can now only be loaded from the system directories (/usr/lib64, /usr/lib) and the MySQL installation base directory.
  • In the client library, any EINTR received during network I/O was not handled correctly. Bug fixed #1591202 (upstream #82019).
  • SHOW GLOBAL STATUS was locking more than the upstream implementation which made it less suitable to be called with high frequency. Bug fixed #1592290.
  • The included .gitignore in the percona-server source distribution had a line *.spec, which means someone trying to check in a copy of the percona-server source would be missing the spec file required to build the RPMs. Bug fixed #1600051.
  • Audit Log Plugin did not transcode queries. Bug fixed #1602986.
  • If the changed page bitmap redo log tracking thread stopped for any reason, shutdown would wait a long time for the log tracker thread to quit, which it never did. Bug fixed #1606821.
  • Changed page tracking was initialized too late by InnoDB. Bug fixed #1612574.
  • Fixed stack buffer overflow if --ssl-cipher had more than 4000 characters. Bug fixed #1596845 (upstream #82026).
  • Audit Log Plugin events did not report the default database. Bug fixed #1435099.
  • Canceling the TokuDB Background ANALYZE TABLE job twice or while it was in the queue could lead to server assertion. Bug fixed #1004.
  • Fixed various spelling errors in comments and function names. Bug fixed #728 (Otto Kekäläinen).
  • Implemented set of fixes to make PerconaFT build and run on the AArch64 (64-bit ARMv8) architecture. Bug fixed #726 (Alexey Kopytov).
Other bugs fixed:

#1542874 (upstream #80296), #1610242, #1604462 (upstream #82283), #1604774 (upstream #82307), #1606782, #1607359, #1607606, #1607607, #1607671, #1609422, #1610858, #1612551, #1613663, #1613986, #1455430, #1455432, #1581195, #998, #1003, and #730.

The release notes for Percona Server 5.7.14-7 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.



ConFoo Montreal 2017 Calling for Papers

ConFoo Montreal: March 8th-10th, 2017

Want to get your web development ideas in front of a live audience? The call for papers for the ConFoo Montreal 2017 web developer conference is open! If you have a burning desire to hold forth about PHP, Java, Ruby, Python, or any other web development topics, we want to see your proposals. The window is open only from August 21 to September 20, 2016, so hurry. An added benefit: If your proposal is selected and you live outside of the Montreal area, we will cover your travel and hotel.

You’ll have 45 minutes to wow the crowd, with 35 minutes for your topic and 10 minutes for Q&A. We can’t wait to see your proposals. Knock us out!

ConFoo Montreal will be held on March 8-10, 2017. For those of you who already know about our conference, be aware that this annual tradition will still be running in addition to ConFoo Vancouver. Visit our site to learn more about both events.



Budapest MySQL Meetup

I just created the Budapest MySQL Meetup group. I hope there will be interest in it; the first event is being organised. Check it out if you are near Budapest!

Register for Part 1 of our MySQL Query Tuning Trilogy

Remember to join us on Tuesday, August 30th for the first part of our upcoming webinar trilogy on MySQL Query Tuning. This first of three in-depth webinar sessions, led by Krzysztof Książek, Senior Support Engineer at Severalnines, covers the MySQL query tuning process and tools.

When done right, tuning MySQL queries and indexes can increase the performance of your application and decrease response times. We will cover this complex topic over the course of three 60-minute webinars, so feel free to also register for parts 2 & 3 here.

In this first part of the trilogy we will discuss the building, collecting, analysing, tuning, and testing processes, as well as the main tools involved: tcpdump and pt-query-digest. Register below to join us and get your questions about MySQL query tuning answered.

Date & Registration

Part 1: Query tuning process and tools

Tuesday, August 30th

Register

Feel free to also register for Parts 2 & 3.

Agenda
  • MySQL Query Tuning Trilogy: Process and tools
  • Query tuning process
    • Build
    • Collect
    • Analyse
    • Tune
    • Test
  • Tools
    • tcpdump
    • pt-query-digest
Speaker

Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience in managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard. He’s the main author of the Severalnines blog and webinar series: Become a MySQL DBA.

We look forward to “seeing” you there!

Tags:
