Planet MySQL

MySQL 8.0 all new Error Logging

The MySQL error log contains diagnostic messages such as errors, warnings, and notes that occur during MySQL startup, shutdown, and while the server is running. For example, if an InnoDB table is corrupted and needs to be repaired, this is recorded in the error log. MySQL 8.0 error logging uses the MySQL component architecture for log event filtering and writing. The MySQL system variable log_error_services controls which log components to enable and the rules for filtering the log events. The component table in the mysql system database contains information about the currently loaded components and shows which components have been registered with INSTALL COMPONENT. To confirm which components are installed, you may use the SQL below:

SELECT * FROM mysql.component;

Currently the available log components are in lib/plugins:

  • component_log_filter_dragnet.so
  • component_log_sink_json.so
  • component_log_sink_syseventlog.so
  • component_log_sink_test.so
Error Log configuration / system variables

The log_error_services system variable controls which log components to enable for error logging.

mysql> select @@log_error_services;
+----------------------------------------+
| @@log_error_services                   |
+----------------------------------------+
| log_filter_internal; log_sink_internal |
+----------------------------------------+
1 row in set (0.00 sec)

The default value indicates that log events first pass through the built-in filter component, log_filter_internal, and then through the built-in log writer component, log_sink_internal. Typically, a sink processes log events into log messages that have a particular format and writes these messages to its associated output, such as a file or the system log. The combination of log_filter_internal and log_sink_internal implements the default error log filtering and output behavior.

The output destination of the error log can be found in the system variable log_error. You can redirect error log output either to the system log or to a JSON file.
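For example, you can quickly check the current destination (the path will differ per installation; the value stderr means the log goes to the console):

SELECT @@log_error;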

You can make mysqld write the error log to the system log (Event Log on Windows, syslog on Linux and Unix systems):

INSTALL COMPONENT 'file://component_log_sink_syseventlog';
SET GLOBAL log_error_services = 'log_filter_internal; log_sink_syseventlog';

You can enable JSON writer to record the error log by first loading the writer component and then modifying log_error_services system variable:

INSTALL COMPONENT 'file://component_log_sink_json';
SET GLOBAL log_error_services = 'log_filter_internal; log_sink_json';

traditional error log:

2019-03-10T08:36:59.950769Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 13222
2019-03-10T08:37:00.253523Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2019-03-10T08:37:00.267812Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.15' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
2019-03-10T08:37:00.429164Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/run/mysqld/mysqlx.sock' bind-address: '::' port: 33060
2019-03-10T08:37:37.635761Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL.
2019-03-10T08:37:37.985380Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 13410
2019-03-10T08:37:38.277912Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2019-03-10T08:37:38.291494Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.15' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.

JSON error log:

{ "prio" : 0, "err_code" : 10910, "source_line" : 2191, "source_file" : "mysqld.cc", "function" : "clean_up", "msg" : "/usr/sbin/mysqld: Shutdown complete (mysqld 8.0.15) MySQL Community Server - GPL.", "time" : "2019-03-10T09:21:18.722864Z", "err_symbol" : "ER_SERVER_SHUTDOWN_COMPLETE", "SQL_state" : "HY000", "subsystem" : "Server", "label" : "System" }

  • Filtering MySQL error with log_error_verbosity:
    • Errors only – 1
    • Errors and warnings  – 2
    • Errors, warnings and notes – 3
      • If log_error_verbosity is configured to 2 or higher, the MySQL server also logs statements that are unsafe for statement-based logging / replication. If the value is 3, the server additionally logs aborted connections and access-denied errors for new connection attempts. It is recommended to configure log_error_verbosity to 2 or higher to record detailed information about what is happening in your MySQL infrastructure (a one-line example follows right after this list).
  • How are MySQL 8.0 component-based error logging filters different?
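As suggested in the log_error_verbosity item above, raising the verbosity is a one-liner; in MySQL 8.0, SET PERSIST also keeps the setting across restarts:

SET PERSIST log_error_verbosity = 3;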

We are used to the default built-in error log filter, which is configured with the MySQL system variable log_error_verbosity (default 2). But MySQL 8.0 has another component that allows you to filter on rules that you define: log_filter_dragnet. Below I explain, step by step, how to set up rule-based error log filtering using log_filter_dragnet:

INSTALL COMPONENT 'file://component_log_filter_dragnet';
SET GLOBAL log_error_services = 'log_filter_dragnet; log_sink_internal';

To limit information events to no more than one per 60 seconds:

mysql> SET GLOBAL dragnet.log_error_filter_rules =
    -> 'IF prio>=INFORMATION THEN throttle 1/60.';
Query OK, 0 rows affected (0.00 sec)

To throttle  plugin-shutdown messages to only 5 per 5 minutes (300 seconds):

IF err_code == ER_PLUGIN_SHUTTING_DOWN_PLUGIN THEN throttle 5/300.

To throttle errors and warnings to 1000 per hour and information messages to 100 per hour:

IF prio <= INFORMATION THEN throttle 1000/3600 ELSE throttle 100/3600.

And we can check the currently configured dragnet rule:

mysql> select * from global_variables where VARIABLE_NAME like 'dragnet%'\G
*************************** 1. row ***************************
 VARIABLE_NAME: dragnet.log_error_filter_rules
VARIABLE_VALUE: IF prio>=INFORMATION THEN throttle 1/60.
1 row in set (0.00 sec)
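If you later want to drop the dragnet rules, you can fall back to the default filter and sink at any time (a simple rollback; the UNINSTALL step assumes nothing else still references the component):

SET GLOBAL log_error_services = 'log_filter_internal; log_sink_internal';
UNINSTALL COMPONENT 'file://component_log_filter_dragnet';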

Conclusion

MySQL 8.0 error logging services are more powerful than in earlier versions, and you can now filter error logging far more precisely by combining existing log components or creating your own.

 


Laravel Eloquent Collection Tutorial With Example | Laravel 5.8 Guide

Laravel Eloquent Collection Tutorial With Example is today’s topic. The Eloquent collection object extends the Laravel base collection, so it naturally inherits the dozens of methods used to work with an underlying array of Eloquent models fluently. All multi-result sets returned by Eloquent are instances of an Illuminate\Database\Eloquent\Collection object, including results retrieved via the get method or accessed via a relationship.

Laravel Eloquent Collection Tutorial

We will start this tutorial by installing fresh Laravel. Right now, Laravel 5.8 is the latest version of Laravel. If you are new to Laravel 5.8, then check out my Laravel 5.8 CRUD tutorial for starters. Install Laravel using the following command.

composer create-project --prefer-dist laravel/laravel blog

Okay, now set up the database inside the .env file.

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=blog
DB_USERNAME=root
DB_PASSWORD=root

Next step is to migrate the tables inside the database using the following command.

php artisan migrate

Seed Database

Laravel includes a simple method of seeding your database with test data using seed classes. All of the seed classes are stored in the database/seeds directory. Seed classes may have any name you wish, but should probably follow some sensible convention, such as UsersTableSeeder. By default, a DatabaseSeeder class is defined for you. From this class, you may use the call() method to run other seed classes, allowing you to control the seeding order.

Create a UsersTableSeeder.php file using the following command.

php artisan make:seeder UsersTableSeeder

We will use Model factories to generate fake data.

Now, write the following code inside the UsersTableSeeder.php file.

<?php
// UsersTableSeeder.php

use Illuminate\Database\Seeder;
use Illuminate\Support\Str;

class UsersTableSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        $faker = Faker\Factory::create();

        for ($i = 0; $i < 50; $i++) {
            \App\User::create([
                'name' => $faker->name,
                'email' => $faker->unique()->safeEmail,
                'email_verified_at' => now(),
                'password' => '$2y$10$TKh8H1.PfQx37YgCzwiKb.KjNyWgaHb9cbcoQgdIVFlYg7B77UdFm', // secret
                'remember_token' => Str::random(10),
            ]);
        }
    }
}

So, it will generate 50 random user records in the database.

Now, we need to modify the DatabaseSeeder.php file.

<?php
// DatabaseSeeder.php

use Illuminate\Database\Seeder;

class DatabaseSeeder extends Seeder
{
    /**
     * Seed the application's database.
     *
     * @return void
     */
    public function run()
    {
        $this->call(UsersTableSeeder::class);
    }
}

Before you seed the database, we need to regenerate Composer's autoloader using the dump-autoload command.

composer dump-autoload

Okay, now go to the terminal and run the seed file using the following command.

php artisan db:seed

It will generate fake data in the users table.

 

So now, we have test data to work with, and we can query the database using Laravel Eloquent Collection.

Query Data using Laravel Eloquent Collection

Now, write the following code inside the routes >> web.php file.

<?php
// web.php

Route::get('/', function () {
    $users = \App\User::all();

    foreach ($users as $user) {
        echo '<pre>';
        echo $user->name;
        echo '</pre>';
    }
});

In the above code, we are displaying only the name of the users in preformatted HTML view.

If you go to the root route, then you will see something like below image. Of course, the data will be different because the faker library randomly generates it. It will be 50 names.

 

All collections also serve as iterators, allowing us to loop over them as if they were simple PHP arrays.

Collections are much more potent than arrays and expose a variety of map / reduce operations that may be chained using an intuitive interface.
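As a small illustrative sketch (the route and the particular methods chained here are arbitrary, chosen just to show the fluent style), we could filter and transform the seeded users like this:

<?php
// web.php

Route::get('/user-emails', function () {
    return \App\User::all()
        ->filter(function ($user) {
            // keep only users whose name starts with "A"
            return strncasecmp($user->name, 'A', 1) === 0;
        })
        ->map(function ($user) {
            // transform each model into a simple string
            return $user->name . ' <' . $user->email . '>';
        })
        ->values(); // re-index; Laravel renders the collection as JSON
});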

Laravel chunk() Collection Method

The chunk() method breaks a collection into multiple, smaller collections of the given size.

Write the following code inside the web.php file.

<?php // web.php Route::get('/', function () { $users = \App\User::all(); $chunks = $users->chunk(2); $data = $chunks->toArray(); echo '<pre>'; print_r($data); echo '</pre>'; });

The output of the above code is following.

 

The chunk() method is especially useful in views when working with a grid system such as Bootstrap.
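For example (a sketch only, assuming the view receives the $users collection and uses Bootstrap grid classes), you could render the users three per row like this:

@foreach ($users->chunk(3) as $chunk)
    <div class="row">
        @foreach ($chunk as $user)
            <div class="col-md-4">{{ $user->name }}</div>
        @endforeach
    </div>
@endforeach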

Laravel Custom Collections

If you need to use the custom Collection object with your extension methods, you may override the newCollection method on your model. See the following example of the User.php model.

<?php // User.php namespace App; use Illuminate\Notifications\Notifiable; use Illuminate\Contracts\Auth\MustVerifyEmail; use Illuminate\Foundation\Auth\User as Authenticatable; use App\CustomCollection; class User extends Authenticatable { use Notifiable; /** * The attributes that are mass assignable. * * @var array */ protected $fillable = [ 'name', 'email', 'password', ]; /** * The attributes that should be hidden for arrays. * * @var array */ protected $hidden = [ 'password', 'remember_token', ]; /** * The attributes that should be cast to native types. * * @var array */ protected $casts = [ 'email_verified_at' => 'datetime', ]; /** * Create a new Eloquent Collection instance. * * @param array $models * @return \Illuminate\Database\Eloquent\Collection */ public function newCollection(array $models = []) { return new CustomCollection($models); } }

Once you have defined the newCollection method, you will receive an instance of your custom collection any time Eloquent returns a collection instance of that model. If you would like to use a custom collection for every model in your application, you should override the newCollection method on a base model class that is extended by all of your models. In the above example, we have overridden it inside the User.php model class.

Okay, now create a CustomCollection.php file inside the app folder.

<?php
// CustomCollection.php

namespace App;

use Illuminate\Support\Collection;

class CustomCollection extends Collection
{
    public function gotAllUsers()
    {
        dd($this->items);
    }
}

Here, $this->items holds all the user records. We can access these records inside the CustomCollection class.

Finally, write the following code inside the web.php file.

<?php
// web.php

Route::get('/', function () {
    $users = \App\User::get();
    $users->gotAllUsers();
});

Refresh the root route, and the output is as follows; it dumps all 50 records.

 

Finally, Laravel Eloquent Collection Tutorial With Example is over.


MySQL and PHP Basics Part I

I have had some requests to write some blogs on the basics of using PHP and MySQL together.  This will not be a series for the experienced as it will start at a level where I will go into a lot of details but expect very few prerequisites from the reader.  If this is not you, please move on. If it is you and you read something you do not understand, please contact me to show me where I assumed too much.

PHP and MySQL are both in their mid twenties and both vital in the world of developers.  With the big improvements in PHP 7 and MySQL 8, I have found a lot of developers flocking to both but stymied by the examples they see, as there are many details not explained. So let's get to the explaining!
1. Use the latest software
If you are not using PHP 7.2 or 7.3 (or maybe 7.1) then you are missing out on features and performance.  The PHP 5.x series is deprecated, no longer supported, and is quickly disappearing.
MySQL 8.0 is likewise a big advancement but many sites are using earlier versions.  If you are not on 5.6 with plans to upgrade to 5.7 then you are about to be left behind.  If you are running an earlier version then you are using antique code. Also, get your MySQL from MySQL, as your Linux distro may not be current, especially the smaller ones.  The APT and DEB repos can be found here and there are Docker containers available too.
In many cases it is a fight to keep your core software tools up to date, or fairly up to date.  The time and heartache spent fighting problems resolved in a later version of the software, or creating a workaround for a feature not in your older version, will eventually bite you hard and should be viewed as a CLM (Career Limiting Move).  BTW, hiring managers look for folks with current skills, not skills in a decade-old version of the software.
2. Do not pick one connector over another (yet!)
PHP is a very rich environment for developers and it has three viable options for connecting to MySQL databases.  Please note that the older mysql connector is deprecated, no longer supported, and is to be avoided.  It was replaced by mysqli with mysqlnd (the native driver), which is officially supported by Oracle's MySQL engineers.  Next is the PDO (PHP Data Objects) connector, which is designed to be database agnostic; there is no overall maintainer watching over the code, although Oracle MySQL engineers do try to fix MySQL-related issues as long as the fixes do not impinge on other PDO code. And the newest, built on the new MySQL X DevAPI and X Protocol, is the X DevAPI connector, which supports both SQL and NoSQL interfaces.
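To give a feel for the first two, here is a minimal sketch of the same query written with mysqli and with PDO (host, credentials, database and table names are placeholders you would replace with your own):

<?php
// mysqli (native driver)
$mysqli = new mysqli('localhost', 'appuser', 'secret', 'appdb');
$result = $mysqli->query('SELECT name FROM customers LIMIT 3');
while ($row = $result->fetch_assoc()) {
    echo $row['name'], "\n";
}

// PDO
$pdo = new PDO('mysql:host=localhost;dbname=appdb', 'appuser', 'secret');
foreach ($pdo->query('SELECT name FROM customers LIMIT 3') as $row) {
    echo $row['name'], "\n";
}

The calls look similar, but error handling, prepared statement syntax and fetch modes differ between the connectors, which is where the confusion mentioned below usually starts.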

The good news for developers is that you can install all three, side by side, with no problem.  For those starting out, switching from one connector to another can be a little confusing, as they work just a bit differently from each other, but the ability to use two or more is a good skill to have.  Much like being able to drive a car with an automatic or a manual transmission, it gives you more professional skills.
Next time we will install PHP, MySQL, the three connectors, and some other cool stuff so you can start using PHP to access your MySQL servers.

Fun with Bugs #81 - On MySQL Bug Reports I am Subscribed to, Part XVII

Two weeks passed since my previous review of public MySQL bug reports I consider interesting enough to subscribe to them. Over this period I picked up a dozen or so new public bug reports that I'd like to briefly review today.

Here is my recent subscriptions list, starting from the oldest bug reports:
  • Bug #94431 - "Can't upgrade from 5.7 to 8.0 if any database have a hyphen in their name". It seems one actually needs a database like that created in MySQL 5.6 with at least one InnoDB table having FULLTEXT index to hit the problem. Great finding by Phil Murray. Note that after several unsuccessful attempts by others the bug was eventually reproduced and verified by Jesper Wisborg Krogh. Let's hope we'll see it fixed in MySQL 8.0.16.
  • Bug #94435 - "mysql command hangs up and cosume CPU almost 100%". It was reported by Masaaki HIROSE, whose previous related/similar Bug #94219 - "libmysqlclient enters and infinite loop and consume CPU usage 100%" ended up as "Not a bug" (wrongly, IMHO, as nobody cared enough to reproduce the steps instead of commenting on their correctness and checking something else). Bug reporter had not only insisted and provided all the details, but also tried to analyze the reasons of the bug and provided links to other potentially related bug reports (Bug #88428 - "mysql_real_query hangs with EINTR errno (using YASSL)" and Bug #92394 - "libmysqlclient enters infinite loop after signal (race condition)"). Great job and nice to see the bug "Verified" eventually.
  • Bug #94441 - "empty ibuf aio reads in innodb status". This regression vs MySQL 5.6 was noted by Nikolai Ikhalainen from Percona. MariaDB 10.3.7 is also affected, unfortunately:
    ...
    I/O thread 9 state: native aio handle (write thread)
    Pending normal aio reads: [0, 0, 0, 0] , aio writes: [0, 0, 0, 0] ,
     ibuf aio reads:, log i/o's:, sync i/o's:Pending flushes (fsync) log: 0; buffer pool: 0
    1344 OS file reads, 133 OS file writes, 2 OS fsyncs
    ...
  • Bug #94448 - "Rewrite LOG_BLOCK_FIRST_REC_GROUP during recovery may be dangerous.". Yet another MySQL 8 regression (not marked with "regression" tag) was found by Kang Wang.
  • Bug #94476 - "mysql semisync replication stuck with master in Waiting to finalize termination". It has "Need feedback" status at the moment. I've subscribed to this report from Shirish Keshava Murthy mostly to find out how a report that may look like a free support request will be processed by Oracle engineers. Pure curiosity, for now.
  • Bug #94504 - "AIO::s_log seems useless". This problem was reported by Yuhui Wang. It's a regression in a sense that part of the code is no longer needed (and seems not to be used) in MySQL 8, but still remains.
  • Bug #94541 - "Assertion on import via Transportable Tablespace". This bug reported by Daniël van Eeden was verified based on code review and some internal discussion. We do not know if any other version besides 5.7.25 is affected, though. The assertion itself:
    InnoDB: Failing assertion: btr_page_get_prev(next_page, mtr) == btr_pcur_get_block(cursor)->page.id.page_no()
    does not seem to be unique. We can find it in MDEV-18455 also (in another context).
  • Bug #94543 - "MySQL does not compile with protobuf 3.7.0". I care about build/compiling bugs historically, as I mostly use MySQL binaries that I built myself from GitHub source. So, I've immediately subscribed to this bug report from Laurynas Biveinis.
  • Bug #94548 - "Optimizer error evaluating JSON_Extract". This bug was reported by Dave Pullin. From my quick test it seems MariaDB 10.3.7 is also affected. Error message is different in the failing case, but the point is the same - the function is not evaluated if the column from derived table that is built using the function is not referenced in the SELECT list. This optimization is questionable and may lead to hidden "bombs" in the application code.
  • Bug #94550 - "generated columns referring to current_timestamp fail". I tried to check the simple test case from this bug report by Mario Beck on MariaDB 10.3.7, but it does not seem to accept a NOT NULL constraint for generated stored columns at all:
    MariaDB [test]> CREATE TABLE `t2` (
    -> `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    -> `content` varchar(42) DEFAULT NULL,
    -> `bucket` tinyint(4) GENERATED ALWAYS AS ((floor((to_seconds(`created_at`) / 10)) % 3)) STORED NOT NULL);
    ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
    corresponds to your MariaDB server version for the right syntax to use near 'NOT
    NULL)' at line 4
    I do not see this option in the formal syntax described here either. But in the case of MariaDB we can actually make sure the generated column is never NULL by adding a CHECK constraint like this:
    MariaDB [test]> CREATE TABLE `t2` (
        ->   `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
        ->   `content` varchar(42) DEFAULT NULL,
        ->   `bucket` tinyint(4) GENERATED ALWAYS AS ((floor((to_seconds(`created_at`) / 10)) % 3)) STORED);
    Query OK, 0 rows affected (0.434 sec)

    MariaDB [test]> INSERT INTO t2 (content) VALUES ("taraaaa");
    Query OK, 1 row affected (0.070 sec)

    MariaDB [test]> alter table t2 add constraint cnn CHECK (`bucket` is NOT NULL);
    Query OK, 1 row affected (1.159 sec)
    Records: 1  Duplicates: 0  Warnings: 0

    MariaDB [test]> INSERT INTO t2 (content) VALUES ("tarabbb");
    Query OK, 1 row affected (0.029 sec)

    MariaDB [test]> INSERT INTO t2 (content) VALUES ("");
    Query OK, 1 row affected (0.043 sec)

    MariaDB [test]> select * from t2;
    +---------------------+---------+--------+
    | created_at          | content | bucket |
    +---------------------+---------+--------+
    | 2019-03-09 17:28:03 | taraaaa |      0 |
    | 2019-03-09 17:29:43 | tarabbb |      1 |
    | 2019-03-09 17:29:50 |         |      2 |
    +---------------------+---------+--------+
    3 rows in set (0.002 sec)

    MariaDB [test]> show create table t2\G
    *************************** 1. row ***************************
           Table: t2
    Create Table: CREATE TABLE `t2` (
      `created_at` timestamp NOT NULL DEFAULT current_timestamp(),
      `content` varchar(42) DEFAULT NULL,
      `bucket` tinyint(4) GENERATED ALWAYS AS (floor(to_seconds(`created_at`) / 10)
    MOD 3) STORED,
      CONSTRAINT `cnn` CHECK (`bucket` is not null)

    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    1 row in set (0.011 sec)
    So, maybe after all we can state that MariaDB is NOT affected.
  • Bug #94552 - "innodb.virtual_basic fails when valgrind is enabled". I still wonder if anyone in Oracle runs MTR test suite on Valgrind-enabled (-DWITH_VALGRIND=1 cmake option) at least in the process of official release (and if they check the failures). It seems not to be the case based on this bug report from Manuel Ung.
  • Bug #94553 - "Crash in trx_undo_rec_copy". Bernardo Perez noted that as a side effect of still "Verified" Bug #82734 - "trx_undo_rec_copy needlessly relies on buffer pool page alignment" (that affects both MySQL 5.7 and 8.0) we may get crashes while working with generated columns. I hope to see them both fixed soon, but for now Bug #94553 has status "Need Feedback", probably in a hope to get a repeatable test case. I'll watch it carefully.
  • Bug #94560 - "record comparison in spatial index non-leaf rtree node seems incorrect". I doubt spatial indexes of InnoDB are widely used, and I have no doubts there are many bugs waiting to be discovered in this area. This specific bug was reported by Jie Zhou who had also suggested a fix.
  • Bug #94610 - "Server stalls because ALTER TABLE on partitioned table holds dict mutex". My former colleague Justin Swanhart reported this bug just yesterday, so no wonder it is not verified yet. It refers to a well known verified old Bug #83435 - "ALTER TABLE is very slow when using PARTITIONED table"  (that I've also subscribed to immediately) from Roel Van de Paar, affecting both MySQL 5.6 and 5.7. I hope to see this bug verified and fixed soon, as recently I see this kind of state for main thread:
    Main thread process no. 3185, id 140434206619392, state: enforcing dict cache limit
    too often in INNODB STATUS outputs to my liking...
As you could note, I still try to check (at least in some cases) if MariaDB is also affected by the same problem. I think it's a useful check both for me (as I work mostly with MariaDB as a support engineer) and for the reader (to know if switching to MariaDB may help in any way or if there are any chances for MariaDB engineers to contribute anything useful, like a fix).

"Hove, actually". For years residents of Hove used this humorous reply when they live in Brighton... "Regression, actually" is what I want to say (seriously) about every other MySQL bug report I subscribe to... So, you see Hove and many regression bugs above! To summarize:
  1. Sometimes Oracle engineers demonstrate proper collective effort to understand and carefully verify public bug reports. Good to know they are not ready to give up fast!
  2. I have to copy-paste this item from my previous post. As the list above proves, Oracle engineers still do not use "regression" tag when setting "Verified" status for obviously regression bugs. I think bug reporters should care then to always set it when they report regression of any kind.
  3. It seems there are no regular MTR test runs on Valgrind builds performed by Oracle engineers, or maybe they just ignore the failures.

Laravel 5.8 Form Validation Tutorial With Example

Laravel 5.8 Form Validation Tutorial With Example is today’s topic. The Laravel framework provides many different approaches to validate your application’s form data. By default, Laravel’s base controller class uses the ValidatesRequests trait, which provides a convenient method to validate incoming HTTP requests with a variety of powerful validation rules. We can use validation in different ways in Laravel: we can either validate inside the controller’s method or create a FormRequest class to validate the incoming requests. In this tutorial, we will cover both ways to validate in Laravel.

Laravel 5.8 Form Validation Tutorial

Okay, now the first step is to install Laravel 5.8. If you are new to Laravel 5.8, then check out my Laravel 5.8 CRUD tutorial on this blog. So, install the Laravel 5.8 using the following command. Right now, Laravel 5.8 is the latest version. So in the future, you may need to specify the version while installing the Laravel 5.8.

composer create-project --prefer-dist laravel/laravel blog

Okay, now go inside the project folder and open the project in the code editor.

Now, create a MySQL database and also connect that database to Laravel 5.8. Write the database credentials inside the .env file.

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=blog
DB_USERNAME=root
DB_PASSWORD=root

Create a FormController.php file using the following command.

php artisan make:controller FormController

Create two methods inside the FormController.php file.

<?php
// FormController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class FormController extends Controller
{
    public function create()
    {
    }

    public function store(Request $request)
    {
    }
}

Now, write the two routes inside the routes >> web.php file.

// web.php

Route::get('form', 'FormController@create')->name('form.create');
Route::post('form', 'FormController@store')->name('form.store');

Now, create a model and migration file using the following command.

php artisan make:model Form -m

Write the following code inside the [timestamp]_create_forms_table.php file.

// create_forms_table.php

public function up()
{
    Schema::create('forms', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->string('item_name');
        $table->string('sku_no');
        $table->integer('price');
        $table->timestamps();
    });
}

Now, create a table using the following command.

php artisan migrate

Also, to prevent mass assignment exception, add the $fillable property.

<?php
// Form.php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Form extends Model
{
    protected $fillable = ['item_name', 'sku_no', 'price'];
}

Inside the views folder, create layout.blade.php file and add the following code.

<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Laravel 5.8 Form Validation Example Tutorial</title> <link href="{{ asset('css/app.css') }}" rel="stylesheet" type="text/css" /> </head> <body> <div class="container"> @yield('content') </div> <script src="{{ asset('js/app.js') }}" type="text/js"></script> </body> </html>

Now, in the same folder, create one file called create.blade.php and add the following code.

<!-- create.blade.php --> @extends('layout') @section('content') <style> .uper { margin-top: 40px; } </style> <div class="card uper"> <div class="card-header"> Add Item </div> <div class="card-body"> @if ($errors->any()) <div class="alert alert-danger"> <ul> @foreach ($errors->all() as $error) <li>{{ $error }}</li> @endforeach </ul> </div><br /> @endif <form method="post" action="{{ route('form.store') }}"> <div class="form-group"> @csrf <label for="name">Item Name:</label> <input type="text" class="form-control" name="item_name"/> </div> <div class="form-group"> <label for="price">SKU Number :</label> <input type="text" class="form-control" name="sku_no"/> </div> <div class="form-group"> <label for="quantity">Item Price :</label> <input type="text" class="form-control" name="price"/> </div> <button type="submit" class="btn btn-primary">Create Item</button> </form> </div> </div> @endsection

Now, write the FormController’s create() function.

// FormController.php

public function create()
{
    return view('create');
}

Now, you can access the form on this URL: http://localhost:8000/form.

 

Writing The Validation Logic

Okay, now we can write the validation logic inside FormController’s store() function.

// FormController.php

public function store(Request $request)
{
    $validatedData = $request->validate([
        'item_name' => 'required|max:255',
        'sku_no' => 'required|alpha_num',
        'price' => 'required|numeric',
    ]);

    \App\Form::create($validatedData);

    return response()->json('Form is successfully validated and data has been saved');
}

As you can see, we have passed the desired validation rules into the validate() method. Again, if the validation fails, the proper response will automatically be generated. If the validation passes, our controller will continue executing normally and save the data in the database, and we get the json response.

If we submit the form without filling any values, then we get the error response like the following.

 

Stopping On First Validation Failure

Sometimes you may wish to stop running validation rules on an attribute after the first validation failure. To do so, assign the bail rule to the attribute.

// FormController.php

$validatedData = $request->validate([
    'item_name' => 'bail|required|max:255',
    'sku_no' => 'required|alpha_num',
    'price' => 'required|numeric',
]);

In this example, if the required rule on the item_name attribute fails, the max rule won’t be checked. Rules will be validated in the order they are assigned.

Displaying Validation Errors

If validation fails, Laravel automatically redirects the user back to their previous location. Also, all of the validation errors are automatically flashed to the session.

Notice that we did not have to explicitly bind the error messages to a view in our GET route. It is because Laravel will check for the errors in the session data, and automatically bind them to the view if they are available.

In our example, we have iterated over the $errors variable inside the create.blade.php file. That is why the user can see the errors.

@if ($errors->any())
    <div class="alert alert-danger">
        <ul>
            @foreach ($errors->all() as $error)
                <li>{{ $error }}</li>
            @endforeach
        </ul>
    </div><br />
@endif

Form Request Validation in Laravel 5.8

In the above example, we have written the validation rules inside the controller function. We can also create a separate file to hold the validation rules. For more complex validation scenarios, you may wish to create a “form request.” Form requests are custom request classes that contain validation logic. To create a form request class, use the make:request Artisan CLI command.

php artisan make:request FieldRequest

It will create a file called FieldRequest.php inside the app >> Http >> Requests folder.

Let’s add a few validation rules inside the rules method.

// FieldRequest.php

public function rules()
{
    return [
        'item_name' => 'bail|required|max:255',
        'sku_no' => 'required|alpha_num',
        'price' => 'required|numeric',
    ];
}

Also, you need to return true from the authorize() method inside the FieldRequest.php file. If you plan to have an authorization logic in another part of your application, return true from the authorize() method.

// FieldRequest.php

public function authorize()
{
    return true;
}

So, now, you do not need to re-write these rules inside the FormController.php’s store() function.

You need to import the FieldRequest namespace inside the FormController.php file and type-hint FieldRequest on the store() method so it is injected as a dependency.

<?php
// FormController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Http\Requests\FieldRequest;

class FormController extends Controller
{
    public function create()
    {
        return view('create');
    }

    public function store(FieldRequest $request)
    {
        $validatedData = $request->validated();

        \App\Form::create($validatedData);

        return response()->json('Form is successfully validated and data has been saved');
    }
}

If the validation fails, the redirect response will be generated to send a user back to their previous location. The errors will also be flashed to a session, so they are available for display.

Customizing The Error Messages

You may customize the error messages used by the form request by overriding the messages method. This method should return an array of attribute/rule pairs and their corresponding error messages.

// FieldRequest.php

/**
 * Get the error messages for the defined validation rules.
 *
 * @return array
 */
public function messages()
{
    return [
        'item_name.required' => 'An Item Name is required',
        'sku_no.required' => 'An SKU NO is required',
        'price.required' => 'The price is required',
    ];
}

Save the file and again submit the form without any values and you will see these error messages instead of default error messages.

 

Manually Creating Validators

If you do not want to use a validate() method on the request, you may create the validator instance manually using the Validator facade. The make method on the facade generates a new validator instance.

// FormController.php

use Validator;

public function store(Request $request)
{
    $validatedData = Validator::make($request->all(), [
        'item_name' => 'bail|required|max:255',
        'sku_no' => 'required|alpha_num',
        'price' => 'required|numeric',
    ])->validate();

    \App\Form::create($validatedData);

    return response()->json('Form is successfully validated and data has been saved');
}

It will also give us the same output. If you would like to create a validator instance manually but still take advantage of the automatic redirection offered by the request's validate() method, you may call the validate() method on the existing validator instance.
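If you would rather handle a failure yourself instead of relying on the automatic redirect, one common pattern (sketched here as an alternative, not taken from the original example) is to check the validator manually, reusing the Validator facade imported in the previous snippet:

// FormController.php

public function store(Request $request)
{
    $validator = Validator::make($request->all(), [
        'item_name' => 'bail|required|max:255',
        'sku_no'    => 'required|alpha_num',
        'price'     => 'required|numeric',
    ]);

    if ($validator->fails()) {
        // send the user back to the form with the errors and the old input
        return redirect()->route('form.create')
            ->withErrors($validator)
            ->withInput();
    }

    \App\Form::create($request->only(['item_name', 'sku_no', 'price']));

    return response()->json('Form is successfully validated and data has been saved');
}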

For more validation, you can check out Laravel 5.8’s official documentation.

Finally, Laravel 5.8 Form Validation Tutorial With Example is over.


TTL - Perfect Accuracy by using an insertable VIEW

One more comment regarding TTL in MySQL:
If you are looking for perfect accuracy and never want to access rows that are older than the defined TTL you can hide the table t (from my previous post) behind a view. This view will automatically select only rows within TTL lifespan:
CREATE VIEW ttl as SELECT id, content, created_at FROM t 
       WHERE created_at >= NOW() - INTERVAL 10 SECOND;
This view is insertable, so you can fully use this view and you are not distracted by the additional column "bucket".
INSERT INTO ttl VALUES (NULL, "This is a test", NULL);
You could even exclude the column "created_at" from the view definition, if it were not for bug #94550; 'created_at' could then be handled fully internally.
This view does not affect performance much. In my simple test it did not show any effect. Just better usability and better accuracy of TTL.
Limitations
You cannot use foreign keys with your ttl'ed table and view. This is because partitioning and foreign keys are mutually exclusive. If you need foreign keys, go with the simple delete event procedure and forget about the view.
Due to bug #94550 you have to set explicit_defaults_for_timestamp to OFF and you always have to insert NULL into column 'created_at'.
In this whole setup the TTL is mentioned in four locations: In the partitioning definition, in the definition of the generated column 'bucket', in the cleaning event procedure and in the WHERE clause of the view. This makes it easier to screw up the setup. Make sure you use the same value everywhere. Same applies for the number of partitions in the table definition as well as the cleaning event procedure.

SCaLE 17x

The Southern California Linux Expo has a MySQL track this year, and if you are around Pasadena, you should be here.  Besides the amazing expo hall, this is the biggest open source show on the left coast.  The MySQL track features talks on the MySQL Document Store, Kubernetes, query optimization, analytics, consistency at Facebook, and MySQL security.

High Availability on a Shoestring Budget - Deploying a Minimal Two Node MySQL Galera Cluster

We regularly get questions about how to set up a Galera cluster with just 2 nodes.

The documentation clearly states you should have at least 3 Galera nodes to avoid network partitioning. But there are some valid reasons for considering a 2 node deployment, e.g., if you want to achieve database high availability but have a limited budget to spend on a third database node. Or perhaps you are running Galera in a development/sandbox environment and prefer a minimal setup.


Galera implements a quorum-based algorithm to select a primary component through which it enforces consistency. The primary component needs to have a majority of votes, so in a 2 node system, there would be no majority resulting in split brain. Fortunately, it is possible to add a garbd (Galera Arbitrator Daemon), which is a lightweight stateless daemon that can act as the odd node. Arbitrator failure does not affect the cluster operations and a new instance can be reattached to the cluster at any time. There can be several arbitrators in the cluster.

ClusterControl has support for deploying garbd on non-database hosts.

Normally a Galera cluster needs at least three hosts to be fully functional, however, at deploy time, two nodes would suffice to create a primary component. Here are the steps:

  1. Deploy a Galera cluster of two nodes,
  2. After the cluster has been deployed by ClusterControl, add garbd on the ClusterControl node.

You should end up with the below setup:

Deploy the Galera Cluster

Go to the ClusterControl Deploy section to deploy the cluster.

After selecting the technology that we want to deploy, we must specify User, Key or Password and port to connect by SSH to our hosts. We also need the name for our new cluster and if we want ClusterControl to install the corresponding software and configurations for us.

After setting up the SSH access information, we must select vendor/version and we must define the database admin password, datadir and port. We can also specify which repository to use.

Even though ClusterControl warns you that a Galera cluster needs an odd number of nodes, only add two nodes to the cluster.

Deploying a Galera cluster will trigger a ClusterControl job which can be monitored at the Jobs page.

Install Garbd

Once deployment is complete, install garbd on the ClusterControl host. We have the option to deploy garbd from ClusterControl, but this option won’t work if we want to deploy it in the same ClusterControl server. This is to avoid some issue related to the database versions and package dependencies.

So, we must install it manually, and then import garbd to ClusterControl.

Let’s see the manual installation of Percona Garbd on CentOS 7.

Create the Percona repository file:

$ vi /etc/yum.repos.d/percona.repo

[percona-release-$basearch]
name = Percona-Release YUM repository - $basearch
baseurl = http://repo.percona.com/release/$releasever/RPMS/$basearch
enabled = 1
gpgcheck = 0

[percona-release-noarch]
name = Percona-Release YUM repository - noarch
baseurl = http://repo.percona.com/release/$releasever/RPMS/noarch
enabled = 1
gpgcheck = 0

[percona-release-source]
name = Percona-Release YUM repository - Source packages
baseurl = http://repo.percona.com/release/$releasever/SRPMS
enabled = 0
gpgcheck = 0

Then, install the Percona XtraDB Cluster garbd package:

$ yum install Percona-XtraDB-Cluster-garbd-57

Now, we need to configure garbd. For this, we need to edit the /etc/sysconfig/garb file:

$ vi /etc/sysconfig/garb

# Copyright (C) 2012 Codership Oy
# This config file is to be sourced by garb service script.

# A comma-separated list of node addresses (address[:port]) in the cluster
GALERA_NODES="192.168.100.192:4567,192.168.100.193:4567"

# Galera cluster name, should be the same as on the rest of the nodes.
GALERA_GROUP="Galera1"

# Optional Galera internal options string (e.g. SSL settings)
# see http://galeracluster.com/documentation-webpages/galeraparameters.html
# GALERA_OPTIONS=""

# Log file for garbd. Optional, by default logs to syslog
# Deprecated for CentOS7, use journalctl to query the log for garbd
# LOG_FILE=""

Change the GALERA_NODES and GALERA_GROUP parameter according to the Galera nodes configuration. We also need to remove the line # REMOVE THIS AFTER CONFIGURATION before starting the service.

And now, we can start the garb service:

$ service garb start
Redirecting to /bin/systemctl start garb.service

Now, we can import the new garbd into ClusterControl.

Go to ClusterControl -> Select Cluster -> Add Load Balancer.

Then, select Garbd and Import Garbd section.

Here we only need to specify the hostname or IP Address and the port of the new Garbd.

Importing garbd will trigger a ClusterControl job which can be monitored at the Jobs page. Once completed, you can verify garbd is running with a green tick icon at the top bar:
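As an extra sanity check (run from the mysql client on either database node), the cluster size should now report three members: the two data nodes plus the arbitrator.

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';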

That’s it!

Our minimal two-node Galera cluster is now ready!


Laravel 5.8 CRUD Tutorial With Example For Beginners

Laravel 5.8 CRUD Tutorial With Example For Beginners is today’s topic. You can upgrade to Laravel 5.8 by following this link. Laravel 5.8 continues the improvements made in Laravel 5.7 by introducing the following features.

  1. has-one-through Eloquent relationships.
  2. Improved email validation.
  3. convention-based automatic registration of authorization policies.
  4. DynamoDB cache and session drivers.
  5. Improved scheduler timezone configuration.
  6. Support for assigning multiple authentication guards to broadcast channels.
  7. PSR-16 cache driver compliance, improvements to the artisan serve command.
  8. PHPUnit 8.0 support.
  9. Carbon 2.0 support.
  10. Pheanstalk 4.0 support, and a variety of other bug fixes and usability improvements.

You can find the detailed guide on Laravel 5.8 releases.

Server Requirements

The following are the server requirements.

  1. PHP >= 7.1.3
  2. OpenSSL PHP Extension
  3. PDO PHP Extension
  4. Mbstring PHP Extension
  5. Tokenizer PHP Extension
  6. XML PHP Extension
  7. Ctype PHP Extension
  8. JSON PHP Extension
  9. BCMath PHP Extension
Laravel 5.8 CRUD Tutorial

You can install Laravel 5.8 via global installer or using the Composer Create-Project command.

composer create-project --prefer-dist laravel/laravel laravel58crud

 

Now, go inside the project and open the project in your favorite editor.

cd laravel58crud
code .

Step 1: Configure the MySQL Database

Now, first, we need to create a database in MySQL and then connect that database to the Laravel application. You can also use phpMyAdmin to create the database.

I have created a MySQL database called laravel58crud and now write the MySQL credentials inside the .env file.

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=laravel58crud
DB_USERNAME=root
DB_PASSWORD=root

So now you will be able to connect the MySQL database.

Laravel always ships with migration files, so you can generate the tables in the database using the following command.

php artisan migrate

 

We will create a CRUD operation on Books. So the user can create, read, update, and delete the books from the database. So, let’s create a model and migration files.

Step 2: Create a model and migration files.

Type the following command to create a model and migration files.

php artisan make:model Book -m

It will create a Book.php file and [timestamp]create_books_table.php migration file.

Now, open the migration file inside the database >> migrations >> [timestamp]create_books_table file and add the following schema inside it.

public function up()
{
    Schema::create('books', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->string('book_name');
        $table->string('isbn_no');
        $table->integer('book_price');
        $table->timestamps();
    });
}

Now, create a table in the database using the following command.

php artisan migrate

 

So, in the database, the table is created successfully.

Now, add the fillable property inside Book.php file.

<?php
// Book.php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Book extends Model
{
    protected $fillable = ['book_name', 'isbn_no', 'book_price'];
}

Step 3: Create routes and controller

First, create the BookController using the following command.

php artisan make:controller BookController --resource

Now, inside routes >> web.php file, add the following line of code.

<?php
// web.php

Route::get('/', function () {
    return view('welcome');
});

Route::resource('books', 'BookController');

Actually, by adding that single Route::resource line, we have registered multiple routes for our application. We can check them using the following command.

php artisan route:list

 

Okay, now open the BookController.php file, and you can see that all the functions declarations are there.

<?php
// BookController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class BookController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        //
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function create()
    {
        //
    }

    /**
     * Store a newly created resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
        //
    }

    /**
     * Display the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function show($id)
    {
        //
    }

    /**
     * Show the form for editing the specified resource.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function edit($id)
    {
        //
    }

    /**
     * Update the specified resource in storage.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function update(Request $request, $id)
    {
        //
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param  int  $id
     * @return \Illuminate\Http\Response
     */
    public function destroy($id)
    {
        //
    }
}

Step 4: Create the views

Inside resources >> views folder, create the following three files.

  1. create.blade.php
  2. edit.blade.php
  3. index.blade.php

Also inside views folder, we also need to create a layout file. So create one file inside the views folder called layout.blade.php. Add the following code inside the layout.blade.php file.

<!-- layout.blade.php --> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Laravel 5.8 CRUD Example Tutorial</title> <link href="{{ asset('css/app.css') }}" rel="stylesheet" type="text/css" /> </head> <body> <div class="container"> @yield('content') </div> <script src="{{ asset('js/app.js') }}" type="text/js"></script> </body> </html>

So basically, this file is our main template file, and all the other view files will extend this file. Here, we have already included the Bootstrap 4 by adding the app.css.

Next step would be to code the create.blade.php file. So write the following code inside it.

<!-- create.blade.php --> @extends('layout') @section('content') <style> .uper { margin-top: 40px; } </style> <div class="card uper"> <div class="card-header"> Add Book </div> <div class="card-body"> @if ($errors->any()) <div class="alert alert-danger"> <ul> @foreach ($errors->all() as $error) <li>{{ $error }}</li> @endforeach </ul> </div><br /> @endif <form method="post" action="{{ route('books.store') }}"> <div class="form-group"> @csrf <label for="name">Book Name:</label> <input type="text" class="form-control" name="book_name"/> </div> <div class="form-group"> <label for="price">Book ISBN Number :</label> <input type="text" class="form-control" name="isbn_no"/> </div> <div class="form-group"> <label for="quantity">Book Price :</label> <input type="text" class="form-control" name="book_price"/> </div> <button type="submit" class="btn btn-primary">Create Book</button> </form> </div> </div> @endsection

Okay, now we need to open the BookController.php file, and on the create function, we need to return a view, and that is the create.blade.php file.

// BookController.php

public function create()
{
    return view('create');
}

Save the file and start the Laravel development server using the following command.

php artisan serve

Go to the http://localhost:8000/books/create. 

You can see something like this.

 

Step 5: Add Validation rules and save data

Now, the first step is to import the Book model's namespace inside the BookController.php file.

<?php
// BookController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Book;

Now, write the following code inside the BookController.php file’s store() function.

// BookController.php

public function store(Request $request)
{
    $validatedData = $request->validate([
        'book_name' => 'required|max:255',
        'isbn_no' => 'required|alpha_num',
        'book_price' => 'required|numeric',
    ]);

    $book = Book::create($validatedData);

    return redirect('/books')->with('success', 'Book is successfully saved');
}

Here, what we have done first is validate all three fields of the form.

If the incoming data fails any of the rules, the user is sent straight back to the form with the error messages.

We need to loop through those error messages inside the create.blade.php file, which we have already done.

If you leave all the form fields empty, then you will find an error message like this image.

 

Now, if you fill the form fields correctly, then it will create a new row in the database. I have created a new book.

 

Step 6: Display the data.

Now, we need to write the BookController’s index function to return the index view with the data fetched from the database. Write the following code inside the index() function.

// BookController.php

public function index()
{
    $books = Book::all();

    return view('index', compact('books'));
}

Okay, now create the file called index.blade.php inside the views folder and add the following code.

<!-- index.blade.php --> @extends('layout') @section('content') <style> .uper { margin-top: 40px; } </style> <div class="uper"> @if(session()->get('success')) <div class="alert alert-success"> {{ session()->get('success') }} </div><br /> @endif <table class="table table-striped"> <thead> <tr> <td>ID</td> <td>Book Name</td> <td>ISBN Number</td> <td>Book Price</td> <td colspan="2">Action</td> </tr> </thead> <tbody> @foreach($books as $book) <tr> <td>{{$book->id}}</td> <td>{{$book->book_name}}</td> <td>{{$book->isbn_no}}</td> <td>{{$book->book_price}}</td> <td><a href="{{ route('books.edit',$book->id)}}" class="btn btn-primary">Edit</a></td> <td> <form action="{{ route('books.destroy', $book->id)}}" method="post"> @csrf @method('DELETE') <button class="btn btn-danger" type="submit">Delete</button> </form> </td> </tr> @endforeach </tbody> </table> <div> @endsection

Here, we have looped through the books array and displayed the data in tabular format.

Also, we have added two buttons for edit and delete operation.

Step 7: Create Edit and Update Operation

First, we need to add the following code inside the BookController.php file’s edit function.

// BookController.php

public function edit($id)
{
    $book = Book::findOrFail($id);

    return view('edit', compact('book'));
}

Now, create a new file inside the views folder called edit.blade.php and add the following code.

<!-- edit.blade.php --> @extends('layout') @section('content') <style> .uper { margin-top: 40px; } </style> <div class="card uper"> <div class="card-header"> Edit Book </div> <div class="card-body"> @if ($errors->any()) <div class="alert alert-danger"> <ul> @foreach ($errors->all() as $error) <li>{{ $error }}</li> @endforeach </ul> </div><br /> @endif <form method="post" action="{{ route('books.update', $book->id) }}"> <div class="form-group"> @csrf @method('PATCH') <label for="name">Book Name:</label> <input type="text" class="form-control" name="book_name" value="{{$book->book_name}}"/> </div> <div class="form-group"> <label for="price">Book ISBN Number :</label> <input type="text" class="form-control" name="isbn_no" value="{{$book->isbn_no}}"/> </div> <div class="form-group"> <label for="quantity">Book Price :</label> <input type="text" class="form-control" name="book_price" value="{{$book->book_price}}"/> </div> <button type="submit" class="btn btn-primary">Update Book</button> </form> </div> </div> @endsection

In this file, you can show the values of the particular row using its unique id inside the form fields.

So, when you hit this URL: http://localhost:8000/books/1/edit, you will see something like below image.

 

Now, add the following code inside the BookController’s update() function.

// BookController.php

public function update(Request $request, $id)
{
    $validatedData = $request->validate([
        'book_name' => 'required|max:255',
        'isbn_no' => 'required|alpha_num',
        'book_price' => 'required|numeric',
    ]);

    Book::whereId($id)->update($validatedData);

    return redirect('/books')->with('success', 'Book is successfully updated');
}

So now, you can edit and update the data into the database successfully.

Step 8: Create Delete Functionality

Write the following code inside the BookController’s destroy function.

// BookController.php

public function destroy($id)
{
    $book = Book::findOrFail($id);
    $book->delete();

    return redirect('/books')->with('success', 'Book is successfully deleted');
}

Now, go to this URL: http://localhost:8000/books and try to delete the book.

You can see that you have successfully deleted the book.

So, our complete BookController.php code looks like below.

<?php // BookController.php namespace App\Http\Controllers; use Illuminate\Http\Request; use App\Book; class BookController extends Controller { /** * Display a listing of the resource. * * @return \Illuminate\Http\Response */ public function index() { $books = Book::all(); return view('index', compact('books')); } /** * Show the form for creating a new resource. * * @return \Illuminate\Http\Response */ public function create() { return view('create'); } /** * Store a newly created resource in storage. * * @param \Illuminate\Http\Request $request * @return \Illuminate\Http\Response */ public function store(Request $request) { $validatedData = $request->validate([ 'book_name' => 'required|max:255', 'isbn_no' => 'required|alpha_num', 'book_price' => 'required|numeric', ]); $book = Book::create($validatedData); return redirect('/books')->with('success', 'Book is successfully saved'); } /** * Display the specified resource. * * @param int $id * @return \Illuminate\Http\Response */ public function show($id) { // } /** * Show the form for editing the specified resource. * * @param int $id * @return \Illuminate\Http\Response */ public function edit($id) { $book = Book::findOrFail($id); return view('edit', compact('book')); } /** * Update the specified resource in storage. * * @param \Illuminate\Http\Request $request * @param int $id * @return \Illuminate\Http\Response */ public function update(Request $request, $id) { $validatedData = $request->validate([ 'book_name' => 'required|max:255', 'isbn_no' => 'required|alpha_num', 'book_price' => 'required|numeric', ]); Book::whereId($id)->update($validatedData); return redirect('/books')->with('success', 'Book is successfully updated'); } /** * Remove the specified resource from storage. * * @param int $id * @return \Illuminate\Http\Response */ public function destroy($id) { $book = Book::findOrFail($id); $book->delete(); return redirect('/books')->with('success', 'Book is successfully deleted'); } }

So, we have completed the Laravel 5.8 CRUD operations tutorial with an example built from scratch.

I have put this code on Github so you can check it out as well.

GITHUB CODE

The post Laravel 5.8 CRUD Tutorial With Example For Beginners appeared first on AppDividend.

TTL - Time-to-Live in MySQL

A customer recently asked for a TTL feature in MySQL. The idea is to automatically delete rows from a certain table after a defined lifespan, e.g. 60 seconds. This feature is common in many NoSQL databases, but it is not available in MySQL. However, MySQL offers everything you need to implement it yourself, and thanks to partitioning it can be much more efficient than merely deleting rows. Let's test it.
tl;dr
Partition the table and truncate partitions in a regular event procedure; that does the trick and comes at a fraction of the cost of regularly deleting rows.
The test case
The table needs a column to keep track of row age. This can be either a "created_at" column or an "expires_at" column. ("expires_at" has the additional advantage that each row can have an individual lifespan, which is not possible in many NoSQL solutions.)
So my table is
CREATE TABLE `t` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `content` varchar(42) DEFAULT NULL
);
I tested two variants to implement a 10 seconds TTL on my table "t":
The simple solution
Run an event every 10 seconds to delete rows that were created more than 10 seconds ago.
DELIMITER |
CREATE EVENT ttl_delete
        ON SCHEDULE EVERY 10 SECOND STARTS '2019-03-04 16:00:00' DISABLE
        DO BEGIN
                DELETE FROM t WHERE created_at < NOW() - INTERVAL 10 SECOND;
        END |
DELIMITER ;
An index on "created_at" might improve performance of the DELETE job. But in any case it is quite expensive to scan the table and remove roughly 50% of its rows, at least if the INSERT rate is high.
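For reference, such an index could be added with a statement along these lines (just a sketch; the index name is arbitrary and not taken from the original post):
ALTER TABLE t ADD INDEX idx_created_at (created_at);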
The efficient solution
Instead of DELETing we can use the much faster TRUNCATE operation. Obviously we do not want to TRUNCATE the whole table, but if we distribute the inserted rows into partitions it is safe to truncate any partition that contains only outdated rows. Let's define three partitions (or buckets): one that is currently being written to, one that holds the rows of the last 10 seconds, and one partition that can be truncated because its rows are older than 10 seconds. The key is to calculate the bucket from the current time. This can be done with the expression FLOOR(TO_SECONDS(NOW()) / 10) % 3, or more generically FLOOR(TO_SECONDS(NOW()) / ttl) % number_of_buckets.
Now we can partition the table by this expression. For that we add a generated column to calculate the bucket from the column "created_at" and partition the table by column "bucket". The table now looks like this:
CREATE TABLE `t` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `content` varchar(42) DEFAULT NULL,
  `bucket` tinyint(4) GENERATED ALWAYS AS     
       (floor(TO_SECONDS(`created_at`) / 10) % 3) STORED NOT NULL,
  PRIMARY KEY (`id`,`bucket`)
)  PARTITION BY LIST (`bucket`)
(PARTITION p0 VALUES IN (0),
 PARTITION p1 VALUES IN (1),
 PARTITION p2 VALUES IN (2));
And the event procedure is like this:
DELIMITER |
CREATE EVENT ttl_truncate
    ON SCHEDULE EVERY 10 SECOND STARTS '2019-03-04 16:00:00' DISABLE
    DO BEGIN
            CASE FLOOR(TO_SECONDS(NOW())/10)%3
            WHEN 0 THEN ALTER TABLE test.t TRUNCATE PARTITION p1;
            WHEN 1 THEN ALTER TABLE test.t TRUNCATE PARTITION p2;
            WHEN 2 THEN ALTER TABLE test.t TRUNCATE PARTITION p0;
            END CASE;
        END|
DELIMITER ;
Watching the rows come and go
To verify that the procedure works as expected, I created a small monitor procedure that displays the number of rows per partition every second. That makes it easy to follow in which partition data is currently added and when a partition gets truncated.
DELIMITER |
CREATE PROCEDURE monitor()
BEGIN  
  WHILE 1=1 DO
   SELECT "p0" AS "part", count(*) FROM t PARTITION (p0)
           UNION SELECT "p1", count(*) FROM t PARTITION (p1)
           UNION SELECT "p2", count(*) FROM t PARTITION (p2);
   SELECT now() AS "NOW", floor(to_seconds(now())/10)%3 AS "Bucket";
   SELECT sleep(1);
  END WHILE;
END|
DELIMITER ;
This procedure is not ideal. So many count(*) calls will create quite some locking, but it is accurate. The alternative is to read data from INFORMATION_SCHEMA.PARTITIONS, but this does not give the exact row count, which I needed for verification.
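For completeness, the approximate per-partition counts could be pulled like this (the numbers are only estimates for InnoDB, which is exactly the limitation mentioned above):
SELECT PARTITION_NAME, TABLE_ROWS
  FROM INFORMATION_SCHEMA.PARTITIONS
 WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 't';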
Increasing Accuracy
If TTL is 10 seconds, deleting or truncating every 10 seconds means you have at least 10 seconds of rows available. In reality you will have 10 to 20 seconds worth of data, so on average 15 seconds (assuming a constant INSERT rate). If you run the cleaner job more often (say once per second), the table holds on average only 10.5 seconds worth of data. This comes at the cost of running the cleaning event more often, but it might be well worth it, because all other queries benefit from having less data to operate on and less memory consumed by expired rows.
If you go with the simple solution of a regular DELETE event, it is sufficient to schedule the event more often.
If you prefer the TRUNCATE PARTITION option, it is necessary to increase the number of partitions or buckets to 12 (= 2 + TTL / cleaning interval, i.e. 2 + 10 / 1 in this example).
The expression for the calculated bucket column then divides by the cleaning interval (1 second here) instead of by the TTL:
`bucket` tinyint(4) GENERATED ALWAYS AS
       (floor(TO_SECONDS(`created_at`) / cleaning_interval) % number_of_buckets) STORED NOT NULL
and the partitioning needs to be adapted as well.
And the CASE construct in the cleaner event must be extended for each newly existing bucket/partition:
        WHEN n THEN ALTER TABLE test.t TRUNCATE PARTITION p(n+1);
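To make this concrete, here is a sketch of what the finer-grained variant could look like with TTL = 10 seconds and the cleaner running every second (12 buckets). This is only an illustration derived from the formulas above, not code from the original post, and the event name ttl_truncate_fine is made up:
CREATE TABLE `t` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `content` varchar(42) DEFAULT NULL,
  -- the bucket changes every second (the cleaning interval); 12 buckets = 2 + 10 / 1
  `bucket` tinyint(4) GENERATED ALWAYS AS
       (floor(TO_SECONDS(`created_at`) / 1) % 12) STORED NOT NULL,
  PRIMARY KEY (`id`,`bucket`)
) PARTITION BY LIST (`bucket`)
(PARTITION p0 VALUES IN (0),  PARTITION p1 VALUES IN (1),
 PARTITION p2 VALUES IN (2),  PARTITION p3 VALUES IN (3),
 PARTITION p4 VALUES IN (4),  PARTITION p5 VALUES IN (5),
 PARTITION p6 VALUES IN (6),  PARTITION p7 VALUES IN (7),
 PARTITION p8 VALUES IN (8),  PARTITION p9 VALUES IN (9),
 PARTITION p10 VALUES IN (10), PARTITION p11 VALUES IN (11));

DELIMITER |
CREATE EVENT ttl_truncate_fine
    ON SCHEDULE EVERY 1 SECOND DISABLE
    DO BEGIN
        -- truncate the partition that will be reused next; its rows are at least 10 seconds old
        CASE FLOOR(TO_SECONDS(NOW()) / 1) % 12
        WHEN 0  THEN ALTER TABLE test.t TRUNCATE PARTITION p1;
        WHEN 1  THEN ALTER TABLE test.t TRUNCATE PARTITION p2;
        WHEN 2  THEN ALTER TABLE test.t TRUNCATE PARTITION p3;
        WHEN 3  THEN ALTER TABLE test.t TRUNCATE PARTITION p4;
        WHEN 4  THEN ALTER TABLE test.t TRUNCATE PARTITION p5;
        WHEN 5  THEN ALTER TABLE test.t TRUNCATE PARTITION p6;
        WHEN 6  THEN ALTER TABLE test.t TRUNCATE PARTITION p7;
        WHEN 7  THEN ALTER TABLE test.t TRUNCATE PARTITION p8;
        WHEN 8  THEN ALTER TABLE test.t TRUNCATE PARTITION p9;
        WHEN 9  THEN ALTER TABLE test.t TRUNCATE PARTITION p10;
        WHEN 10 THEN ALTER TABLE test.t TRUNCATE PARTITION p11;
        WHEN 11 THEN ALTER TABLE test.t TRUNCATE PARTITION p0;
        END CASE;
    END|
DELIMITER ;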
What happens if...
... the event stops?
Then you keep all your rows which will likely create some follow-up problems. As always: Proper monitoring is key. Think about MySQL Enterprise Monitor for example.

... the event procedure runs at inaccurate timing due to overall system load?
No big problem. It will never run too early. So it will never remove rows too early. If it runs too late it will clean rows too late so you have more garbage in your table which might affect other queries. The real TTL is increased if this happens.
Performance Considerations
By no means am I able to run proper performance tests. I am running on a Windows 10 laptop, with VirtualBox and Oracle Linux, and MySQL runs inside a Docker container. So there is plenty of reason to get poor numbers. But to compare the two implementations it should be sufficient.
I have extended the cleaner events to report the time needed to execute the event procedure. Here is the example of the simple cleaner job:
DELIMITER |
CREATE EVENT ttl_delete
  ON SCHEDULE EVERY 10 SECOND STARTS '2019-03-04 16:00:00' DISABLE
  DO BEGIN
    DECLARE t1,t2 TIME(6);
    SET t1=current_time(6);

    DELETE FROM t WHERE created_at < NOW() - INTERVAL 10 SECOND;
    SET t2=current_time(6);
    INSERT INTO ttl_report VALUES ("DELETE simple", now(),
                                   timediff(t2,t1));

  END|
DELIMITER ;
The load was generated by mysqlslap, which only inserted rows in the table. Each test run starts the respective cleaner event, runs the mysqlslap load and then stops the cleaner event.
mysql -h 127.0.0.1 -uroot -pXXX -e \
        "USE test; ALTER event ttl_delete ENABLE;"
 
mysqlslap -h 127.0.0.1 -uroot -pXXX --create-schema=test \
   --concurrency=5 --iterations=20 --number-of-queries=10000 \
   --query="INSERT INTO test.t (created_at, content) VALUES (NULL,md5(id));"
 
mysql -h 127.0.0.1 -uroot -pXXX -e \
    "USE test; ALTER event ttl_delete DISABLE;"

mysql -h 127.0.0.1 -uroot -pXXX -e \
      "USE test; ALTER event ttl_truncate ENABLE;"
 
mysqlslap -h 127.0.0.1 -uroot -pXXX --create-schema=test \
   --concurrency=5 --iterations=20 --number-of-queries=10000 \
   --query="INSERT INTO test.t (created_at, content) VALUES (NULL,md5(id));"
mysql -h 127.0.0.1 -uroot -pXXX -e \
    "USE test; ALTER event ttl_truncate DISABLE;"

The results are clearly in favor of truncating partitions, and the difference should grow as the INSERT rate increases. My poor setup achieved less than 1000 inserts per second...

select who, avg(how_long) from ttl_report GROUP BY who;
+---------------+--------------------+
| who           | avg(how_long)      |
+---------------+--------------------+
| DELETE simple | 1.1980474444444444 |
| truncate      | 0.0400163333333333 |
+---------------+--------------------+
2 rows in set (0.0014 sec)
 
Side note
You might wonder why my test load is
  INSERT INTO test.t (created_at, content) VALUES (NULL,'foo');
Why do I mention the column "created_at" but then store NULL so that it gets the default of CURRENT_TIMESTAMP? If I omit the created_at column in this INSERT statement, I get an error from the generated column due to bug #94550. Setting explicit_defaults_for_timestamp to OFF and then mentioning the timestamp column in the INSERT is a workaround.
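For clarity, the workaround boils down to something like this (an illustrative sketch only; the global setting applies to new connections, or it can be set in my.cnf instead):
SET GLOBAL explicit_defaults_for_timestamp = OFF;
-- With the timestamp column listed explicitly, NULL falls back to CURRENT_TIMESTAMP:
INSERT INTO test.t (created_at, content) VALUES (NULL, 'foo');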

dbdeployer community - Part 2: Percona XtraDB Cluster

This was not on the radar. I have never been proficient in Galera clusters and related technologies, and thus I hadn’t given much thought to Percona XtraDB Cluster (PXC), until Alkin approached me at FOSDEM and proposed to extend dbdeployer features to support PXC. He mentioned that many support engineers at Percona use dbdeployer on a daily basis and that the addition of PXC would be welcome.

I could not follow up much during the conference, but we agreed on making a proof-of-concept in an indirect way: if several nodes of PXC can run in the same host using shell scripts, dbdeployer could reproduce that behavior.

A few weeks later, when dbdeployer had already been enhanced with flavors and capabilities, I got the script that can deploy several nodes in the same host. It’s a simplification of the ones used in Percona PXC tests, which got me started.

I followed a method similar to the one I used for MySQL Group Replication. The technology is similar, although the MySQL Team used a different approach for the installation. The basic principle is that the cluster needs two ports per node: in addition to the regular MySQL port, there is a communication port (SST, or State Snapshot Transfer port) that is needed to exchange cluster data. Using this information, and following the sample in the script, I could produce a prototype that surprisingly worked on the first try!

The cluster did deploy, and the replication test, which comes free of charge when you implement a replication-type sandbox using standard templates, worked flawlessly.

Then I hooked the deployment method into dbdeployer's concurrency engine, which is able to deploy several nodes at once. Here I hit the first problem. In PXC, the nodes are not equal at startup. The first node needs to be initialised without the other nodes' addresses, and it becomes the reference for other nodes to join the cluster. If I provided complete references for all nodes (as I do for MySQL Group Replication), it didn’t work.

After some talk with Percona engineers on Slack, I figured out that the nodes can be deployed together, and the second and third node will just wait for the first one to come online and then join. That worked in principle, i.e. when I deployed sequentially, but not when the nodes were deployed all at once. Fortunately, dbdeployer has several ways of enabling debugging output, and after a few unsuccessful attempts I found the reason: PXC initialisation happens using rsync on port 4444. When the nodes are started sequentially, the receiving node takes control of port 4444 without conflicts, gets the job done and releases the port. When we deploy all nodes at once, there is a race for possession of the synchronisation port, and a random node will win it, leaving the others waiting forever.

Thus, I modified the installation to allocate a different rsync port for each node, and after that the concurrent installation worked as well.

The last obstacle was the discovery that there is yet another port (IST, or Incremental State Transfer port), which is always one number bigger than the SST port. Thus, if the SST port is, say, 5555, the IST port is set to 5556. This means that, unlike other dbdeployer clusters, I can’t set port numbers incrementally, but need to set them with an interval. I did that, and the cluster came up with a default allocation of four ports per node (MySQL, rsync, SST, IST). If we also enable MySQLX, which is included since PXC binaries are based on MySQL 5.7, we get 5 ports per node, and a majestic 15 ports for a three-node cluster.

Anyway, the support for Percona XtraDB Cluster is available in dbdeployer 1.21.0. Let’s see a sample session to use the new functionality.

$ dbdeployer --version
dbdeployer version 1.21.0

$ dbdeployer unpack --prefix=pxc ~/downloads/Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl100.tar.gz
[...]
Renaming directory $HOME/opt/mysql/Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl100 to $HOME/opt/mysql/pxc5.7.25

Before trying the cluster, it would be a good practice to make sure that your system can deploy a single node.

$ dbdeployer deploy single pxc5.7.25
Database installed in $HOME/sandboxes/msb_pxc5_7_25
run 'dbdeployer usage single' for basic instructions'
. sandbox server started

$ $HOME/sandboxes/msb_pxc5_7_25/test_sb
[...]
# Tests : 11
# PASS : 11
# fail : 0

$ dbdeployer delete msb_pxc5_7_25
[...]

And now for the real test:

$ dbdeployer deploy replication --topology=pxc pxc5.7.25
Installing and starting node 1
. sandbox server started
Installing and starting node 2
...... sandbox server started
Installing and starting node 3
..... sandbox server started
Replication directory installed in $HOME/sandboxes/pxc_msb_pxc5_7_25
run 'dbdeployer usage multiple' for basic instructions'

We should now see all the allocated ports.

$ dbdeployer sandboxes --header
name type version ports
---------------------------- ------------------------ ----------- ----------------------------------------------------------------------------
pxc_msb_pxc5_7_25 : Percona-Xtradb-Cluster pxc5.7.25 [26226 26352 26353 26364 26227 26354 26355 26365 26228 26356 26357 26366 ]

If we want more detail, we can look at the sandbox description file:

$ cat $HOME/sandboxes/pxc_msb_pxc5_7_25/sbdescription.json
{
"basedir": "$HOME/opt/mysql/pxc5.7.25",
"type": "Percona-Xtradb-Cluster",
"version": "pxc5.7.25",
"flavor": "pxc",
"port": [
26226,
26352,
26353,
26364,
26227,
26354,
26355,
26365,
26228,
26356,
26357,
26366
],
"nodes": 3,
"node_num": 0,
"dbdeployer-version": "1.21.0",
"timestamp": "Thu Mar 7 17:20:03 CET 2019",
"command-line": "dbdeployer deploy replication --topology=pxc pxc5.7.25"
}

Now we can run the replication test. Given that we have a cluster where all nodes are masters, the test will create a table in each node, and read the result in each slave (again, each node):

$ $HOME/sandboxes/pxc_msb_pxc5_7_25/test_replication
# master 1
# master 2
# master 3
# slave 1
ok - '3' == '3' - Slaves received tables from all masters
# slave 2
ok - '3' == '3' - Slaves received tables from all masters
# slave 3
ok - '3' == '3' - Slaves received tables from all masters
# pass: 3
# fail: 0

It’s a simple test, but it tells us that the cluster is fully functional.

Thanks to Alkin and other Percona engineers who have tested the prototype in real time.

Reducing High CPU on MySQL: a Case Study

In this blog post, I want to share a case we worked on a few days ago. I’ll show you how we approached the resolution of a MySQL performance issue and used Percona Monitoring and Management (PMM) to support troubleshooting. The customer had noticed consistently high CPU usage on one of their MySQL instances and was not able to figure out why, as there was not much traffic hitting the app. We needed to reduce the high CPU usage on MySQL. The server is a small instance:

Models | 6xIntel(R) Xeon(R) CPU E5-2430 0 @ 2.20GHz 10GB RAM

This symptom can be caused by various different reasons. Let’s see how PMM can be used to troubleshoot the issue.

CPU

It’s important to understand where the CPU time is being consumed: user space, system space, iowait, and so on. Here we can see that CPU usage was hitting almost 100% and the majority of the time was being spent on user space. In other words, the time the CPU was executing user code, such as MySQL. Once we determined that the time was being spent on user space, we could discard other possible issues. For example, we could eliminate the possibility that a high amount of threads were competing for CPU resources, since that would cause an increase in context switches, which in turn would be taken care of by the kernel – system space.

With that we decided to look into MySQL metrics.

MySQL

As expected, there weren’t a lot of threads running—10 on average—and MySQL wasn’t being hammered with questions/transactions. It was running from 500 to 800 QPS (queries per second). The next step was to check the type of workload that was running on the instance:

In red we can see that almost all commands are SELECTs. With that in mind, we checked the handlers using SHOW STATUS LIKE 'Handler%' to verify whether those selects were doing an index scan or a full table scan.
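Concretely, that check is a one-liner; the GLOBAL keyword aggregates the counters across all sessions, which is what you want when looking at server-wide behaviour:
SHOW GLOBAL STATUS LIKE 'Handler_read%';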

Blue in this graph represents Handler_read_rnd_next, which is the counter MySQL increments every time it reads a row while doing a full table scan. Bingo!!! Around 350 selects were reading 2.5 million rows. But wait—why was this causing CPU issues rather than IO issues? If you refer to the first graph (the CPU graph), we cannot see iowait.

That is because the data was stored in the InnoDB Buffer Pool, so instead of having to read those 2.5M rows per second from disk, it was fetching them from memory. The stress had moved from disk to CPU. Now that we had identified that the issue was caused by some query or queries, we went to QAN to verify the queries and check their status:

The first query, a SELECT on table store.clients, was responsible for 98% of the load and was executing in 20+ seconds.

EXPLAIN confirmed our suspicions. The query was accessing the table using type ALL, which is the last type we want, as it means “Full Table Scan”. Taking a look at the fingerprint of the query, we identified that it was a simple query filtering clients based on the status field:

SELECT * FROM store.clients WHERE status = ?

As shown in the indexes, that column was not indexed. Talking with the customer, this turned out to be a query that was introduced as part of a new software release.
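A quick manual check outside of PMM could look like the following sketch; the literal 'active' is just a made-up stand-in for the real placeholder value:
-- The status column has no index, and EXPLAIN shows a full table scan (type: ALL).
SHOW INDEX FROM store.clients;
EXPLAIN SELECT * FROM store.clients WHERE status = 'active'\G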

From that point, we were confident that we had identified the problem. There could be more, but this particular query was definitely hurting the performance of the server. We decided to add an index and also sent an annotation to PMM, so we could refer back to the graphs to check when the index was added, whether CPU usage had dropped, and also check Handler_read_rnd_next.

To run the ALTER we decided to use pt-online-schema-change, as it was a busy table and the tool has safeguards to prevent the situation from becoming even worse. For example, we wanted to pause or even abort the alter if the number of Threads_Running exceeded a certain threshold. The threshold is controlled by

--max-load  (25 by default) and --critical-load  (50 by default):pmm-admin annotate "Started ALTER store.clients ADD KEY (status)" && \ pt-online-schema-change --alter "ADD KEY (status)" --execute u=root,D=store,t=clients && \ pmm-admin annotate "Finished ALTER store.clients ADD KEY (status)" Your annotation was successfully posted. No slaves found. See --recursion-method if host localhost.localdomain has slaves. Not checking slave lag because no slaves were found and --check-slave-lag was not specified. Operation, tries, wait: analyze_table, 10, 1 copy_rows, 10, 0.25 create_triggers, 10, 1 drop_triggers, 10, 1 swap_tables, 10, 1 update_foreign_keys, 10, 1 Altering `store`.`clients`... Creating new table... Created new table store._clients_new OK. Altering new table... Altered `store`.`_clients_new` OK. 2019-02-22T18:26:25 Creating triggers... 2019-02-22T18:27:14 Created triggers OK. 2019-02-22T18:27:14 Copying approximately 4924071 rows... Copying `store`.`clients`: 7% 05:46 remain Copying `store`.`clients`: 14% 05:47 remain Copying `store`.`clients`: 22% 05:07 remain Copying `store`.`clients`: 30% 04:29 remain Copying `store`.`clients`: 38% 03:59 remain Copying `store`.`clients`: 45% 03:33 remain Copying `store`.`clients`: 52% 03:06 remain Copying `store`.`clients`: 59% 02:44 remain Copying `store`.`clients`: 66% 02:17 remain Copying `store`.`clients`: 73% 01:50 remain Copying `store`.`clients`: 79% 01:23 remain Copying `store`.`clients`: 87% 00:53 remain Copying `store`.`clients`: 94% 00:24 remain 2019-02-22T18:34:15 Copied rows OK. 2019-02-22T18:34:15 Analyzing new table... 2019-02-22T18:34:15 Swapping tables... 2019-02-22T18:34:27 Swapped original and new tables OK. 2019-02-22T18:34:27 Dropping old table... 2019-02-22T18:34:32 Dropped old table `store`.`_clients_old` OK. 2019-02-22T18:34:32 Dropping triggers... 2019-02-22T18:34:32 Dropped triggers OK. Successfully altered `store`.`clients`. Your annotation was successfully posted.

Results


As we can see above, CPU usage dropped to less than 25%, which is 1/4 of the previous usage level. Handler_read_rnd_next dropped, and we can’t even see it once pt-osc has finished. We had a small increase in Handler_read_next, as expected, because now MySQL is using the index to resolve the WHERE clause. One interesting outcome is that the instance was able to increase its QPS by 2x after the index was added, as CPU/Full Table Scan was no longer limiting performance. On average, query time dropped from 20s to only 661ms.

Summary:
  1. Applying the correct troubleshooting steps to your problems is crucial:
    a) Understand what resources have been saturated.
    b) Understand what if anything is causing an error.
    c) From there you can divert into the areas that are related to that resource and start to narrow down the issue.
    d) Tackle the problems bit by bit.
  2. Having the right tools for the job is key to success. PMM is a great example of a tool that can help you quickly identify, drill in, and fix bottlenecks.
  3. Have realistic load tests. In this case, the new release had been tested at a concurrency level that did not match production.
  4. By identifying the culprit query we were able to:
    a.) Drop average query time from 20s to 661ms
    b.) Increase QPS by 2x
    c.) Reduce the usage of CPU to 1/4 of its level prior to our intervention

Disclosure: For security reasons, sensitive information, such as database, table, column names have been modified and graphs recreated to simulate a similar problem.

dbdeployer community - Part 1: TiDB

After a conference, when I take stock of what I have learned, I usually realise that the best achievements are the result of interacting with other attendees during the breaks, rather than simply listening to the lectures. It might be because I follow closely the blogosphere and thus the lectures have few surprises in store for me, or perhaps because many geeks take the conference as an excuse to refresh dormant friendships, catch up with technical gossip, and ask their friends some questions that were too sensitive to be discussed over Twitter and have been waiting for a chance of an in-person meeting to see the light of the day.

I surely had some of such questions, and I took advantage of the conference to ask them. As it often happens, I got satisfactory responses, but the latest FOSDEM conference was different than usual, because I got the best experience from the questions that others did ask me.

As it turned out, others were waiting for a chance to discuss things over coffee or food, and I saw that my pet project (dbdeployer) is a lot more popular than I thought, and is being used silently in several environments. This should not be surprising if you read MySQL bug reports at bugs.mysql.com, where using sandboxes to reproduce user issues is common practice. Anyway, I got some praise, some requests, a few ideas for improvements, advance notice of an incoming graphical interface, and a few concrete collaboration proposals.

One of such proposals came from Morgan Tocker, who suggested enhancing dbdeployer to support TiDB. At first, it seemed uninteresting, as TiDB is designed to be distributed, and installing just a component didn’t immediately look useful. However, Morgan pointed out that it could be used as a tool to test compatibility with existing applications, and as such it could gain much more value than I initially thought. We decided to try a quick hackathon to make a proof of concept.

It was a great pleasure to figure out, in just over one hour of close interaction, that dbdeployer design for flexibility was up to the task. We managed to make TiDB work with dbdeployer simply by exporting, editing, and re-loading a few templates.

The exercise showed strengths and limitations in both projects. We agreed that dbdeployer had to lose some assumptions (such as “I am working with a MySQL server”) and become able to recognise which flavor of MySQL-lookalike we are dealing with. At the same time, we noted that TiDB is not compatible when it comes to deployment and bootstrap: it is so simple and straightforward that its initialisation doesn’t fit in the complex operation that is a MySQL server warm-up.

Pleased with the initial success, we kept in touch and, after dbdeployer acquired the ability to tell one flavor from another, we put together the various pieces to make dbdeployer recognise and install TiDB. We found and fixed several bugs in both projects, and finally released dbdeployer 1.19.0, which can use a TiDB server transparently.

What does transparently mean? It means that tests for TiDB deployment can run alongside tests for other MySQL servers, and the sandbox scripts (such as start, stop, use, status, and test_sb) work as expected and produce a compatible output. Thus, there is a TiDB test running together with another dozen MySQL versions.

Now, if you want, you can evaluate TiDB in your computer without installing the full stack. It won’t be as fast as the real thing: what is installed as a single node is a slower emulation of the real database, but it is enough to give you an idea of what queries you can and cannot run in TiDB, and perhaps try to see if your application could run on TiDB at all.

The collaboration with TiDB was especially useful because the changes needed to smooth the TiDB integration have made dbdeployer better suited to add support for more not-quite-MySQL servers, such as the one that we’ll see in the next post.

But before reaching that point, here’s an example of TiDB deployment on Linux:

$ wget https://download.pingcap.org/tidb-master-linux-amd64.tar.gz
[...]
2019-02-24 04:46:26 (2.26 MB/s) - 'tidb-master-linux-amd64.tar.gz' saved [16304317/16304317]

$ dbdeployer unpack tidb-master-linux-amd64.tar.gz --unpack-version=3.0.0 --prefix=tidb
Unpacking tarball tidb-master-linux-amd64.tar.gz to $HOME/opt/mysql/tidb3.0.0
1
Renaming directory /home/msandbox/opt/mysql/tidb-master-linux-amd64 to /home/msandbox/opt/mysql/tidb3.0.0

TiDB tarballs don't come with a client. We need to use one from MySQL 5.7. Rather than downloading the huge tarball from the MySQL site, we can get a smaller one from a GitHub repository, using dbdeployer itself (NB: this reduced tarball is only for Linux).


$ dbdeployer remote list
Files available in https://raw.githubusercontent.com/datacharmer/mysql-docker-minimal/master/dbdata/available.json
5.7 -> [mysql-5.7.24 mysql-5.7.25]
8.0 -> [mysql-8.0.13 mysql-8.0.15]
4.1 -> [mysql-4.1.22]
5.0 -> [mysql-5.0.15 mysql-5.0.96]
5.1 -> [mysql-5.1.72]
5.5 -> [mysql-5.5.61 mysql-5.5.62]
5.6 -> [mysql-5.6.41 mysql-5.6.43]

$ dbdeployer remote get mysql-5.7.25
File /home/msandbox/mysql-5.7.25.tar.xz downloaded

$ dbdeployer unpack mysql-5.7.25.tar.xz
[...]
Renaming directory /home/msandbox/opt/mysql/mysql-5.7.25 to /home/msandbox/opt/mysql/5.7.25

Now we are ready to install TiDB:

$ dbdeployer deploy single tidb3.0.0 --client-from=5.7.25
Creating directory /home/msandbox/sandboxes
Database installed in $HOME/sandboxes/msb_tidb3_0_0
run 'dbdeployer usage single' for basic instructions'
. sandbox server started

Once installed, a TiDB sandbox behaves like a MySQL sandbox.

$ $HOME/sandboxes/msb_tidb3_0_0/use
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.10-TiDB-v3.0.0-beta-111-g266ff4b6f MySQL Community Server (Apache License 2.0)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql [localhost:3000] {msandbox} ((none)) >

Dockerizing Laravel with Nginx MySQL and Docker Compose on Ubuntu 18.04 LTS

In this guide, we're going to show you how to Dockerize a Laravel project with PHP-FPM, a MySQL database, and the Nginx web server using Docker Compose on Ubuntu Server 18.04. We're going to create a new Docker image for the Laravel project, and then create the docker-compose.yml script that contains several services: the app/Laravel itself, the Nginx web server, and the MySQL database.

Dealing with Unreliable Networks When Crafting an HA Solution for MySQL or MariaDB

Long gone are the days when a database was deployed as a single node or instance - a powerful, standalone server which was tasked to handle all the requests to the database. Vertical scaling was the way to go - replace the server with another, even more powerful one. During these times, one didn’t really have to be bothered by network performance. As long as the requests were coming in, all was good.

But nowadays, databases are built as clusters with nodes interconnected over a network. It is not always a fast, local network. With businesses reaching global scale, database infrastructure has also to span across the globe, to stay close to customers and to reduce latency. It comes with additional challenges that we have to face when designing a highly available database environment. In this blog post, we will look into the network issues that you may face and provide some suggestions on how to deal with them.

Two Main Options for MySQL or MariaDB HA

We covered this particular topic quite extensively in one of the whitepapers, but let’s look at the two main ways of building high availability for MySQL and MariaDB.

Galera Cluster

Galera Cluster is a shared-nothing, virtually synchronous cluster technology for MySQL. It allows you to build multi-writer setups that can span the globe. Galera thrives in low-latency environments, but it can also be configured to work with long WAN connections. Galera has a built-in quorum mechanism which ensures that data will not be compromised in case of network partitioning of some of the nodes.

MySQL Replication

MySQL Replication can be either asynchronous or semi-synchronous. Both are designed to build large-scale replication clusters. Like in any other master-slave or primary-secondary replication setup, there can be only one writer, the master. Other nodes, slaves, are used for failover purposes as they contain a copy of the data set from the master. Slaves can also be used for reading the data and offloading some of the workload from the master.

Both solutions have their own limits and features, both suffer from different problems. Both can be affected by unstable network connections. Let’s take a look at those limitations and how we can design the environment to minimize the impact of an unstable network infrastructure.

Galera Cluster - Network Problems

First, let’s take a look at Galera Cluster. As we discussed, it works best in a low-latency environment. One of the main latency-related problems in Galera is the way Galera handles writes. We will not go into all the details in this blog, but you can find further reading in our Galera Cluster for MySQL tutorial. The bottom line is that, due to the certification process for writes, where all nodes in the cluster have to agree on whether a write can be applied or not, your write performance for a single row is strictly limited by the network round-trip time between the writer node and the most distant node. As long as the latency is acceptable and as long as you do not have too many hot spots in your data, WAN setups may work just fine. The problem starts when the network latency spikes from time to time. Writes will then take 3 or 4 times longer than usual and, as a result, databases may start to be overloaded with long-running writes.

One of the great features of Galera Cluster is its ability to detect the cluster state and react to network partitioning. If a node of the cluster cannot be reached, it will be evicted from the cluster and will not be able to perform any writes. This is crucial in maintaining the integrity of the data while the cluster is split - only the majority of the cluster will accept writes; the minority will complain. To handle this, Galera introduces a vast array of checks and configurable timeouts to avoid false alerts on very transient network issues. Unfortunately, if the network is unreliable, Galera Cluster will not be able to work correctly - nodes will start to leave the cluster and rejoin it later. It will be especially problematic when Galera Cluster spans a WAN - separated pieces of the cluster may disappear randomly if the interconnecting network does not work properly.

How to Design Galera Cluster for an Unstable Network?

First things first, if you have network problems within a single datacenter, there is not much you can do unless you are able to solve those issues somehow. An unreliable local network is a no-go for Galera Cluster; you have to consider some other solution (even though, to be honest, an unreliable network will always be problematic). On the other hand, if the problems are related to WAN connections only (and this is one of the most typical cases), it may be possible to replace WAN Galera links with regular asynchronous replication (if the Galera WAN tuning did not help).

There are several inherent limitations in this setup - the main one is that writes which used to happen locally now all have to head to the “master” datacenter (DC A in our case). This is not as bad as it sounds. Please keep in mind that in an all-Galera environment, writes will be slowed down by the latency between nodes located in different datacenters. Even local writes will be affected. It will be more or less the same slowdown as with an asynchronous setup in which you send the writes across the WAN to the “master” datacenter.

Using asynchronous replication comes with all of the problems typical for the asynchronous replication. Replication lag may become a problem - not that Galera would be more performant, it’s just that Galera would slow down the traffic via flow control while replication does not have any mechanism to throttle the traffic on the master.

Another problem is failover: if the “master” Galera node (the one which acts as the master to the slaves in other datacenters) fails, some mechanism has to be in place to repoint the slaves to another, working master node. It might be some sort of a script; it is also possible to try something with a VIP, where the “slave” Galera cluster slaves off a Virtual IP which is always assigned to a live Galera node in the “master” cluster.

The main advantage of such setup is that we do remove the WAN Galera link which means that our “master” cluster will not be slowed down by the fact that some of the nodes are separated geographically. As we mentioned, we lose the ability to write in all of the data-centers but latency-wise writing across the WAN is the same as writing locally to the Galera cluster which spans across WAN. As a result the overall latency should improve. Asynchronous replication is also less vulnerable to the unstable networks. Worst case scenario, the replication link will break and it will be recreated when the networks converge.

How to Design MySQL Replication for an Unstable Network?

In the previous section, we covered Galera Cluster, and one solution was to use asynchronous replication. What does that look like in a plain asynchronous replication setup? Let’s look at how an unstable network can cause the biggest disruptions in a replication setup.

First of all, latency - one of the main pain points for Galera Cluster. In the case of replication, it is almost a non-issue - unless you use semi-synchronous replication, that is; in such a case, increased latency will slow down writes. In asynchronous replication, latency has no impact on write performance. It may, though, have some impact on replication lag. It is not anything as significant as it was for Galera, but you may expect more lag spikes and overall less stable replication performance if the network between nodes suffers from high latency. This is mostly due to the fact that the master may serve several writes before data transfer to the slave can even be initiated on a high-latency network.

Network instability may definitely impact replication links, but it is, again, not that critical. MySQL slaves will attempt to reconnect to their masters and replication will resume.

The main issue with MySQL replication is actually something that Galera Cluster solves internally - network partitioning. We are talking about the network partitioning as the condition in which segments of the network are separated from each other. MySQL replication utilizes one single writer node - master. No matter how you design your environment, you have to send your writes to the master. If the master is not available (for whatever reasons), application cannot do its job unless it runs in some sort of read-only mode. Therefore there is a need to pick the new master as soon as possible. This is where the issues show up.

First, how to tell which host is a master and which one is not? One of the usual ways is to use the “read_only” variable to distinguish slaves from the master. If a node has read_only enabled (SET read_only=1), it is a slave (as slaves should not handle any direct writes). If a node has read_only disabled (SET read_only=0), it is the master. To make things safer, a common approach is to set read_only=1 in the MySQL configuration - in case of a restart, it is safer if the node shows up as a slave. Such a “language” can be understood by proxies like ProxySQL or MaxScale.
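For illustration, this convention translates into something like the following on the database side (a sketch; on MySQL 5.7 and later, super_read_only can additionally block writes from SUPER users):
-- On every slave (and in my.cnf, so a restarted node comes back as a slave):
SET GLOBAL read_only = 1;
-- On the node that has been verified or promoted as the master:
SET GLOBAL read_only = 0;
-- Proxies such as ProxySQL or MaxScale can simply check:
SELECT @@global.read_only;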

Let’s take a look at an example.

We have application hosts which connect to the proxy layer. Proxies perform the read/write split, sending SELECTs to slaves and writes to the master. If the master is down, failover is performed, a new master is promoted, and the proxy layer detects that and starts sending writes to another node.

If node1 restarts, it will come up with read_only=1 and it will be detected as a slave. It is not ideal as it is not replicating but it is acceptable. Ideally, the old master should not show up at all until it is rebuilt and slaved off the new master.

A far more problematic situation is when we have to deal with network partitioning. Let’s consider the same setup: application tier, proxy tier and databases.

When the network makes the master not reachable, the application is not usable as no writes make it to their destination. New master is promoted, writes are redirected to it. What will happen then if the network issues cease and the old master becomes reachable? It has not been stopped, therefore it is still using read_only=0:

You’ve now ended up in a split brain, with writes directed to two nodes. This situation is pretty bad, as merging diverged datasets may take a while and is quite a complex process.

What can be done to avoid this problem? There is no silver bullet, but some actions can be taken to minimize the probability of a split brain happening.

First of all, you can be smarter in detecting the state of the master. How do the slaves see it? Can they replicate from it? Maybe some of the slaves can still connect to the master, meaning that the master is up and running or, at least, making it possible to stop it should that be necessary. What about the proxy layer? Do all of the proxy nodes see the master as unavailable? If some can still connect, then you can try to utilize those nodes to SSH into the master and stop it before the failover.


The failover management software can also be smarter in detecting the state of the network. Maybe it utilizes RAFT or some other clustering protocol to build a quorum-aware cluster. If the failover management software can detect a split brain, it can also take actions based on this, for example setting all nodes in the partitioned segment to read_only, ensuring that the old master will not show up as writable when the networks converge.

You can also include tools like Consul or Etcd to store the state of the cluster. The proxy layer can be configured to use data from Consul, not the state of the read_only variable. It will then be up to the failover management software to make the necessary changes in Consul so that all proxies send the traffic to the correct, new master.

Some of those hints can even be combined together to make the failure detection even more reliable. All in all, it is possible to minimize the chances that the replication cluster will suffer from unreliable networks.

As you can see, no matter whether we are talking about Galera or MySQL Replication, unstable networks may become a serious problem. On the other hand, if you design the environment correctly, you can still make it work. We hope this blog post will help you to create environments which keep working reliably even when the networks are not.

Tags:  MySQL MariaDB high availability ha network

PHP with Vue.js & MySQL: REST API CRUD Tutorial

In this tutorial, we'll build a RESTful CRUD application with PHP in the backend and Vue.js in the frontend. We'll also be using Axios for sending Ajax request to PHP from Vue. The Vue.js library, Axios client and Ajax technology allows you to fetch and display data in your application without the need to refresh the whole page each time. For database we'll be using MySQL, the most popular database used by PHP developers. Creating the MySQL Database In your terminal, start the MySQL client using: mysql -u root -p Enter your password when prompted and hit Enter. Next, create a database using the following SQL statement: mysql> create database vuedb; Next, create the following SQL table in your vuedb database: mysql> use vuedb; mysql> CREATE TABLE `contacts` ( `id` int(11) NOT NULL PRIMARY KEY AUTO_INCREMENT, `name` varchar(100) NOT NULL, `email` varchar(100) NOT NULL, `city` varchar(100), `country` varchar(100), `job` varchar(100) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; Create The PHP & MySQL CRUD App Now, let's create a PHP and MySQL CRUD application. Open a new terminal, navigate to your working directory then create a folder for your project: $ cd ~/demos $ mkdir php-vuejs-crud Next, navigate in your project's folder and add an index.php file: $ cd php-vuejs-crud $ touch index.php Open the index.php file and add the following code: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>PHP| MySQL | Vue.js | Axios Example</title> <script src="https://cdn.jsdelivr.net/npm/vue"></script> <script src="https://unpkg.com/axios/dist/axios.min.js"></script> </head> <body> </body> </html> We first include Vue.js and Axios from their CDNs. Next, in the body of the document, add a <table> to display fetched data: <h1>Contact Management</h1> <div id='vueapp'> <table border='1' width='100%' style='border-collapse: collapse;'> <tr> <th>Name</th> <th>Email</th> <th>Country</th> <th>City</th> <th>Job</th> </tr> <tr v-for='contact in contacts'> <td>{{ contact.name }}</td> <td>{{ contact.email }}</td> <td>{{ contact.country }}</td> <td>{{ contact.city }}</td> <td>{{ contact.job }}</td> </tr> </table> We use the v-for directive to iterate over the contacts array and display each contact. Next, add a <form> tag: </br> <form> <label>Name</label> <input type="text" name="name" v-model="name"> </br> <label>Email</label> <input type="email" name="email" v-model="email"> </br> <label>Country</label> <input type="text" name="country" v-model="country"> </br> <label>City</label> <input type="text" name="city" v-model="city"> </br> <label>Job</label> <input type="text" name="job" v-model="job"> </br> <input type="button" @click="createContact()" value="Add"> </form> </div> We use the v-model directive to bind the input fields to their corresponding variables in the Vue instance we'll be creating next. And we use the @click event to bind the click event of the button to the createContact() method that will be defined in the Vue instance. Next, add a <script> tag and create a Vue app: <script> var app = new Vue({ el: '#vueapp', data: { name: '', email: '', country: '', city: '', job: '', contacts: [] }, mounted: function () { console.log('Hello from Vue!') this.getContacts() }, methods: { getContacts: function(){ }, createContact: function(){ }, resetForm: function(){ } } }) </script> </body> </html> We declared three methods, let's implement them! 
The getContacts() method gets contacts from the PHP endpoint using Axios: getContacts: function(){ axios.get('api/contacts.php') .then(function (response) { console.log(response.data); app.contacts = response.data; }) .catch(function (error) { console.log(error); }); } The createContact() methods creates a new contact in the MySQL database by sending a POST request with Axios and FormData: createContact: function(){ console.log("Create contact!") let formData = new FormData(); console.log("name:", this.name) formData.append('name', this.name) formData.append('email', this.email) formData.append('city', this.city) formData.append('country', this.country) formData.append('job', this.job) var contact = {}; formData.forEach(function(value, key){ contact[key] = value; }); axios({ method: 'post', url: 'api/contacts.php', data: formData, config: { headers: {'Content-Type': 'multipart/form-data' }} }) .then(function (response) { //handle success console.log(response) app.contacts.push(contact) app.resetForm(); }) .catch(function (response) { //handle error console.log(response) }); } The resetForm() method resets the form: resetForm: function(){ this.name = ''; this.email = ''; this.country = ''; this.city = ''; this.job = ''; } Create an API Endpoint Now, let's create an endpoint that provides contacts data in a JSON format to our Vue frontend. Create an api folder inside your project's root folder: $ mkdir api Navigate inside the api folder and create a contacts.php file and add the following content: <?php $host = "localhost"; $user = "root"; $password = "YOUR_MYSQL_DB_PASSWORD"; $dbname = "vuedb"; $id = ''; $con = mysqli_connect($host, $user, $password,$dbname); $method = $_SERVER['REQUEST_METHOD']; $request = explode('/', trim($_SERVER['PATH_INFO'],'/')); //$input = json_decode(file_get_contents('php://input'),true); if (!$con) { die("Connection failed: " . mysqli_connect_error()); } switch ($method) { case 'GET': $id = $_GET['id']; $sql = "select * from contacts".($id?" where id=$id":''); break; case 'POST': $name = $_POST["name"]; $email = $_POST["email"]; $country = $_POST["country"]; $city = $_POST["city"]; $job = $_POST["job"]; $sql = "insert into contacts (name, email, city, country, job) values ('$name', '$email', '$city', '$country', '$job')"; break; } // run SQL statement $result = mysqli_query($con,$sql); // die if SQL statement failed if (!$result) { http_response_code(404); die(mysqli_error($con)); } if ($method == 'GET') { if (!$id) echo '['; for ($i=0 ; $i<mysqli_num_rows($result) ; $i++) { echo ($i>0?',':'').json_encode(mysqli_fetch_object($result)); } if (!$id) echo ']'; } elseif ($method == 'POST') { echo json_encode($result); } else { echo mysqli_affected_rows($con); } $con->close(); Finally, you can serve your PHP application using the following command from the root of your project: $ php -S 127.0.0.1:8080 This is a screenshot of the application, after posting some data using the form: For the same styling, add the following CSS: Conclusion In this tutorial, we've used PHP, MySQL, Vue.js and Axios to create a simple REST API CRUD example application.

Upcoming Webinar Thurs 3/7: Enhancing MySQL Security

Join Percona Support Engineer, Vinicius Grippa, as he presents his talk Enhancing MySQL Security on Thursday, March 7th, 2019 at 7:00 AM PST (UTC-8) / 10:00 AM EST (UTC-5).

Register Now

Security is always a challenge when it comes to data. What’s more, regulations like GDPR add a whole new layer on top of it, with increasingly restrictive rules on accessing and manipulating data. Join us in this presentation to review security best practices, as well as traditional and new features available for MySQL, including features coming with the new MySQL 8.

In this talk, DBAs and sysadmins will walk through the security features available in the OS and MySQL. For instance, these features include:

– OS security
– SSL
– ACL
– TDE
– Audit Plugin
– MySQL 8 features (undo, redo and binlog encryption)
– New caching_sha2_password
– Roles
– Password Management
– FIPS mode

In order to learn more register for this webinar on Enhancing MySQL Security.

How to Upgrade Amazon Aurora MySQL from 5.6 to 5.7

Over time, software evolves and it is important to stay up to date if you want to benefit from new features and performance improvements.  Database engines follow the exact same logic and providers are always careful to provide an easy upgrade path. With MySQL, the mysql_upgrade tool serves that purpose.

A database upgrade process becomes more challenging in a managed environment like AWS RDS where you don’t have shell access to the database host and don’t have access to the SUPER MySQL privilege. This post is a collaboration between Fattmerchant and Percona following an engagement focused on the upgrade of the Fattmerchant database from Amazon Aurora MySQL 5.6 to Amazon Aurora MySQL 5.7. Jacques Fu, the CTO of Fattmerchant, is the co-author of this post.  Our initial plan was to follow a path laid out previously by others but we had difficulties finding any complete and detailed procedure outlining the steps. At least, with this post, there is now one.

Issues with the regular upgrade procedure

How do we normally upgrade a busy production server with minimal downtime?  The simplest solution is to use a slave server with the newer version. Such a procedure has the side benefit of providing a “staging” database server which can be used to test the application with the new version. Basically we need to follow these steps:

  1. Enable replication on the old server
  2. Make a consistent backup
  3. Restore the backup on a second server with the newer database version – it can be a temporary server
  4. Run mysql_upgrade if needed
  5. Configure replication with the old server
  6. Test the application against the new version. If the tests include conflicting writes, you may have to jump back to step 3
  7. If tests are OK and the new server is in sync, replication wise, with the old server, stop the application (only for a short while)
  8. Repoint the application to the new server
  9. Reset the slave
  10. Start the application

If the new server was temporary, you’ll need to repeat most of the steps the other way around, this time starting from the new server and ending on the old one.

What we thought would be a simple task turned out to be much more complicated. We were preparing to upgrade our database from Amazon Aurora MySQL 5.6 to 5.7 when we discovered that there was no option for an in-place upgrade. Unlike a standard AWS RDS MySQL (RDS MySQL upgrade 5.6 to 5.7) at the time of this article you cannot perform an in-place upgrade or even restore a backup across the major versions of Amazon Aurora MySQL.

We initially chose Amazon Aurora for the benefits of the tuning work that AWS provided out of the box, but we realized with any set of pros there comes a list of cons. In this case, the limitations meant that something that should have been straightforward took us off the documented path.

Our original high-level plan

Since we couldn’t use an RDS snapshot to provision a new Amazon Aurora MySQL 5.7 instance, we had to fallback to the use of a logical backup. The intended steps were:

  1. Backup the Amazon Aurora MySQL 5.6 write node with mysqldump
  2. Spin up an empty Amazon Aurora MySQL 5.7 cluster
  3. Restore the backup
  4. Make the Amazon Aurora MySQL 5.7 write node a slave of the Amazon Aurora MySQL 5.6 write node
  5. Once in sync, transfer the application to the Amazon Aurora MySQL 5.7 cluster

Even those simple steps proved to be challenging.

Backup of the Amazon Aurora MySQL 5.6 cluster

First, the Amazon Aurora MySQL 5.6 write node must generate binary log files. The default cluster parameter group that is generated when creating an Amazon Aurora instance does not enable these settings. Our 5.6 write node was not generating binary log files, so we copied the default cluster parameter group to a new “replication” parameter group and changed the “binlog_format” variable to MIXED.  The parameter is only effective after a reboot, so overnight we rebooted the node. That was a first short downtime.
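A quick sanity check after the reboot could look like this (a sketch, not taken verbatim from the original procedure):

SELECT @@binlog_format;   -- should now report MIXED
SHOW MASTER STATUS;       -- lists the current binary log file and position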

At that point, we were able to confirm, using “show master status;”, that the write node was indeed generating binlog files. Since our procedure involves a logical backup and restore, we had to make sure the binary log files are kept long enough. With a regular MySQL server, the variable “expire_logs_days” controls the binary log file retention time. With RDS, you have to use the mysql.rds_set_configuration procedure. We set the retention time to two weeks:

CALL mysql.rds_set_configuration('binlog retention hours', 336);

You can confirm the new setting is used with:

CALL mysql.rds_show_configuration;

For the following step, we needed a mysqldump backup along with its consistent replication coordinates. The --master-data option of mysqldump implies “FLUSH TABLES WITH READ LOCK;” while the replication coordinates are read from the server. A “FLUSH TABLES” requires the SUPER privilege, and this privilege is not available in RDS.

Since we wanted to avoid downtime, it was out of the question to pause the application for the time it would take to back up 100GB of data. The solution was to take a snapshot and use it to provision a temporary one-node Amazon Aurora MySQL 5.6 cluster. As part of the creation process, the events tab of the AWS console shows the binary log file and position consistent with the snapshot; it looks like this:

Consistent snapshot replication coordinates

From there, the temporary cluster is idle, so it is easy to back it up with mysqldump. Since our dataset is large, we considered the use of MyDumper, but the added complexity was not worthwhile for a one-time operation. The dump of a large database can take many hours. Essentially we performed:

mysqldump -h entrypoint-temporary-cluster -u awsrootuser -pxxxx \
  --no-data --single-transaction -R -E -B db1 db2 db3 > schema.sql

mysqldump -h entrypoint-temporary-cluster -nt --single-transaction \
  -u awsrootuser -pxxxx -B db1 db2 db3 | gzip -1 > dump.sql.gz

pt-show-grants -h entrypoint-temporary-cluster -u awsrootuser -pxxxx > grants.sql

The schema consists of three databases: db1, db2 and db3. We have not included the mysql schema because it would cause issues with the new 5.7 instance. You’ll see why we dumped the schema and the data separately in the next section.

Restore to an empty Amazon Aurora MySQL 5.7 cluster

With our backup done, we are ready to spin up a brand new Amazon Aurora MySQL 5.7 cluster and restore the backup. Make sure the new Amazon Aurora MySQL 5.7 cluster is in a subnet with access to the Amazon Aurora MySQL 5.6 production cluster. In our schema, there are a few very large tables with a significant number of secondary keys. To speed up the restore, we removed the secondary indexes of these tables from the schema.sql file and created a restore-indexes.sql file with the list of ALTER TABLE statements needed to recreate them. Then we restored the data using these steps:

cat grants.sql | mysql -h entrypoint-new-aurora-57 -u awsroot -pxxxx
cat schema-modified.sql | mysql -h entrypoint-new-aurora-57 -u awsroot -pxxxx
zcat dump.sql.gz | mysql -h entrypoint-new-aurora-57 -u awsroot -pxxxx
cat restore-indexes.sql | mysql -h entrypoint-new-aurora-57 -u awsroot -pxxxx
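For illustration, the entries in restore-indexes.sql were plain ALTER TABLE statements of this form (the table, column and index names here are hypothetical, not our actual schema):

ALTER TABLE db1.orders
  ADD INDEX idx_customer_id (customer_id),
  ADD INDEX idx_created_at (created_at);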

Configure replication

At this point, we have a new Amazon Aurora MySQL 5.7 cluster provisioned with a dataset at known replication coordinates from the Amazon Aurora MySQL 5.6 production cluster. It is now very easy to set up replication. First, we need to create a replication user in the Amazon Aurora MySQL 5.6 production cluster:

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'%' IDENTIFIED BY 'agoodpassword';

Then, in the new Amazon Aurora MySQL 5.7 cluster, you configure and start replication with:

CALL mysql.rds_set_external_master ('mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com',
  3306, 'repl_user', 'agoodpassword', 'mysql-bin-changelog.000018', 65932380, 0);
CALL mysql.rds_start_replication;

The endpoint mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com points to the Amazon Aurora MySQL 5.6 production cluster.

Now, if everything went well, the new Amazon Aurora MySQL 5.7 cluster will be actively syncing with its master, the current Amazon Aurora MySQL 5.6 production cluster. This process can take a significant amount of time depending on the write load and the type of instance used for the new cluster. You can monitor the progress with the SHOW SLAVE STATUS\G command; the Seconds_Behind_Master field tells you how far behind in seconds the new cluster is compared to the old one. It is not a measurement of how long it will take to resync.
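A minimal sketch of that check on the 5.7 cluster (the field values shown are illustrative):

SHOW SLAVE STATUS\G
-- fields to watch in the output:
--   Slave_IO_Running: Yes
--   Slave_SQL_Running: Yes
--   Seconds_Behind_Master: 1234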

You can also monitor throughput using the AWS console. In this screenshot you can see the replication speeding up over time before it peaks when it is completed.

Replication speed

Test with Amazon Aurora MySQL 5.7

At this point, we have an Amazon Aurora MySQL 5.7 cluster in sync with the production Amazon Aurora MySQL 5.6 cluster. Before transferring the production load to the new cluster, you need to test your application with MySQL 5.7. The easiest way is to snapshot the new Amazon Aurora MySQL 5.7 cluster and, using the snapshot, provision a staging Amazon Aurora MySQL 5.7 cluster. Test your application against the staging cluster and, once tested, destroy the staging cluster and any unneeded snapshots.

Switch production to the Amazon Aurora MySQL 5.7 cluster

Now that you have tested your application with the staging cluster and are satisfied with how it behaves on Amazon Aurora MySQL 5.7, the very last step is to migrate the production load. Here are the last steps you need to follow:

  1. Make sure the Amazon Aurora MySQL 5.7 cluster is still in sync with the Amazon Aurora MySQL 5.6 cluster
  2. Stop the application
  3. Validate that the SHOW MASTER STATUS output of the 5.6 cluster is no longer moving
  4. Validate, from SHOW SLAVE STATUS\G on the 5.7 cluster, that Master_Log_File and Exec_Master_Log_Pos match the SHOW MASTER STATUS output of the 5.6 cluster (see the sketch after this list)
  5. Stop the slave in the 5.7 cluster with CALL mysql.rds_stop_replication;
  6. Reset the slave in the 5.7 cluster with CALL mysql.rds_reset_external_master;
  7. Reconfigure the application to use the 5.7 cluster endpoint
  8. Start the application

The application is down from steps 2 to 8.  Although that might appear to be a long time, these steps can easily be executed within a few minutes.
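A minimal sketch of the validation and cutover commands used in steps 3 to 6 (file names and positions will of course differ in your environment):

-- on the 5.6 cluster: run twice and confirm File and Position are no longer moving
SHOW MASTER STATUS;

-- on the 5.7 cluster: Master_Log_File and Exec_Master_Log_Pos must match the values above
SHOW SLAVE STATUS\G

-- still on the 5.7 cluster: stop and reset replication before switching the application
CALL mysql.rds_stop_replication;
CALL mysql.rds_reset_external_master;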

Summary

So, in summary, although RDS Aurora doesn’t support an in-place upgrade between Amazon Aurora MySQL 5.6 and 5.7, there is a possible migration path that minimizes downtime. In our case, we were able to limit the downtime to only a few minutes.

Co-Author: Jacques Fu, Fattmerchant

 

Jacques is CTO and co-founder at the fintech startup Fattmerchant, author of Time Hacks, and co-founder of the Orlando Devs, the largest developer meetup in Orlando. He has a passion for building products, bringing them to market, and scaling them.

Un-split brain MySQL via gh-mysql-rewind

We are pleased to release gh-mysql-rewind, a tool that allows us to move MySQL back in time, automatically identifying and rewinding split-brain changes and restoring a split-brain server into a healthy replication chain.

I recently had the pleasure of presenting gh-mysql-rewind at FOSDEM. Video and slides are available. Consider following along with the video.

Motivation

Consider a split brain scenario: a "standard" MySQL replication topology suffered network isolation, and one of the replicas was promoted as new master. Meanwhile, the old master was still receiving writes from co-located apps.

Once the network isolation is over, we have a new master and an old master, and a split-brain situation: some writes only took place on one master; others only took place on the other. What if we wanted to converge the two? What paths do we have to, say, restore the old, demoted master, as a replica of the newly promoted master?

The old master is unlikely to agree to replicate from the new master. Changes have been made. AUTO_INCREMENT values have been taken. UNIQUE constraints will fail.
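To make the failure mode concrete, here is a purely illustrative example (the table and ids are hypothetical): if the old master locally consumed an AUTO_INCREMENT value that the new master also handed out, replaying the new master's write on the old master hits a duplicate-key error.

-- illustration only: a co-located app wrote this on the old, isolated master
INSERT INTO orders (id, note) VALUES (5001, 'write taken on the old master');
-- the new master independently assigned id 5001 to its own row, so replaying
-- that row on the old master fails with a duplicate-key error (ER_DUP_ENTRY, 1062)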

A few months ago, we at GitHub had exactly this scenario. An entire data center went network isolated. Automation failed over to a 2nd DC. Masters in the isolated DC meanwhile kept receiving writes. At the end of the failover we ended up with a split brain scenario - which we expected. However, an additional, unexpected constraint forced us to fail back to the original DC.

We had to make a choice: we had already operated for a long time in the 2nd DC and had taken many writes that we were unwilling to lose. We were OK with losing (after auditing) the few seconds of writes on the isolated DC. But how do we converge the data?

Backups are the trivial way out, but they incur long recovery time. Shipping backup data over the network for dozens of servers takes time. Restore time, catching up with changes since backup took place, warming up the servers so that they can handle production traffic, all take time.

Could we have reduced the time to recovery?

There are multiple ways to do that: local backups, local delayed replicas, snapshots... We have embarked on several. In this post I wish to outline gh-mysql-rewind, which programmatically identifies the rogue (aka "bad") transactions on the network isolated master, rewinds/reverts them, applies some bookkeeping and restores the demoted master as a healthy replica under the newly promoted master, thereby prepared to be promoted if needed.

General overview

gh-mysql-rewind is a shell script. It utilizes multiple technologies, some of which do not speak to each other, to be able to do the magic. It assumes and utilizes the following:

  • MySQL GTIDs
  • Row Based Replication with FULL row image
  • MariaDB's mysqlbinlog utility with its --flashback option

Some breakdown follows.

GTID

MySQL GTIDs keep track of all transactions executed on a given server. GTIDs indicate which server (UUID) originated a write, and ranges of transaction sequences. In a clean state, only one writer will generate GTIDs, and on all the replicas we would see the same GTID set, originated with the writer's UUID.

In a split brain scenario, we would see divergence. It is possible to use GTID_SUBTRACT(old_master-GTIDs, new-master-GTIDs) to identify the exact set of transactions executed on the old, demoted master, right after the failover. This is the essence of the split brain.

For example, assume that just before the network partition, GTID on the master was 00020192-1111-1111-1111-111111111111:1-5000. Assume after the network partition the new master has UUID of 00020193-2222-2222-2222-222222222222. It began to take writes, and after some time its GTID set showed 00020192-1111-1111-1111-111111111111:1-5000,00020193-2222-2222-2222-222222222222:1-200.

On the demoted master, other writes took place, leading to the GTID set 00020192-1111-1111-1111-111111111111:1-5042.

We will run...

SELECT GTID_SUBTRACT(
  '00020192-1111-1111-1111-111111111111:1-5042',
  '00020192-1111-1111-1111-111111111111:1-5000,00020193-2222-2222-2222-222222222222:1-200'
);
> '00020192-1111-1111-1111-111111111111:5001-5042'

...to identify the exact set of "bad transactions" on the demoted master.

Row Based Replication

With row based replication, and with FULL image format, each DML (INSERT, UPDATE, DELETE) writes to the binary log the complete row data before and after the operation. This means the binary log has enough information for us to revert the operation.
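A minimal sketch of the binary log settings this relies on (variable names as in MySQL 5.7; FULL is the default row image):

SET GLOBAL binlog_format = 'ROW';
SET GLOBAL binlog_row_image = 'FULL';  -- FULL keeps the complete before/after row data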

Flashback

Developed by Alibaba, flashback has been incorporated in MariaDB. MariaDB's mysqlbinlog utility supports a --flashback flag, which interprets the binary log in a special way. Instead of printing out the events in the binary log in order, it prints the inverted operations in reverse order.

To illustrate, let's assume this pseudo-code sequence of events in the binary log:

insert(1, 'a')
insert(2, 'b')
insert(3, 'c')
update(2, 'b')->(2, 'second')
update(3, 'c')->(3, 'third')
insert(4, 'd')
delete(1, 'a')

A --flashback of this binary log would produce:

insert(1, 'a')
delete(4, 'd')
update(3, 'third')->(3, 'c')
update(2, 'second')->(2, 'b')
delete(3, 'c')
delete(2, 'b')
delete(1, 'a')

Alas, MariaDB and flashback do not speak MySQL GTID language. GTIDs are one of the major points where MySQL and MariaDB have diverged beyond compatibility.

The output of MariaDB's mysqlbinlog --flashback contains no mention of GTIDs, nor does the tool take notice of GTIDs in the binary logs in the first place.

gh-mysql-rewind

This is where we step in. GTIDs provide the information about what went wrong. flashback has the mechanism to generate the reverse sequence of statements. gh-mysql-rewind:

  • uses GTIDs to detect what went wrong
  • correlates those GTID entries with binary log files: identifies which binary logs actually contain those GTID events
  • invokes MariaDB's mysqlbinlog --flashback to generate the reverse of those binary logs
  • injects (dummy) GTID information into the output
  • computes ETA

This last part is worth elaborating on. We have created a time machine. We have the mechanics to make it work. But as any sci-fi fan knows, one of the most important parts of time travel is knowing ahead of time where (and when) you are going to land. Are you back in the Renaissance? Or do you suddenly appear in the middle of the French Revolution? Better dress accordingly.

In our scenario it is not enough to move MySQL back in time to some consistent state. We want to know at what time we landed, so that we can instruct the rewound server to join the replication chain as a healthy replica. In MySQL terms, we need to make MySQL "forget" everything that ever happened after the split brain: not only in terms of data (which we already did), but in terms of GTID history.

gh-mysql-rewind will do the math to project, ahead of time, at what "time" (i.e. GTID set) our time machine will arrive. It will issue "RESET MASTER; SET GLOBAL gtid_purged='gtid-of-the-landing-time'" to make our rewound MySQL consistent not only with some past dataset, but also with its own perception of the point in time where that dataset existed.
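Using the illustrative GTID sets from the example above, the bookkeeping would look roughly like this (a sketch, not the tool's literal output): after rewinding transactions 5001-5042, the demoted master should believe it has executed only the transactions both sides agree on.

-- run on the rewound (old) master; GTID set taken from the example above
RESET MASTER;
SET GLOBAL gtid_purged = '00020192-1111-1111-1111-111111111111:1-5000';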

Limitations

Some limitations are due to MariaDB's incompatibility with MySQL, some are due to the nature of MySQL DDL, and some are due to the fact that gh-mysql-rewind is a shell script.

  • Cannot rewind DDL. DDLs are silently ignored and will pose a problem when trying to re-apply them.
  • JSON, POINT data types are not supported.
  • The logic rewinds the MySQL server farther into the past than strictly required. This simplifies the code considerably, but imposes superfluous time to rewind and reapply, i.e. time to recover.
  • Currently, this only works one server at a time. If a group of 10 servers were network isolated together, the operation would need to run on each of these 10 servers.
  • Runs locally on each server. Requires both MySQL's mysqlbinlog as well as MariaDB's mysqlbinlog.

Testing

There are a lot of moving parts to this mechanism: a mixture of technologies that don't normally speak to each other, injection of data, prediction of ETA... How reliable is all this?

We run continuous gh-mysql-rewind testing in production to consistently prove that it works as expected. Our testing uses a non-production, dedicated, functional replica. It contaminates the data on the replica, lets gh-mysql-rewind automatically move it back in time, and joins the replica back into the healthy chain.

That's not enough. We actually create a scenario where we can predict, ahead of testing, what the time-of-arrival will be. We checksum the data on that replica at that time. After contaminating and effectively breaking replication, we expect gh-mysql-rewind to revert the changes back to our predicted point in time. We checksum the data again. We expect 100% match.
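A minimal sketch of the kind of verification this implies (the table names are hypothetical; the actual test harness is internal):

-- before contaminating the replica, record checksums at the predicted point in time
CHECKSUM TABLE db1.t1, db1.t2;
-- after gh-mysql-rewind has moved the replica back, the same statement must
-- return identical checksums for a 100% match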

See the video or slides for more detail on our testing setup.

Status

At this time, the tool is one of several solutions we hope to never need to employ. It is stable and tested. We are looking forward to a promising MySQL development that will provide GTID-revert capabilities using standard commands, such as SELECT undo_transaction('00020192-1111-1111-1111-111111111111:5042').

We have released gh-mysql-rewind as open source, under the MIT license. The public release is a stripped-down version of our own script, which has some GitHub-specific integration. We have general ideas about incorporating this functionality into higher-level tools.

gh-mysql-rewind is developed by the database-infrastructure team at GitHub.
