Planet MySQL

mysql database backup shell script with status email

This post presents a backup shell script for MySQL databases on Linux. The backup shell script works as follows: – It takes a backup using mysqldump and compresses it. – Upon success, it attempts to ship the backup to the specified offsite location. – Upon detecting failure in any of the above steps, it will […]
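The dump-compress-ship flow described above can be sketched in Python. This is only a sketch of the approach, not the post's actual script: the output directory, offsite host, and filename pattern below are placeholder assumptions.

```python
import datetime
import shlex

def backup_pipeline(db, user, outdir="/var/backups/mysql"):
    """Build the two commands for the flow described in the post:
    (1) dump + compress, (2) ship offsite on success.
    Paths and the offsite host are placeholders, not from the original script."""
    stamp = datetime.date.today().isoformat()
    dump_file = f"{outdir}/{db}-{stamp}.sql.gz"
    # mysqldump piped through gzip, as the post describes
    dump_cmd = f"mysqldump -u {shlex.quote(user)} {shlex.quote(db)} | gzip > {dump_file}"
    # ship the compressed dump to a hypothetical offsite host
    ship_cmd = f"scp {dump_file} backup@offsite.example.com:/backups/"
    return dump_cmd, ship_cmd
```

On failure of either step, the post's script would send a status email (e.g. via the `mail` utility) instead of continuing.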

Upcoming Webinar Tues 4/9: MySQL 8.0 Architecture and Enhancement

Please join Percona’s Bug Analyst, Lalit Choudhary as he presents his talk MySQL 8.0 Architecture and Enhancement on Tuesday, April 9th, 2019 at 6:00 AM PDT (UTC-7) / 9:00 AM EDT (UTC-4).

Register Now

The release of MySQL 8.0 offers much more to users compared to previous releases. There are major changes in architecture, as well as new differentiating features and improvements for managing the database more efficiently.

In our talk, we will go through the MySQL 8.0 architecture:
* On Disk
* In-memory

Examples and use cases will be shared showing the new features introduced in MySQL 8.0.

Register for MySQL 8.0 Architecture and Enhancement to learn more.

Community Coding For The Web

Open source barriers

Right now, it's too hard to contribute to open source.

This leads to a few things that are bad:

  1. People who have good ideas don't contribute
  2. Open source maintainers are overworked
  3. There's less useful open source software than there could be

We want to improve this, so:

We're building a community for web developers to build & share open source modules that work on any website.

There should be a place where you don't need anything "extra" to work on open source: no git, webpack, gems, repos, terminal commands, or non-code tools. If you know the code itself, you should be able to contribute directly without learning special commands.

We're working our way toward this with the first ever online code editor that compiles modules to work on any website:

More than a playground

There are a lot of code "playgrounds" out there for building demos or proofs of concept, but that can be pretty limiting. With the new approach, you can click a button on any module, and that brings up code to add the actual module to your website. Not a sandboxed or iframe version -- the actual, working module. The code looks like this:

<html>
  <head>
    <!-- Paste right before your document </head> -->
    <script id="Anymod-script">
    (function (m,o,d,u,l,a,r,i,z,e) { u[m]={Project:o,rq:[],Opts:r,ready:function(j){u[m].rq.push(j)}};function j(s){return encodeURIComponent(btoa(s))};z=l.getElementById(m+'-'+a);r=u.location; e=[d+'/page/'+o+'/'+j(r.pathname)+'/'+j(r.host)+'?t='+Date.now(),d];e.map(function(w){i=l.createElement(a);i.defer=1;i.src=w;z.parentNode.insertBefore(i,z);}); })('Anymod','855EM8','https://cdn.anymod.com/v2',window,document,'script',{ toolkit: true, tips: true, priority: 3 });
    </script>
  </head>
  <body>
    <!-- Paste where you want the module: Animate CSS grid -->
    <div id="anymod-lllddn"></div>
  </body>
</html>

A world of possibilities

Our hope is that an open platform for normal web developers to find, modify, and use modules will help us all to create a huge ecosystem of free, open-source software that works on any website.

Right now there is a library of hundreds of "verified" modules that are ready to use, with thousands more that have been created by developers.

From pre-styled page sections to forms, modular CMS, galleries, team pages, ecommerce, and more, there are lots of ready-to-use modules already available.

Join the community

You can find, clone, and use modules on the platform, and we are building even more tools to help with collaboration.

Soon you will be able to see which modules are looking for contributors, make changes in the browser, and have your updates merged into the original. Both the original author and you will then be listed on the module, and you'll be able to see where it's being used.

Our hope is that we can make open source participation a real option for all web developers by lowering the barriers to contribution.

If you're interested in collaborating, using cool modules, or simply claiming your username, we'd love to have you!

https://anymod.com

Create PHP 7 MySQL Database Tables Using MySQLi & PDO

In this tutorial we'll learn how to use MySQLi and PDO to create MySQL database tables in PHP 7. You can use the CREATE DATABASE SQL instruction to create a database in your MySQL client, so let's start by creating a database. Open a new terminal and run the following command:

$ mysql -u root -p

Enter your MySQL instance password when prompted.

Note: The official tool for working with MySQL is the mysql client, which is installed when you install MySQL on your machine. The MySQL client can be used through your terminal as a CLI tool.

Next, run the following command to create a MySQL database:

mysql> create database mydb;

That's it! We now have a database. Let's now see how you can create a MySQL table using PHP, MySQLi and PDO. The mysqli extension is a relational database driver that allows you to access the functionality provided by MySQL 4.1 and above in PHP. It stands for MySQL Improved.

Creating a MySQL Table in PHP Using MySQLi

Let's start with the MySQLi extension. Create a server.php file and add the following variables:

<?php
$server = "localhost";
$dbuser = "root";
$dbpassword = "YOUR_DATABASE_PASSWORD";
$dbname = "mydb";

Note: Make sure to change your database user and password accordingly.

Next, create a connection to your MySQL database using the following code:

$connection = new mysqli($server, $dbuser, $dbpassword, $dbname);
if ($connection->connect_error) {
    die("Connection error: " . $connection->connect_error);
}

Next, create a SQL query to create the database table called contacts:

$sqlQuery = "CREATE TABLE contacts (
    id INT(11) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    firstName VARCHAR(35) NOT NULL,
    lastName VARCHAR(35) NOT NULL,
    email VARCHAR(55)
)";

Next, run the SQL query using the following code:

if ($connection->query($sqlQuery) === TRUE) {
    echo "Table created successfully!";
} else {
    echo "Error creating SQL table: " . $connection->error;
}

Finally, close your database connection using the following code:

$connection->close();
?>

Using PDO to Create a MySQL Database Table in PHP

PDO stands for PHP Data Object. It's a set of PHP extensions that provide a core PDO class and database drivers for major database systems. You can also use PDO for connecting to MySQL and creating a database table:

<?php
$server = "localhost";
$dbuser = "root";
$dbpassword = "YOUR_DATABASE_PASSWORD";
$dbname = "mydb";

try {
    $connection = new PDO("mysql:host=$server;dbname=$dbname", $dbuser, $dbpassword);
    $connection->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $sqlQuery = "CREATE TABLE contacts (
        id INT(11) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        firstName VARCHAR(35) NOT NULL,
        lastName VARCHAR(35) NOT NULL,
        email VARCHAR(55)
    )";
    $connection->exec($sqlQuery);
    echo "Table created successfully!";
} catch (PDOException $e) {
    echo $sqlQuery . "<br>" . $e->getMessage();
}

$connection = null;
?>

Conclusion

In this quick post, you have seen how you can create a MySQL database table in PHP using the MySQLi extension and PDO.

PHP PDO Tutorial: CRUD Example with MySQL

PDO stands for PHP Data Object, and it's an extension that provides an interface for communicating with many popular database systems such as MySQL, Oracle, PostgreSQL and SQLite. It has been available since PHP 5.1. Since PDO abstracts away the differences between various database management systems, you only need to change the database information in your code in order to change the database system used in your PHP application.

Setting up PDO

PDO is included by default starting with PHP 5.1, but you need to enable the necessary database driver in the php.ini file:

extension=pdo.so
extension=pdo_mysql.so

Creating a MySQL Database

Let's start by creating a MySQL database using the mysql client. In your terminal, run the following command:

$ mysql -u root -p

Enter your MySQL database password when prompted. Next, run the following SQL instruction to create a database:

mysql> create database mydb;

That's it! We now have a database to work with.

Creating a Database Table

Next, let's create a database table.
First, select your mydb database using:

mysql> use mydb;

Next, run the following SQL instruction to create a contacts table (the id column is an auto-increment primary key so rows can be inserted without supplying an id):

mysql> CREATE TABLE `contacts` (
    `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `name` varchar(30) NOT NULL,
    `email` varchar(50) NOT NULL
);

Connecting to the Database Using PDO

Let's start by creating a folder for our project:

$ mkdir php-pdo-example

Next, navigate to your project's folder and create the index.php and db.php files:

$ cd php-pdo-example
$ touch index.php
$ touch db.php

Open the db.php file and add the following class that allows you to connect to your MySQL database:

class DB {
    protected $conn = null;

    public function Open() {
        try {
            $dsn = "mysql:dbname=mydb;host=localhost";
            $user = "<YOUR_DATABASE_USER>";
            $password = "<YOUR_DATABASE_PASSWORD>";
            $options = array(
                PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
                PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
            );
            $this->conn = new PDO($dsn, $user, $password, $options);
            return $this->conn;
        } catch (PDOException $e) {
            echo 'Connection error: ' . $e->getMessage();
        }
    }

    public function Close() {
        $this->conn = null;
    }
}

In our DB class we first define a protected $conn variable that will hold the PDO instance. Next, we define the Open() and Close() methods, which are used to open and close the connection to the database.

Next, open the index.php file and include the db.php file:

include 'db.php';

try {
    $db = new DB();
    $conn = $db->Open();
    if ($conn) {
        echo 'connected';
    } else {
        echo $conn;
    }
} catch (PDOException $ex) {
    echo $ex->getMessage();
}

We include the db.php file, create an instance of the DB class, and finally call the Open() method of the DB instance.

Running Database SQL Queries

After connecting to the database, we can now run SQL queries.

Creating a Contact: SQL Insert

Let's start by adding the code to create a contact in the database by running a SQL insert query.
Open the index.php file and update it accordingly:

include 'db.php';

try {
    $db = new DB();
    $conn = $db->Open();
    if ($conn) {
        $query = "INSERT INTO `contacts`(`name`, `email`) VALUES ('Contact 001','contact001@email.com')";
        $conn->query($query);
    } else {
        echo $conn;
    }
} catch (PDOException $ex) {
    echo $ex->getMessage();
}

Reading Data: SQL Select

Next, let's add the code for reading data from the database table. Create a read.php file and add the following code:

include 'db.php';

try {
    $db = new DB();
    $conn = $db->Open();
    if ($conn) {
        $query = "SELECT * FROM contacts";
        $result = $conn->query($query);
        foreach ($result as $row) {
            echo $row['name'] . "<br>";
            echo $row['email'] . "<br>";
        }
    } else {
        echo $conn;
    }
} catch (PDOException $ex) {
    echo $ex->getMessage();
}

You can also create update and delete operations using the following SQL queries:

$query = "UPDATE `contacts` SET `email` = 'contact002@email.com' WHERE `id` = 1";
$query = "DELETE FROM `contacts` WHERE `id` = 1";

Conclusion

In this quick tutorial we have seen how to create CRUD operations against a MySQL database using PDO and PHP.

Django 2 Tutorial & Example: Build a CRUD REST API for A Simple CRM

In this tutorial series, you'll learn about Django 2 by creating a CRUD example application with database, admin access, and REST API views. We'll be using MySQL as the database system. Throughout this beginner's tutorial for Django 2.0, we are going to learn to build web applications with Python and Django. This tutorial assumes no prior experience with Django, so we'll be covering the basic concepts and elements of the Django framework by emphasizing essential theory with practice. Basically, we are going to learn fundamental Django concepts while building a simple CRM web application. This tutorial doesn't only cover the fundamentals of Django but also advanced concepts such as how to use and integrate Django with modern front-end frameworks like Angular 2+, Vue and React. You'll learn about CRUD, the database ORM, and how to create API views and URLs.

What's Django?

Django is an open source Python-based web framework for building web applications quickly. It's a pragmatic framework designed for developers working on projects with strict deadlines. It's perfect for quickly creating prototypes and then continuing to build them after client approval. It follows a Model View Controller (MVC) design pattern. Django uses the Python language, a general purpose, powerful and feature-rich programming language.

What's MVC?

MVC is a software architectural design pattern which encourages the separation of concerns and effective collaboration between designers and developers working on the same project. It divides your app into three parts:

Model: responsible for data storage and management,
View: responsible for representing and rendering the user interface or view,
Controller: responsible for handling the logic to control the user interface and work with the data model.
Thanks to MVC, you as a developer can work on the model and controller parts without being concerned with the user interface (left to designers), so if anything changes on the designers' side of the user interface, you can rest assured that you will not be affected.

Introduction to Python

Python is a general purpose programming language that's suitable for developing all kinds of applications, including web applications. Python is known for its clean syntax and a large standard library which contains a wide range of modules that developers can use to build their applications instead of reinventing the wheel. Here is a list of features and characteristics of Python:

Python is an object-oriented language just like Java or C++. Also like Java, Python is an interpreted language that runs on top of its own virtual machine, which makes it a portable language that can run across every machine and operating system such as Linux, Windows and macOS.

Python is especially popular among the scientific community, where it's used for creating numeric applications.

Python is also known for the performance of its runtime environment, which makes it a good alternative to PHP for developing web applications.

For more information you can head to http://python.org/ where you can also download Python binaries for supported systems. For Linux and macOS, Python is included by default so you don't have to install it. For Windows, just head over to the official Python website and grab your installer. Just like any normal Windows program, the installation process is easy and straightforward.

Why Use Django?

Due to its popularity and large community, Python has numerous web frameworks, among them Django. So what makes Django the right choice for you or your next project?
Django is a batteries-included framework

Django includes a set of batteries that can be used to solve common web problems without reinventing the wheel, such as: the sites framework, the auth system, forms generation, an ORM for abstracting database systems, a very powerful templating engine, a caching system, an RSS generation framework, etc.

The Django ORM

Django has a powerful ORM (Object-Relational Mapper) which allows developers to use Python OOP classes and methods instead of SQL tables and queries to work with SQL-based databases. Thanks to the Django ORM, developers can work with any database system such as MySQL or PostgreSQL without knowing anything about SQL. At the same time, the ORM doesn't get in the way: you can write custom SQL anytime you want, especially if you need to optimize the queries against your database server for increased performance.

Support for Internationalization: i18n

You can use Django for writing web applications in languages other than English with a lot of ease thanks to its powerful support for internationalization, and you can also create multilingual websites.

The Admin Interface

Django is a very suitable framework for quickly building prototypes thanks to its auto-generated admin interface. With a few lines of code you can generate a full-fledged admin application that can be used to do all sorts of CRUD operations against the database models you have registered with the admin module.

Community and Extensive Documentation

Django has a great community that has contributed all sorts of awesome things to Django, from tutorials and books to reusable open source packages that extend the core framework with solutions for even more web development problems, so you don't have to reinvent the wheel or waste time implementing what other developers have already created. Django also has some of the most extensive and useful documentation on the web, which can get you up and running with Django in no time.
As a conclusion, if you are looking for a web framework full of features that makes building web applications fun and easy, and that has everything you can expect from a modern framework, Django is the right choice for you if you are a Python developer.

Python is a portable programming language that can be used anywhere its runtime environment is installed. Django is a Python framework which can be installed on any system which supports the Python language. In this tutorial part, we are going to see how to install Python and Django on the major available operating systems, i.e. Windows, Linux and macOS.

Installing Python

Depending on your operating system, you may or may not need to install Python. On Linux and macOS, Python is included by default. You may only need to update it if the installed version is outdated.

Installing Python on Windows

Python is not installed by default on Windows, so you'll need to grab the official installer from the official Python website at http://www.python.org/download/. Next, launch the installer and follow the wizard to install Python just like any other Windows program. Also make sure to add the Python root folder to the system path environment variable so you can execute the Python executable from any directory using the command prompt. Next, open a command prompt and type python. You should be presented with a Python interactive shell printing the current version of Python and prompting you to enter your Python commands (Python is an interpreted language).

Installing Python on Linux

If you are using a Linux system, there is a great chance that you already have Python installed, but you may have an old version. In this case you can very easily update it via your terminal, depending on your Linux distribution. For Debian-based distributions like Ubuntu, you can use the apt package manager:

$ sudo apt-get install python

This will update your Python version to the latest available version.
For other Linux distributions, you should look for the equivalent commands to install or update Python, which is not a daunting task: if you already use a package manager to install packages for your system, you can follow the same process to install or update Python.

Installing Python on macOS

Just like Linux, Python is included by default on macOS, but in case you have an old version you should be able to update it by going to http://www.python.org/download/mac/ and grabbing a Python installer for macOS.

Now, if you managed to install or update Python on your own system, or in case you have verified that you already have an updated version of Python installed, let's continue by installing Django.

Installing PIP

PIP is a Python package manager which is used to install Python packages from the Python Package Index. It is more advanced than easy_install, the default Python package manager that's installed when you install Python. You should use PIP instead of easy_install whenever you can, but for installing PIP itself you should use easy_install. So let's first install PIP. Open your terminal and enter:

$ sudo easy_install pip

You can now install Django on your system using pip:

$ sudo pip install django

While you can do this to install Django globally on your system, it's strongly not recommended. Instead, you should use a virtual environment to install packages.

Creating a MySQL Database

We'll be using a MySQL database. In your terminal, invoke the mysql client using the following command:

$ mysql -u root -p

Enter your MySQL password and hit Enter. Next, run the following SQL statement to create a database:

mysql> create database crmdb;

Creating a Virtual Environment

Let's start our tutorial by creating a virtual environment.
Open a new terminal, navigate to a working folder and run the following commands:

$ cd ~/demos
$ python3 -m venv .env

Next, activate the virtual environment using the following command:

$ source .env/bin/activate

Installing Django and Django REST Framework

Now that you have created and activated your virtual environment, you can install your Python packages using pip. In the terminal where you have activated the virtual environment, run the following commands to install the necessary packages:

$ pip install django
$ pip install djangorestframework

You will also need to install the MySQL client for Python using pip:

$ pip install mysqlclient

Creating a Django Project

Now, let's proceed to creating our Django project. In your terminal, run the following command:

$ django-admin startproject simplecrm

This command will take care of creating a bunch of necessary files for the project. Executing the tree command in the root of our created project will show us the files that were created:

.
├── simplecrm
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py

__init__.py is the Python way to mark the containing folder as a Python package, which means a Django project is a Python package.

settings.py is the project configuration file. You can use this file to specify every configuration option of your project, such as the installed apps, site language and database options, etc.

urls.py is a special Django file which maps all your web app URLs to the views.

wsgi.py is necessary for starting a WSGI application server.

manage.py is another Django utility to manage the project, including creating the database and starting the local development server.

These are the basic files that you will find in every Django project. Now the next step is to set up and create the database.
Next, open the settings.py file and update the database setting to point to our crmdb database:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'crmdb',
        'USER': 'root',
        'PASSWORD': 'YOUR_DB_PASSWORD',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}

Next, add rest_framework to the INSTALLED_APPS array:

INSTALLED_APPS = [
    # [...]
    'rest_framework'
]

Finally, migrate the database using the following commands:

$ cd simplecrm
$ python manage.py migrate

Your application will be accessible from the 127.0.0.1:8000 address once the development server is running.

Create an Admin User

Let's create an admin user using the following command:

$ python manage.py createsuperuser

Creating a Django Application

Next, let's create a Django application for encapsulating our core CRM functionality. In your terminal, run the following command:

$ python manage.py startapp crmapp

Next, you need to add it to the settings.py file:

INSTALLED_APPS = [
    # ...
    'rest_framework',
    'crmapp'
]

Creating the Database Models

Let's now proceed to create the database models for our application. We are going to create the following models:

Contact
Account
Activity
ContactStatus
ContactSource
ActivityStatus

We have three main models: Contact, Account and Activity. The last three models are simply lookup tables (they can be replaced by an enum).
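As a sketch of the enum alternative mentioned above, a plain Python Enum can generate the (value, label) pairs that a Django CharField's choices= argument expects. The status names here are hypothetical, not taken from the tutorial's data:

```python
from enum import Enum

class ContactStatusChoice(Enum):
    # Hypothetical statuses; the tutorial stores these in a lookup table instead
    NEW = "NEW"
    QUALIFIED = "QUALIFIED"
    LOST = "LOST"

    @classmethod
    def choices(cls):
        # Django-style (value, label) tuples, usable as CharField(choices=...)
        return [(member.value, member.value) for member in cls]
```

This trades the flexibility of editing statuses at runtime (a lookup table) for simpler code with no extra table or foreign key.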
Open the crmapp/models.py file and add the following code:

from django.db import models
from django.contrib.auth.models import User

INDCHOICES = (
    ('FINANCE', 'FINANCE'),
    ('HEALTHCARE', 'HEALTHCARE'),
    ('INSURANCE', 'INSURANCE'),
    ('LEGAL', 'LEGAL'),
    ('MANUFACTURING', 'MANUFACTURING'),
    ('PUBLISHING', 'PUBLISHING'),
    ('REAL ESTATE', 'REAL ESTATE'),
    ('SOFTWARE', 'SOFTWARE'),
)

class Account(models.Model):
    name = models.CharField("Name of Account", "Name", max_length=64)
    email = models.EmailField(blank=True, null=True)
    phone = models.CharField(max_length=20, blank=True, null=True)
    industry = models.CharField("Industry Type", max_length=255, choices=INDCHOICES, blank=True, null=True)
    website = models.URLField("Website", blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    createdBy = models.ForeignKey(User, related_name='account_created_by', on_delete=models.CASCADE)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)
    isActive = models.BooleanField(default=False)

    def __str__(self):
        return self.name

class ContactSource(models.Model):
    status = models.CharField("Contact Source", max_length=20)

    def __str__(self):
        return self.status

class ContactStatus(models.Model):
    status = models.CharField("Contact Status", max_length=20)

    def __str__(self):
        return self.status

class Contact(models.Model):
    first_name = models.CharField("First name", max_length=255, blank=True, null=True)
    last_name = models.CharField("Last name", max_length=255, blank=True, null=True)
    account = models.ForeignKey(Account, related_name='lead_account_contacts', on_delete=models.CASCADE, blank=True, null=True)
    email = models.EmailField()
    phone = models.CharField(max_length=20, blank=True, null=True)
    address = models.TextField(blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    createdBy = models.ForeignKey(User, related_name='contact_created_by', on_delete=models.CASCADE)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)
    isActive = models.BooleanField(default=False)

    def __str__(self):
        return self.first_name

class ActivityStatus(models.Model):
    status = models.CharField("Activity Status", max_length=20)

    def __str__(self):
        return self.status

class Activity(models.Model):
    description = models.TextField(blank=True, null=True)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)
    contact = models.ForeignKey(Contact, on_delete=models.CASCADE, blank=True, null=True)

    def __str__(self):
        return self.description

Creating Model Serializers

After creating the models we need to create the serializers. In the crmapp folder, create a serializers.py file:

$ cd crmapp
$ touch serializers.py

Next, open the file and add the following imports:

from rest_framework import serializers
from .models import Account, Activity, ActivityStatus, Contact, ContactSource, ContactStatus

Next, add a serializer class for each model:

class AccountSerializer(serializers.ModelSerializer):
    class Meta:
        model = Account
        fields = "__all__"

class ActivitySerializer(serializers.ModelSerializer):
    class Meta:
        model = Activity
        fields = "__all__"

class ActivityStatusSerializer(serializers.ModelSerializer):
    class Meta:
        model = ActivityStatus
        fields = "__all__"

class ContactSerializer(serializers.ModelSerializer):
    class Meta:
        model = Contact
        fields = "__all__"

class ContactSourceSerializer(serializers.ModelSerializer):
    class Meta:
        model = ContactSource
        fields = "__all__"

class ContactStatusSerializer(serializers.ModelSerializer):
    class Meta:
        model = ContactStatus
        fields = "__all__"

Creating API Views

After creating the model serializers, let's now create the API views.
Open the crmapp/views.py file and add the following imports:

from rest_framework import generics
from .models import Account, Activity, ActivityStatus, Contact, ContactSource, ContactStatus
from .serializers import AccountSerializer, ActivitySerializer, ActivityStatusSerializer, ContactSerializer, ContactSourceSerializer, ContactStatusSerializer

Next, add the following views:

class AccountAPIView(generics.ListCreateAPIView):
    queryset = Account.objects.all()
    serializer_class = AccountSerializer

class ActivityAPIView(generics.ListCreateAPIView):
    queryset = Activity.objects.all()
    serializer_class = ActivitySerializer

class ActivityStatusAPIView(generics.ListCreateAPIView):
    queryset = ActivityStatus.objects.all()
    serializer_class = ActivityStatusSerializer

class ContactAPIView(generics.ListCreateAPIView):
    queryset = Contact.objects.all()
    serializer_class = ContactSerializer

class ContactStatusAPIView(generics.ListCreateAPIView):
    queryset = ContactStatus.objects.all()
    serializer_class = ContactStatusSerializer

class ContactSourceAPIView(generics.ListCreateAPIView):
    queryset = ContactSource.objects.all()
    serializer_class = ContactSourceSerializer

After creating these models, you need to create migrations using the following command:

$ python manage.py makemigrations

Next, you need to migrate your database using the following command:

$ python manage.py migrate

Creating API URLs

Let's now create the API URLs to access our API views.
Open the urls.py file and add the following imports:

from django.contrib import admin
from django.urls import path
from crmapp import views

Next, add the following content:

urlpatterns = [
    path('admin/', admin.site.urls),
    path(r'accounts', views.AccountAPIView.as_view(), name='account-list'),
    path(r'contacts', views.ContactAPIView.as_view(), name='contact-list'),
    path(r'activities', views.ActivityAPIView.as_view(), name='activity-list'),
    path(r'activitystatuses', views.ActivityStatusAPIView.as_view(), name='activity-status-list'),
    path(r'contactsources', views.ContactSourceAPIView.as_view(), name='contact-source-list'),
    path(r'contactstatuses', views.ContactStatusAPIView.as_view(), name='contact-status-list')
]

Starting the Local Development Server

Django has a local development server that can be used while developing your project. It's a simple and primitive server which is suitable only for development, not for production. To start the local server for your project, you can simply issue the following command inside your project root directory:

$ python manage.py runserver

Next, navigate to the http://localhost:8000/ address with a web browser. You should see a web page with the message: It worked!

Conclusion

To conclude this tutorial, let's summarize what we have done: we created a new Django project, created and migrated a MySQL database, built a simple CRM REST API with Django REST framework, and started a local development server.
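Once the development server is running, the list/create endpoints registered above can be exercised with the standard library's urllib. This is a sketch; the root URL is the Django dev server's default address, which is an assumption about your setup:

```python
import json
from urllib.request import Request

API_ROOT = "http://localhost:8000"  # Django dev server default (assumption)

def list_request(resource):
    """Build a GET request for one of the ListCreateAPIView routes, e.g. 'contacts'."""
    return Request(f"{API_ROOT}/{resource}", headers={"Accept": "application/json"})

def create_request(resource, data):
    """Build a POST request; the same ListCreateAPIView handles creation."""
    body = json.dumps(data).encode()
    return Request(f"{API_ROOT}/{resource}", data=body,
                   headers={"Content-Type": "application/json"}, method="POST")
```

With the server up, `urllib.request.urlopen(list_request("contacts"))` would return the JSON list of contacts created through the API.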

Angular 8|7 CRUD Tutorial: Python|Django REST API

Angular 8 is released! Read about its new features and how to update Angular 7 to v8. This tutorial is designed for developers that want to use Angular 8|7 to build front-end apps for their back-end REST APIs.

Note: Check out how to build a developer's portfolio web application with Angular 7.1, Firebase and Firestore from these series: Angular 7|6 Tutorial Course: CLI, Components, Routing & Bootstrap 4; Angular 7|6 Tutorial Course: Angular NgModules (Feature and Root Modules); Angular 7|6 Tutorial Course: Nested Router-Outlet, Child Routes & forChild(); Angular 7|6 Tutorial Course: Authentication with Firebase (Email & Password); Angular 7|6 Tutorial Course: Securing the UI with Router Guards and UrlTree Parsed Routes.

You will see by example how to build a CRUD REST API with Angular and Python. The new features of Angular 7 include better performance, powerful new CLI additions and a new way to inject services. Throughout this tutorial, designed for beginners, you'll learn Angular by example by building a full-stack CRUD (Create, Read, Update and Delete) web application using the latest version of the most popular framework and platform for building mobile and desktop client-side applications (also called SPAs or Single Page Applications), created and used internally by Google. On the back-end we'll use Python with Django, the most popular pythonic web framework, designed for perfectionists with deadlines. In a nutshell, you'll learn to generate Angular 8 apps, generate components and services, and add routing. You'll also learn to use various features such as HttpClient for sending AJAX requests and HTTP calls, and subscribing to RxJS 6 Observables, etc.
By the end of this Angular tutorial, you'll learn, by building a real world example application:

How to install the latest version of Angular CLI
How to use Angular CLI to generate a new Angular 8 project
How to use Angular to build a simple CRM application
What a component is and what component-based architecture means
How to use RxJS 6 Observables and operators (map(), filter() etc.)
How to create Angular components
How to add component routing and navigation
How to use HttpClient to consume a REST API

Prerequisites

You will need the following prerequisites in order to follow this tutorial:

A Python development environment. We use an Ubuntu system with Python 3.7 and pip installed, but you can follow these instructions on a different system as long as you have Python 3 and pip installed. Also, the commands shown here are bash commands, which are available on Linux-based systems and macOS; if you use Windows CMD or PowerShell, make sure to use the equivalent commands or install bash for Windows.
Node.js and npm installed on your system. They are required by Angular CLI.
The MySQL database management system installed on your system, since we'll be using a MySQL database in our application. If you don't want to deal with MySQL installation, you can also use SQLite (a file database that doesn't require any installation), which is configured by default in your Django project.
Familiarity with Python, Django and JavaScript (TypeScript).

If you meet these requirements, you are good to go!

Creating the Django REST API

Please check out: Django 2 Tutorial & Example: Build a CRUD REST API for A Simple CRM

Enabling CORS

Since we'll be using two development servers running on two different ports (considered two separate origins), we need to enable CORS (Cross-Origin Resource Sharing) in our Django application.
Start by installing django-cors-headers using pip:

$ pip install django-cors-headers

Next, you need to add it to your project's settings.py file:

INSTALLED_APPS = (
    ## [...]
    'corsheaders'
)

Next, you need to add the corsheaders.middleware.CorsMiddleware middleware to the middleware classes in settings.py:

MIDDLEWARE = (
    'corsheaders.middleware.CorsMiddleware',
    # [...]
)

You can then enable CORS for all domains by adding the following setting:

CORS_ORIGIN_ALLOW_ALL = True

You can find more configuration options in the docs.

The example Angular application we'll be building is the front-end for a CRM RESTful API that will allow you to create accounts, leads, opportunities and contacts. It's a perfect example of a CRUD (Create, Read, Update and Delete) application built as an SPA (Single Page Application). The example application is a work in progress, so we'll be building it through a series of tutorials, and it will be updated to contain advanced features such as RxJS 6 and JWT authentication.

We'll also use Bootstrap 4 and Angular Material for building and styling the UI components. You need either Bootstrap 4 or Angular Material for styling, so depending on your choice you can follow separate tutorials:

Building the UI with Angular Material
Building the UI with Bootstrap 4

Installing the Angular CLI 8

Make sure you have Node.js installed, then run the following command in your terminal to install Angular CLI 8:

$ npm install @angular/cli@next --global

At the time of this writing, @angular/cli v8.0.0-beta.11 is installed. You can check the installed version by running the following command:

$ ng version

Now, you're ready to create a project using Angular CLI 8. Simply run the following command in your terminal:

$ ng new ngsimplecrm

The CLI will automatically generate a bunch of files common to most Angular projects and install the required dependencies for your project. When the CLI prompts you Would you like to add Angular routing? (y/N), type y.
When asked Which stylesheet format would you like to use?, choose CSS and press Enter.

Next, you can serve your application locally using the following commands:

$ cd ./ngsimplecrm
$ ng serve

Your application will be running at http://localhost:4200. This is a screenshot of the home page of the application:

Setting up Angular Material

We'll be using Material Design to style our CRM UI, so we need to add Angular Material to our project. Fortunately, this is only one command away. Open a new terminal and run the following commands:

$ cd ./ngsimplecrm
$ ng add @angular/material

The command will ask you to Choose a prebuilt theme name, or "custom" for a custom theme: (Use arrow keys) Indigo/Pink, Deep Purple/Amber, Pink/Blue Grey, Purple/Green. Choose Deep Purple/Amber or whatever theme you prefer.

For Set up HammerJS for gesture recognition? (Y/n), choose the default answer, which is Yes. For Set up browser animations for Angular Material? (Y/n), also choose Yes.

That's it. Angular Material (v7.3.7 as of this writing) is configured in your application. After that, you need to import the Angular Material components that you want to use in your project. Open the src/app/app.module.ts file and make the following changes:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { MatInputModule, MatButtonModule, MatCardModule, MatFormFieldModule, MatTableModule } from '@angular/material';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    BrowserAnimationsModule,
    MatTableModule,
    MatCardModule,
    MatInputModule,
    MatFormFieldModule,
    MatButtonModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Components in Angular

Now, what's a component?
A component is a TypeScript class with an HTML template and an optional set of CSS styles that control a part of the screen. Components are the most important concept in Angular. An Angular application is basically a tree of components with a root component (the famous AppComponent). The root component is the one contained in the bootstrap array in the main NgModule module in the app.module.ts file.

One important aspect of components is re-usability. A component can be re-used throughout the application and even in other applications. Common and repeatable code that performs a certain task can be encapsulated into a re-usable component that can be called whenever we need the functionality it provides.

Each bootstrapped component is the base of its own tree of components. Inserting a bootstrapped component usually triggers a cascade of component creations that fill out that tree. (source)

Component-Based Architecture

An Angular application is made of several components forming a tree structure with parent and child components. A component is an independent block of a big system (the web application) that communicates with the other building blocks (components) of the system using inputs and outputs. A component has an associated view, data and behavior, and may have parent and child components.

Components allow maximum re-usability, easy testing, maintenance and separation of concerns.

Let's now see this practically. Head over to your Angular project folder and open the src/app folder. You will find the following files:

app.component.css: the CSS file for the component
app.component.html: the HTML view for the component
app.component.spec.ts: the unit tests or spec file for the component
app.component.ts: the component code (data and behavior)
app.module.ts: the application's main module

Except for the last file, which contains the declaration of the application's main (root) module, all these files are used to create a component.
It's the AppComponent: the root component of our application. All other components we are going to create next will be direct or indirect children of the root component.

Demystifying the AppComponent (The Root Component of Angular Applications)

Go ahead and open the src/app/app.component.ts file and let's understand the code behind the main/root component of the application. First, this is the code:

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'app';
}

We first import the Component decorator from @angular/core, then we use it to decorate the TypeScript class AppComponent. The Component decorator takes an object with many parameters, such as:

selector: specifies the tag that can be used to call this component in HTML templates, just like the standard HTML tags
templateUrl: indicates the path of the HTML template that will be used to display this component (you can also use the template parameter to include the template inline as a string)
styleUrls: specifies an array of URLs for CSS style-sheets for the component

The export keyword is used to export the component so that it can be imported from other components and modules in the application.

The title variable is a member variable that holds the string 'app'. There is nothing special about this variable and it's not a part of the canonical definition of an Angular component.

Now let's see the corresponding template for this component. If you open src/app/app.component.html this is what you'll find:

<div style="text-align:center">
  <h1>
    Welcome to {{ title }}!
  </h1>
  <img width="300" alt="Angular Logo" src="data:image/svg+xml;....">
</div>
<h2>Here are some links to help you start: </h2>
<ul>
  <li>
    <h2><a target="_blank" rel="noopener" href="https://angular.io/tutorial">Tour of Heroes</a></h2>
  </li>
  <li>
    <h2><a target="_blank" rel="noopener" href="https://github.com/angular/angular-cli/wiki">CLI Documentation</a></h2>
  </li>
  <li>
    <h2><a target="_blank" rel="noopener" href="https://blog.angular.io/">Angular blog</a></h2>
  </li>
</ul>

The template is a normal HTML file (almost all HTML tags are valid inside Angular templates, except for some tags such as <script>, <html> and <body>) with the exception that it can contain template variables (in this case the title variable) or expressions ({{...}}) that can be used to insert values into the DOM dynamically. This is called interpolation or data binding. You can find more information about templates in the docs.

You can also use other components directly inside Angular templates (via the selector property), just like normal HTML.

If you are familiar with the MVC (Model-View-Controller) pattern, the component class plays the role of the Controller and the HTML template plays the role of the View.

Angular 8 Components by Example

After getting the theory behind Angular components, let's now create the components for our simple CRM application.
Our REST API, built with Django, exposes these endpoints:

/accounts: create or read a paginated list of accounts
/accounts/<id>: read, update or delete an account
/contacts: create or read a paginated list of contacts
/contacts/<id>: read, update or delete a contact
/api/activities: create or read a paginated list of activities
/api/activities/<id>: read, update or delete an activity

Before adding routing to our application, we first need to create the application's components, so based on the exposed REST API architecture we can initially divide our application into these components:

AccountListComponent: displays and controls a tabular list of accounts
AccountCreateComponent: displays and controls a form for creating or updating accounts
ContactListComponent: displays a table of contacts
ContactCreateComponent: displays a form to create or update a contact
ActivityListComponent: displays a table of activities
ActivityCreateComponent: displays a form to create or update an activity

Let's use the Angular CLI to create the components:

$ ng generate component AccountList
$ ng generate component AccountCreate
$ ng generate component ContactList
$ ng generate component ContactCreate
$ ng generate component ActivityList
$ ng generate component ActivityCreate

This is the output of the first command:

CREATE src/app/account-list/account-list.component.css (0 bytes)
CREATE src/app/account-list/account-list.component.html (31 bytes)
CREATE src/app/account-list/account-list.component.spec.ts (664 bytes)
CREATE src/app/account-list/account-list.component.ts (292 bytes)
UPDATE src/app/app.module.ts (418 bytes)

You can see that the command generates all the files that define a component and also updates src/app/app.module.ts.
If you open src/app/app.module.ts after running all the commands, you can see that all the components are automatically imported and added to the AppModule declarations array:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppComponent } from './app.component';
import { AccountListComponent } from './account-list/account-list.component';
import { AccountCreateComponent } from './account-create/account-create.component';
import { ContactListComponent } from './contact-list/contact-list.component';
import { ContactCreateComponent } from './contact-create/contact-create.component';
import { ActivityListComponent } from './activity-list/activity-list.component';
import { ActivityCreateComponent } from './activity-create/activity-create.component';

@NgModule({
  declarations: [
    AppComponent,
    AccountListComponent,
    AccountCreateComponent,
    ContactListComponent,
    ContactCreateComponent,
    ActivityListComponent,
    ActivityCreateComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Note: If you are creating components manually, you need to include them yourself so they can be recognized as part of the module.

Adding Angular Routing

Now, let's add routing and navigation links to our application. This is the initial content of src/app/app-routing.module.ts:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

The routes array will contain all the routes of the application. Now that we have created the components, let's see how to add routes to this array.
First, add the following imports:

import { AccountListComponent } from './account-list/account-list.component';
import { AccountCreateComponent } from './account-create/account-create.component';
import { ContactListComponent } from './contact-list/contact-list.component';
import { ContactCreateComponent } from './contact-create/contact-create.component';
import { ActivityListComponent } from './activity-list/activity-list.component';
import { ActivityCreateComponent } from './activity-create/activity-create.component';

For now, we want to redirect the visitor to the /contacts path when the home URL is visited, so the first path we'll add is:

{ path: '', redirectTo: 'contacts', pathMatch: 'full' },

The pathMatch property specifies the matching strategy. The full value means that we want to fully match the path. Next, let's add the other paths:

{ path: '', redirectTo: 'contacts', pathMatch: 'full' },
{ path: 'accounts', component: AccountListComponent },
{ path: 'create-account', component: AccountCreateComponent },
{ path: 'contacts', component: ContactListComponent },
{ path: 'create-contact', component: ContactCreateComponent },
{ path: 'activities', component: ActivityListComponent },
{ path: 'create-activity', component: ActivityCreateComponent }

Finally, open the src/app/app.component.html file and add the navigation links, then the router outlet:

<a [routerLink]="'/accounts'"> Accounts </a>
<a [routerLink]="'/create-account'"> Create Account </a>
<a [routerLink]="'/contacts'"> Contacts </a>
<a [routerLink]="'/create-contact'"> Create Contact </a>
<a [routerLink]="'/activities'"> Activities </a>
<a [routerLink]="'/create-activity'"> Create Activity </a>
<div>
  <router-outlet></router-outlet>
</div>

An Example for Consuming the REST API Using Angular 8 HttpClient

Now that we've created the different components and added routing and navigation, let's see an example of how to use Angular 8's HttpClient to consume the RESTful API back-end.
First, you need to add the HttpClientModule to the imports array of the main application module:

// [...]
import { HttpClientModule } from '@angular/common/http';

@NgModule({
  declarations: [
    // [...]
  ],
  imports: [
    // [...]
    HttpClientModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Create Angular 8 Services

A service is a global class that can be injected into any component. It's used to encapsulate code that is common to multiple components in one place, instead of repeating it throughout various components.

Now, let's create the services that encapsulate all the code needed for interacting with the REST API. Using Angular CLI 8, run the following commands:

$ ng generate service services/contact
$ ng generate service services/activity
$ ng generate service services/account

Note: Since we have multiple services, we put them in a services folder.

Open src/app/services/contact.service.ts, then import and inject the HttpClient class:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class ContactService {

  constructor(private httpClient: HttpClient) {}
}

Note: You will need to do the same for the other services.

Angular 8 provides a way to register services/providers directly in the @Injectable() decorator by using the new providedIn attribute. This attribute accepts any module of your application, or 'root' for the main app module. Now you don't have to include your service in the providers array of your module.

Getting Contacts/Sending an HTTP GET Request Example

Let's start with the contacts API endpoint. First we'll add a method that consumes this endpoint to our API service, then we'll inject the API service and call the method from the corresponding component class (ContactListComponent), and finally we'll display the result (the list of contacts) in the component template.
Open src/app/services/contact.service.ts and add the following method:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class ContactService {
  API_URL = 'http://localhost:8000';

  constructor(private httpClient: HttpClient) { }

  getFirstPage(){
    return this.httpClient.get(`${this.API_URL}/contacts`);
  }
}

Next, open src/app/contact-list/contact-list.component.ts, inject the ContactService and call the getFirstPage() method:

import { Component, OnInit } from '@angular/core';
import { ContactService } from '../services/contact.service';

@Component({
  selector: 'app-contact-list',
  templateUrl: './contact-list.component.html',
  styleUrls: ['./contact-list.component.css']
})
export class ContactListComponent implements OnInit {

  displayedColumns : string[] = ['id', 'first_name', 'last_name', 'email', 'phone', 'account', 'address', 'description', 'createdBy', 'createdAt', 'isActive', 'actions'];
  dataSource = [];

  constructor(private contactService: ContactService) { }

  ngOnInit() {
    this.fetchContacts();
  }

  fetchContacts(){
    this.contactService.getFirstPage().subscribe((data: Array<object>) => {
      this.dataSource = data;
      console.log(data);
    });
  }
}

Conclusion

Throughout this Angular 8 tutorial for beginners, we've seen, by building a simple real-world CRUD example, how to use different Angular concepts to create a simple full-stack CRUD application with Angular and Django.

Session Management in Nodejs Using Redis as Session Store

We have covered session management in ExpressJS using the global variable technique, which of course will not work with a shared server or with concurrent execution of HTTP requests, the most common production scenario.

Codeforgeek readers requested a solution for this issue, and the optimal one is external session storage that does not depend on application requests. The answer is Redis, because it is a lightweight and easy-to-use NoSQL database.

In this tutorial I am going to explain how to design and code session-oriented Express web applications by using Redis as external session storage.


To get familiar with session handling in ExpressJS, I recommend reading our first article here.

Getting started with Redis:

If you have already installed Redis, please skip to the next section. For those of you who are not familiar with Redis, here is a little introduction.

Redis is a key-value cache and store. It is also referred to as a data structure server, because keys can contain lists, hashes, sets, sorted sets and more.

Redis is fast because it works with an in-memory data set, i.e. by default it stores data in memory rather than on disk. If you come from a CS background, you know very well that CRUD operations in memory are far faster than on disk, and so is Redis.

If you restart Redis or shut it down, you may lose all your data unless you enable the options that dump the data to disk. Be careful!
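For reference, disk persistence is controlled in redis.conf; the stock snapshot ("RDB") settings, which are the standard Redis defaults, look like this:

```
# redis.conf -- snapshot persistence: dump the data set to disk if
# at least 1 key changed within 900 s, 10 keys within 300 s,
# or 10000 keys within 60 s.
save 900 1
save 300 10
save 60 10000
```

With these enabled, Redis periodically writes a dump.rdb file it can reload on restart.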

Installation:

1 : On Mac

On Mac, if you have Homebrew installed, just open up your terminal and type:

brew install redis

Make sure you have the command-line tools installed, because Redis needs GCC to compile.

If you don’t have Homebrew, please install it. It’s awesome!

2 : On Ubuntu

Run the following command on Ubuntu and the rest will be done for you:

sudo apt-get install redis-server

3 : On Windows

Well, Redis does not officially support Windows. Hard luck!

Basic Redis commands:

I am going to mention only the commands needed to follow this tutorial. For detailed information, please visit the nice demo built by the awesome Redis team to make you a pro in Redis.

1 : Starting Redis server.

Run this command on terminal.

redis-server &

2 : Open the Redis CLI tool.

Run this command on terminal.

redis-cli

3 : List all Keys.
Run this command on terminal.

KEYS *

4 : Retrieve information regarding particular key.
Run this command on terminal.

GET <key name>

Once you have installed Redis and run the first command, you should see something like this.

Express session with Redis

To add Redis support, you have to use the Redis client and connect-redis. Require express-session and pass it to the connect-redis object as a parameter; this will initialize the store.

Then, in the session middleware, pass the Redis store information such as the host, port and other required parameters.

Here is sample Express code with Redis support. Have a look.

Express Session using Redis:

var express = require('express');
var redis   = require("redis");
var session = require('express-session');
var redisStore = require('connect-redis')(session);
var bodyParser = require('body-parser');
var client  = redis.createClient();
var app = express();

app.set('views', __dirname + '/views');
app.engine('html', require('ejs').renderFile);

app.use(session({
    secret: 'ssshhhhh',
    // create new redis store.
    store: new redisStore({ host: 'localhost', port: 6379, client: client,ttl :  260}),
    saveUninitialized: false,
    resave: false
}));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: true}));

app.get('/',function(req,res){ 
    // create new session object.
    if(req.session.key) {
        // if email key is sent redirect.
        res.redirect('/admin');
    } else {
        // else go to home page.
        res.render('index.html');
    }
});

app.post('/login',function(req,res){
    // when user login set the key to redis.
    req.session.key=req.body.email;
    res.end('done');
});

app.get('/logout',function(req,res){
    req.session.destroy(function(err){
        if(err){
            console.log(err);
        } else {
            res.redirect('/');
        }
    });
});

app.listen(3000,function(){
    console.log("App Started on PORT 3000");
});

Notice the code where we initialize the session. Have a look.

app.use(session({
    secret: 'ssshhhhh',
    // create new redis store.
    store: new redisStore({ host: 'localhost', port: 6379, client: client}),
    saveUninitialized: false,
    resave: false
}));

If the Redis server is running locally, this is the default configuration. Once you have configured it, store your session key the same way we did in the previous example.
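If you want those values to be configurable per environment, one option (a hypothetical helper, not part of the original code) is to read the host, port and TTL from environment variables and fall back to the defaults used in this post:

```javascript
// Hypothetical helper: builds the connect-redis store options from
// environment variables, falling back to the defaults shown above.
function redisStoreOptions(env) {
    return {
        host: env.REDIS_HOST || 'localhost',          // Redis server host
        port: parseInt(env.REDIS_PORT || '6379', 10), // Redis server port
        ttl: parseInt(env.SESSION_TTL || '260', 10)   // session lifetime in seconds
    };
}

// Usage: store: new redisStore(redisStoreOptions(process.env))
```

This keeps credentials and tuning out of the source file without changing the middleware wiring.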

req.session.key_name = value to set
// this will be set to redis, value may contain User ID, email or any information which you need across your application.

Fetch the information from the Redis session key like this:

req.session.key["keyname"]

Our project:

To demonstrate this, I have developed a web application which will allow you to register, log in and post some status updates. It's simple, but it demonstrates how to handle sessions using external storage.

Create a project folder and copy the following code into package.json.

Our package.json:

{
  "name": "Node-Session-Redis",
  "version": "0.0.1",
  "scripts": {
    "start": "node ./bin"
  },
  "dependencies": {
    "async": "^1.2.1",
    "body-parser": "^1.13.0",
    "connect-redis": "^2.3.0",
    "cookie-parser": "^1.3.5",
    "ejs": "^2.3.1",
    "express": "^4.12.4",
    "express-session": "^1.11.3",
    "mysql": "^2.7.0",
    "redis": "^0.12.1"
  }
}

Install the dependencies by running the following command.

npm install

Our database:

Once that's completed, let’s design a simple database to support our application. Here is the diagram.

The database is simple and straightforward. Please create the database in MySQL and run the following DDL queries.

SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL,ALLOW_INVALID_DATES';

CREATE SCHEMA IF NOT EXISTS `redis_demo` DEFAULT CHARACTER SET latin1 ;
USE `redis_demo` ;

-- -----------------------------------------------------
-- Table `redis_demo`.`user_login`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `redis_demo`.`user_login` (
  `user_id` INT(11) NOT NULL AUTO_INCREMENT COMMENT '',
  `user_email` VARCHAR(50) NOT NULL COMMENT '',
  `user_password` VARCHAR(50) NOT NULL COMMENT '',
  `user_name` VARCHAR(50) NOT NULL COMMENT '',
  PRIMARY KEY (`user_id`)  COMMENT '',
  UNIQUE INDEX `user_email` (`user_email` ASC)  COMMENT '')
ENGINE = InnoDB
AUTO_INCREMENT = 7
DEFAULT CHARACTER SET = latin1;

-- -----------------------------------------------------
-- Table `redis_demo`.`user_status`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `redis_demo`.`user_status` (
  `user_id` INT(11) NOT NULL COMMENT '',
  `user_status` TEXT NOT NULL COMMENT '',
  `created_date` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '',
  INDEX `user_id` (`user_id` ASC)  COMMENT '',
  CONSTRAINT `user_status_ibfk_1`
    FOREIGN KEY (`user_id`)
    REFERENCES `redis_demo`.`user_login` (`user_id`))
ENGINE = InnoDB
DEFAULT CHARACTER SET = latin1;


SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;

Our server code:

The server file contains the application routes, database support and Redis session support. We will first connect to the database and initialize Redis; then, depending on the route, a particular action will happen.

/bin/index.js

/**
  Loading all dependencies.
**/
var express         =     require("express");
var redis           =     require("redis");
var mysql           =     require("mysql");
var session         =     require('express-session');
var redisStore      =     require('connect-redis')(session);
var bodyParser      =     require('body-parser');
var cookieParser    =     require('cookie-parser');
var path            =     require("path");
var async           =     require("async");
var client          =     redis.createClient();
var app             =     express();
var router          =     express.Router();

// Always use MySQL pooling.
// Helpful for multiple connections.

var pool    =   mysql.createPool({
    connectionLimit : 100,
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'redis_demo',
    debug    :  false
});

app.set('views', path.join(__dirname,'../','views'));
app.engine('html', require('ejs').renderFile);

// IMPORTANT
// Here we tell Express to use Redis as session store.
// We pass Redis credentials and port information.
// And express does the rest !

app.use(session({
    secret: 'ssshhhhh',
    store: new redisStore({ host: 'localhost', port: 6379, client: client,ttl :  260}),
    saveUninitialized: false,
    resave: false
}));
app.use(cookieParser("secretSign#143_!223"));
app.use(bodyParser.urlencoded({extended: false}));
app.use(bodyParser.json());

// This is an important function.
// This function does the database handling task.
// We also use async here for control flow.

function handle_database(req,type,callback) {
   async.waterfall([
    function(callback) {
        pool.getConnection(function(err,connection){
          if(err) {
                   // if there is error, stop right away.
                   // This will stop the async code execution and goes to last function.
            callback(true);
          } else {
            callback(null,connection);
          }
        });
    },
    function(connection,callback) {
      // Build the query with ? placeholders and a values array so the
      // mysql driver escapes user input (plain string concatenation here
      // would leave the app open to SQL injection).
      var SQLquery;
      var values = [];
      switch(type) {
        case "login" :
          SQLquery = "SELECT * FROM user_login WHERE user_email = ? AND user_password = ?";
          values = [req.body.user_email, req.body.user_password];
          break;
        case "checkEmail" :
          SQLquery = "SELECT * FROM user_login WHERE user_email = ?";
          values = [req.body.user_email];
          break;
        case "register" :
          SQLquery = "INSERT INTO user_login(user_email,user_password,user_name) VALUES (?,?,?)";
          values = [req.body.user_email, req.body.user_password, req.body.user_name];
          break;
        case "addStatus" :
          SQLquery = "INSERT INTO user_status(user_id,user_status) VALUES (?,?)";
          values = [req.session.key["user_id"], req.body.status];
          break;
        case "getStatus" :
          SQLquery = "SELECT * FROM user_status WHERE user_id = ?";
          values = [req.session.key["user_id"]];
          break;
        default :
          break;
      }
      callback(null,connection,SQLquery,values);
    },
    function(connection,SQLquery,values,callback) {
       connection.query(SQLquery,values,function(err,rows){
           connection.release();
        if(!err) {
            if(type === "login") {
              callback(rows.length === 0 ? false : rows[0]);
            } else if(type === "getStatus") {
                          callback(rows.length === 0 ? false : rows);
                        } else if(type === "checkEmail") {
                          callback(rows.length === 0 ? false : true);
                        } else {
                      callback(false);
            }
        } else {
             // if there is error, stop right away.
            // This will stop the async code execution and goes to last function.
            callback(true);
         }
    });
       }],
       function(result){
      // This function gets call after every async task finished.
      if(typeof(result) === "boolean" && result === true) {
        callback(null);
      } else {
        callback(result);
      }
    });
}
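The final waterfall callback above follows a small convention: a boolean true (meaning an error happened somewhere in the chain) is turned into null for the routes, and any other result is passed through untouched. Extracted as a standalone sketch (hypothetical name, only to make the convention explicit):

```javascript
// Mirrors the result mapping in handle_database's final callback:
// boolean true  -> null   (routes treat null as "database error")
// anything else -> passed through (row object, rows array, false, ...)
function mapResult(result) {
    if (typeof result === "boolean" && result === true) {
        return null;
    }
    return result;
}
```

This is why every route below checks `response === null` for database failures and falsy values for "not found".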

/**
    --- Router Code begins here.
**/

router.get('/',function(req,res){
    res.render('index.html');
});

router.post('/login',function(req,res){
    handle_database(req,"login",function(response){
        if(response === null) {
            res.json({"error" : true, "message" : "Database error occurred"});
        } else {
            if(!response) {
              res.json({
                  "error" : true,
                  "message" : "Login failed! Please register."
              });
            } else {
               req.session.key = response;
               res.json({"error" : false, "message" : "Login success."});
            }
        }
    });
});

router.get('/home',function(req,res){
    if(req.session.key) {
        res.render("home.html",{ email : req.session.key["user_name"]});
    } else {
        res.redirect("/");
    }
});

router.get("/fetchStatus",function(req,res){
  if(req.session.key) {
    handle_database(req,"getStatus",function(response){
      if(!response) {
        res.json({"error" : false, "message" : "There is no status to show."});
      } else {
        res.json({"error" : false, "message" : response});
      }
    });
  } else {
    res.json({"error" : true, "message" : "Please login first."});
  }
});

router.post("/addStatus",function(req,res){
    if(req.session.key) {
      handle_database(req,"addStatus",function(response){
        if(!response) {
          res.json({"error" : false, "message" : "Status is added."});
        } else {
          res.json({"error" : true, "message" : "Error while adding status."});
        }
      });
    } else {
      res.json({"error" : true, "message" : "Please login first."});
    }
});

router.post("/register",function(req,res){
    handle_database(req,"checkEmail",function(response){
      if(response === null) {
        res.json({"error" : true, "message" : "This email is already present"});
      } else {
        handle_database(req,"register",function(response){
          if(response === null) {
            res.json({"error" : true , "message" : "Error while adding user."});
          } else {
            res.json({"error" : false, "message" : "Registered successfully."});
          }
        });
      }
    });
});

router.get('/logout',function(req,res){
    if(req.session.key) {
    req.session.destroy(function(){
      res.redirect('/');
    });
    } else {
        res.redirect('/');
    }
});

app.use('/',router);

app.listen(3000,function(){
    console.log("I am running at 3000");
});

Explanation:

When the user provides login credentials, we check them against our database. On success, we store the database response in our Redis key store. This is where the session starts.

Now, as soon as the user goes to the homepage, we validate the session key and, if it is present, retrieve the user id from it to fire further MySQL queries.

When the user clicks Logout, we call the req.session.destroy() function, which deletes the key from Redis and ends the session.
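Conceptually, the session lifecycle above boils down to setting and deleting a key in a store. The sketch below illustrates that flow with a plain JavaScript Map standing in for Redis; the function names are illustrative only, not the actual express-session API:

```javascript
// Conceptual sketch of the session lifecycle described above, using a
// plain Map in place of Redis. Login stores the user record under a
// session key; logout (req.session.destroy) deletes that key.
const sessionStore = new Map();              // stands in for Redis

function login(sessionId, userRecord) {
  sessionStore.set(sessionId, userRecord);   // session starts here
}

function isLoggedIn(sessionId) {
  return sessionStore.has(sessionId);        // what the /home route checks
}

function logout(sessionId) {
  sessionStore.delete(sessionId);            // what req.session.destroy() does
}

login('sess:abc', { user_name: 'shahid@example.com' });
console.log(isLoggedIn('sess:abc'));  // true
logout('sess:abc');
console.log(isLoggedIn('sess:abc'));  // false
```

The real app delegates all of this to the express-session middleware backed by Redis, which is why the key disappears from Redis after logout.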

Views (index.html and home.html)

Here is our login page code.

/view/index.html

<html>
<head>
<title>Home</title>
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css">
<script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<script>
$(document).ready(function(){
    $("#username").hide();
    $('#login-submit').click(function(e){
      if($(this).attr('value') === 'Register') {
        $.post("http://localhost:3000/register",{
               user_name : $("#username").val(),
               user_email : $("#useremail").val(),
               user_password : $("#password").val()
             },function(data){
        if(data.error) {
            alert(data.message);
        } else {
            $("#username").hide();
            $("#login-submit").prop('value','Log in');
        }
    });
    } else {
        $.post("http://localhost:3000/login",{
                   user_email : $("#useremail").val(),
                   user_password : $("#password").val()
                   },function(data){
            if(!data.error) {
                window.location.href = "/home";
            } else {
                alert(data.message);
            }
        });
    }
    });
    $("#reg").click(function(event){
        $("#username").show('slow');
        $("#login-submit").prop('value','Register');
        event.preventDefault();
    });
});
</script>
    </head>
    <body>
    <nav class="navbar navbar-default navbar-fixed-top">
    <div class="navbar-header">
    <a class="navbar-brand" href="#">
        <p>Redis session demo</p>
    </a>
    </div>
  <div class="container">
    <p class="navbar-text navbar-right">Please sign in</p>
  </div>
</nav>
<div class="form-group" style="margin-top: 100px; width : 400px; margin-left : 50px;">
    <input type="text" id="username" placeholder="Name" class="form-control"><br>
    <input type="text" id="useremail" placeholder="Email" class="form-control"><br>
    <input type="password" id="password" placeholder="Password" class="form-control"><br>
    <input type="button" id="login-submit" value="Log In" class="btn btn-primary">&nbsp;<a href="" id="reg">Sign up here </a>
    </div>
    </body>
</html>

Here is how it looks.

/view/home.html

<html>
<head>
<title>Home</title>
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css">
<script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<script type="text/javascript">
    $(document).ready(function(){
        $.get("http://localhost:3000/fetchStatus",function(res){
            $.each(res.message,function(index,value) {
                $("#showStatus").append('You have posted <br> <p>'+value.user_status+'</p><hr>');
            });
        });
    $("#addNewStatus").click(function(e){
        e.preventDefault();
            if($("#statusbox").val() !== "") {
                $.post("/addStatus",
                                  { status : $("#statusbox").val() },
                                   function(res){
                    if(!res.error) {
                        alert(res.message);
                    }
                })
            }
        });
});
</script>
</head>
<body>
<nav class="navbar navbar-default navbar-fixed-top">
<div class="navbar-header">
<a class="navbar-brand" href="#">
<p>Redis session demo</p>
</a></div><div class="container">
<p class="navbar-text navbar-right">Hi, you are logged in as <b><%= email %></b> (<a href="/logout/">Logout</a>)</p>
</div>
</nav>
<div style="margin-top:100px;margin-left:50px;width:400px">
<textarea rows="10" cols="5" id="statusbox" class="form-control"></textarea><br>
<input type="submit" id="addNewStatus" value="Post" class="btn btn-primary"><br>
<div id="showStatus" style="border : 2px grey; border-radius : 4px;">
</div>
</div>
</body>
</html>

Here is how it looks.

How to run:

Download the code from GitHub and extract the files. Make sure you have installed Redis and created the database.

If your database name or MySQL password is different, please update it in the bin/index.js file.

Once done, type npm start in the terminal to start the project.

Now open your browser and go to localhost:3000 to view the app. Register with your email id and password, then log in with the same. After logging in, inspect Redis using the commands mentioned above.

Now log out from the system and check for the same key. It should no longer be there.

That's it. Express sessions with Redis are working.

Conclusion:

Redis is one of the most popular key-value database systems. Using it to store sessions is not only beneficial in a production environment, it also helps improve the performance of the system.

Manual for dbt2-0.37.50.15, fully automated Sysbench and DBT2 benchmarking with NDB

The link dbt2.0.37.50 manual provides the details of how to use the dbt2-0.37.50 scripts
to execute benchmarks using MySQL Cluster.

These scripts can be used to execute automated test runs of Sysbench, DBT2 and
FlexAsynch. I also use it to start up NDB Clusters to run DBT3 benchmarks and
YCSB benchmarks.

This set of scripts originates from 2006, when I wanted to automate all my benchmark
efforts. The most challenging benchmarks involve starting more than 100 programs
to work together across more than 100 machines. This requires automation to
be successful.

Now running any benchmark is a one-liner, e.g.
./bench_run.sh --default-directory /path/to/dir --init

The preparation to run this benchmark is to place a file called autobench.conf in
/path/to/dir. This contains the configuration of the NDB data nodes, NDB MGM
servers, MySQL Servers and the benchmark programs. Multiple benchmark
programs are supported for Sysbench, DBT2 and flexAsynch.

Database High Availability for Camunda BPM using MySQL or MariaDB Galera Cluster

Camunda BPM is an open-source workflow and decision automation platform. Camunda BPM ships with tools for creating workflow and decision models, operating deployed models in production, and allowing users to execute workflow tasks assigned to them.

By default, Camunda comes with an embedded database called H2, which works pretty decently within a Java environment with relatively small memory footprint. However, when it comes to scaling and high availability, there are other database backends that might be more appropriate.

In this blog post, we are going to deploy Camunda BPM 7.10 Community Edition on Linux, with a focus on achieving database high availability. Camunda supports major databases through JDBC drivers, namely Oracle, DB2, MySQL, MariaDB and PostgreSQL. This blog only focuses on MySQL and MariaDB Galera Cluster, with different implementation on each - one with ProxySQL as database load balancer, and the other using the JDBC driver to connect to multiple database instances. Take note that this article does not cover on high availability for the Camunda application itself.

Prerequisite

Camunda BPM runs on Java. In our CentOS 7 box, we have to install JDK and the best option is to use the one from Oracle, and skip using the OpenJDK packages provided in the repository. On the application server where Camunda should run, download the latest Java SE Development Kit (JDK) from Oracle by sending the acceptance cookie:

$ wget --header "Cookie: oraclelicense=accept-securebackup-cookie" https://download.oracle.com/otn-pub/java/jdk/12+33/312335d836a34c7c8bba9d963e26dc23/jdk-12_linux-x64_bin.rpm

Install it on the host:

$ yum localinstall jdk-12_linux-x64_bin.rpm

Verify with:

$ java --version
java 12 2019-03-19
Java(TM) SE Runtime Environment (build 12+33)
Java HotSpot(TM) 64-Bit Server VM (build 12+33, mixed mode, sharing)

Create a new directory and download Camunda Community for Apache Tomcat from the official download page:

$ mkdir ~/camunda
$ cd ~/camunda
$ wget --content-disposition 'https://camunda.org/release/camunda-bpm/tomcat/7.10/camunda-bpm-tomcat-7.10.0.tar.gz'

Extract it:

$ tar -xzf camunda-bpm-tomcat-7.10.0.tar.gz

There are a number of dependencies we have to configure before starting up Camunda web application. This depends on the chosen database platform like datastore configuration, database connector and CLASSPATH environment. The next sections explain the required steps for MySQL Galera (using Percona XtraDB Cluster) and MariaDB Galera Cluster.

Note that the configurations shown in this blog are based on Apache Tomcat environment. If you are using JBOSS or Wildfly, the datastore configuration will be a bit different. Refer to Camunda documentation for details.

MySQL Galera Cluster (with ProxySQL and Keepalived)

We will use ClusterControl to deploy MySQL-based Galera cluster with Percona XtraDB Cluster. There are some Galera-related limitations mentioned in the Camunda docs surrounding Galera multi-writer conflicts handling and InnoDB isolation level. In case you are affected by these, the safest way is to use the single-writer approach, which is achievable with ProxySQL hostgroup configuration. To provide no single-point of failure, we will deploy two ProxySQL instances and tie them with a virtual IP address by Keepalived.

The following diagram illustrates our final architecture:

First, deploy a three-node Percona XtraDB Cluster 5.7. Install ClusterControl, generate an SSH key and set up passwordless SSH from the ClusterControl host to all nodes (including ProxySQL). On the ClusterControl node, do:

$ whoami
root
$ ssh-keygen -t rsa
$ for i in 192.168.0.21 192.168.0.22 192.168.0.23 192.168.0.11 192.168.0.12; do ssh-copy-id $i; done

Before we deploy our cluster, we have to modify the MySQL configuration template file that ClusterControl will use when installing MySQL servers. The template file name is my57.cnf.galera and located under /usr/share/cmon/templates/ on the ClusterControl host. Make sure the following lines exist under [mysqld] section:

[mysqld]
...
transaction-isolation=READ-COMMITTED
wsrep_sync_wait=7
...

Save the file and we are good to go. The above are the requirements stated in the Camunda docs, especially regarding the supported transaction isolation for Galera. The wsrep_sync_wait variable is set to 7 to perform cluster-wide causality checks for READ (including SELECT, SHOW, and BEGIN or START TRANSACTION), UPDATE, DELETE, INSERT, and REPLACE statements, ensuring that each statement is executed on a fully synced node. Keep in mind that any value other than 0 can result in increased latency.

Go to ClusterControl -> Deploy -> MySQL Galera and specify the following details (if not mentioned, use the default value):

  • SSH User: root
  • SSH Key Path: /root/.ssh/id_rsa
  • Cluster Name: Percona XtraDB Cluster 5.7
  • Vendor: Percona
  • Version: 5.7
  • Admin/Root Password: {specify a password}
  • Add Node: 192.168.0.21 (press Enter), 192.168.0.22 (press Enter), 192.168.0.23 (press Enter)

Make sure you got all the green ticks, indicating ClusterControl is able to connect to the node passwordlessly. Click "Deploy" to start the deployment.

Create the database, MySQL user and password on one of the database nodes:

mysql> CREATE DATABASE camunda;
mysql> CREATE USER camunda@'%' IDENTIFIED BY 'passw0rd';
mysql> GRANT ALL PRIVILEGES ON camunda.* TO camunda@'%';

Or from the ClusterControl interface, you can use Manage -> Schema and Users instead:

Once cluster is deployed, install ProxySQL by going to ClusterControl -> Manage -> Load Balancer -> ProxySQL -> Deploy ProxySQL and enter the following details:

  • Server Address: 192.168.0.11
  • Administration Password:
  • Monitor Password:
  • DB User: camunda
  • DB Password: passw0rd
  • Are you using implicit transactions?: Yes

Repeat the ProxySQL deployment step for the second ProxySQL instance, by changing the Server Address value to 192.168.0.12. The virtual IP address provided by Keepalived requires at least two ProxySQL instances deployed and running. Finally, deploy virtual IP address by going to ClusterControl -> Manage -> Load Balancer -> Keepalived and pick both ProxySQL nodes and specify the virtual IP address and network interface for the VIP to listen:

Our database backend is now complete. Next, import the SQL files into the Galera Cluster as the created MySQL user. On the application server, go to the "sql" directory and import them into one of the Galera nodes (we pick 192.168.0.21):

$ cd ~/camunda/sql/create
$ yum install mysql # install the mysql client
$ mysql -ucamunda -p -h192.168.0.21 camunda < mysql_engine_7.10.0.sql
$ mysql -ucamunda -p -h192.168.0.21 camunda < mysql_identity_7.10.0.sql

Camunda does not provide MySQL connector for Java since its default database is H2. On the application server, download MySQL Connector/J from MySQL download page and copy the JAR file into Apache Tomcat bin directory:

$ wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.15.tar.gz
$ tar -xzf mysql-connector-java-8.0.15.tar.gz
$ cd mysql-connector-java-8.0.15
$ cp mysql-connector-java-8.0.15.jar ~/camunda/server/apache-tomcat-9.0.12/bin/

Then, set the CLASSPATH environment variable to include the database connector. Open setenv.sh using text editor:

$ vim ~/camunda/server/apache-tomcat-9.0.12/bin/setenv.sh

And add the following line:

export CLASSPATH=$CLASSPATH:$CATALINA_HOME/bin/mysql-connector-java-8.0.15.jar

Open ~/camunda/server/apache-tomcat-9.0.12/conf/server.xml and change the lines related to datastore. Specify the virtual IP address as the MySQL host in the connection string, with ProxySQL port 6033:

<Resource name="jdbc/ProcessEngine" ... driverClassName="com.mysql.jdbc.Driver" defaultTransactionIsolation="READ_COMMITTED" url="jdbc:mysql://192.168.0.10:6033/camunda" username="camunda" password="passw0rd" ... />

Finally, we can start the Camunda service by executing start-camunda.sh script:

$ cd ~/camunda
$ ./start-camunda.sh
starting camunda BPM platform on Tomcat Application Server
Using CATALINA_BASE:   ./server/apache-tomcat-9.0.12
Using CATALINA_HOME:   ./server/apache-tomcat-9.0.12
Using CATALINA_TMPDIR: ./server/apache-tomcat-9.0.12/temp
Using JRE_HOME:        /
Using CLASSPATH:       :./server/apache-tomcat-9.0.12/bin/mysql-connector-java-8.0.15.jar:./server/apache-tomcat-9.0.12/bin/bootstrap.jar:./server/apache-tomcat-9.0.12/bin/tomcat-juli.jar
Tomcat started.

Make sure the CLASSPATH shown in the output includes the path to the MySQL Connector/J JAR file. After the initialization completes, you can then access Camunda webapps on port 8080 at http://192.168.0.8:8080/camunda/. The default username is demo with password 'demo':

You can then see the digested capture queries from Nodes -> ProxySQL -> Top Queries, indicating the application is interacting correctly with the Galera Cluster:

There is no read-write splitting configured for ProxySQL. Camunda uses "SET autocommit=0" on every SQL statement to initialize a transaction, and the best way for ProxySQL to handle this is to send all queries to the same backend servers of the target hostgroup. This is the safest method and also offers better availability. However, all connections might end up reaching a single server, so there is no load balancing.

MariaDB Galera

MariaDB Connector/J is able to handle a variety of connection modes - failover, sequential, replication and aurora - but Camunda only supports failover and sequential. Taken from MariaDB Connector/J documentation:

Mode: sequential (available since 1.3.0)
Description: This mode supports connection failover in a multi-master environment, such as MariaDB Galera Cluster. It does not support load-balancing reads on slaves. The connector tries to connect to hosts in the order in which they were declared in the connection URL, so the first available host is used for all queries. For example, if the connection URL is jdbc:mariadb:sequential:host1,host2,host3/testdb, the connector always tries host1 first. If that host is not available, it tries host2, etc. When a host fails, the connector tries to reconnect to hosts in the same order.

Mode: failover (available since 1.2.0)
Description: This mode supports connection failover in a multi-master environment, such as MariaDB Galera Cluster. It does not support load-balancing reads on slaves. The connector performs load-balancing for all queries by randomly picking a host from the connection URL for each connection, so queries are load-balanced as a result of the connections being randomly distributed across all hosts.

Using "failover" mode poses a higher potential risk of deadlock, since writes will be distributed to all backend servers almost equally. Single-writer approach is a safe way to run, which means using sequential mode should do the job pretty well. You also can skip the load-balancer tier in the architecture. Hence with MariaDB Java connector, we can deploy our architecture as simple as below:

Before we deploy our cluster, modify the MariaDB configuration template file that ClusterControl will use when installing MariaDB servers. The template file is named my.cnf.galera and is located under /usr/share/cmon/templates/ on the ClusterControl host. Make sure the following lines exist under the [mysqld] section:

[mysqld]
...
transaction-isolation=READ-COMMITTED
wsrep_sync_wait=7
performance_schema = ON
...

Save the file and we are good to go. A bit of explanation: the above covers the requirements stated in the Camunda docs, especially regarding the supported transaction isolation for Galera. The wsrep_sync_wait variable is set to 7 to perform cluster-wide causality checks for READ (including SELECT, SHOW, and BEGIN or START TRANSACTION), UPDATE, DELETE, INSERT, and REPLACE statements, ensuring that each statement is executed on a fully synced node. Keep in mind that any value other than 0 can result in increased latency. Enabling Performance Schema is optional, for ClusterControl's query monitoring feature.

Now we can start the cluster deployment process. Install ClusterControl, generate an SSH key and set up passwordless SSH from the ClusterControl host to all Galera nodes. On the ClusterControl node, do:

$ whoami
root
$ ssh-keygen -t rsa
$ for i in 192.168.0.41 192.168.0.42 192.168.0.43; do ssh-copy-id $i; done

Go to ClusterControl -> Deploy -> MySQL Galera and specify the following details (if not mentioned, use the default value):

  • SSH User: root
  • SSH Key Path: /root/.ssh/id_rsa
  • Cluster Name: MariaDB Galera 10.3
  • Vendor: MariaDB
  • Version: 10.3
  • Admin/Root Password: {specify a password}
  • Add Node: 192.168.0.41 (press Enter), 192.168.0.42 (press Enter), 192.168.0.43 (press Enter)

Make sure you got all the green ticks when adding nodes, indicating ClusterControl is able to connect to the node passwordlessly. Click "Deploy" to start the deployment.

Create the database, MariaDB user and password on one of the Galera nodes:

mysql> CREATE DATABASE camunda;
mysql> CREATE USER camunda@'%' IDENTIFIED BY 'passw0rd';
mysql> GRANT ALL PRIVILEGES ON camunda.* TO camunda@'%';

For ClusterControl user, you can use ClusterControl -> Manage -> Schema and Users instead:

Our database cluster deployment is now complete. Next, import the SQL files into the MariaDB cluster. On the application server, go to the "sql" directory and import them into one of the MariaDB nodes (we chose 192.168.0.41):

$ cd ~/camunda/sql/create
$ yum install mysql # install the mariadb client
$ mysql -ucamunda -p -h192.168.0.41 camunda < mariadb_engine_7.10.0.sql
$ mysql -ucamunda -p -h192.168.0.41 camunda < mariadb_identity_7.10.0.sql

Camunda does not provide MariaDB connector for Java since its default database is H2. On the application server, download MariaDB Connector/J from MariaDB download page and copy the JAR file into Apache Tomcat bin directory:

$ wget https://downloads.mariadb.com/Connectors/java/connector-java-2.4.1/mariadb-java-client-2.4.1.jar
$ cp mariadb-java-client-2.4.1.jar ~/camunda/server/apache-tomcat-9.0.12/bin/

Then, set the CLASSPATH environment variable to include the database connector. Open setenv.sh via text editor:

$ vim ~/camunda/server/apache-tomcat-9.0.12/bin/setenv.sh

And add the following line:

export CLASSPATH=$CLASSPATH:$CATALINA_HOME/bin/mariadb-java-client-2.4.1.jar

Open ~/camunda/server/apache-tomcat-9.0.12/conf/server.xml and change the lines related to datastore. Use the sequential connection protocol and list out all the Galera nodes separated by comma in the connection string:

<Resource name="jdbc/ProcessEngine" ... driverClassName="org.mariadb.jdbc.Driver" defaultTransactionIsolation="READ_COMMITTED" url="jdbc:mariadb:sequential://192.168.0.41:3306,192.168.0.42:3306,192.168.0.43:3306/camunda" username="camunda" password="passw0rd" ... />

Finally, we can start the Camunda service by executing start-camunda.sh script:

$ cd ~/camunda
$ ./start-camunda.sh
starting camunda BPM platform on Tomcat Application Server
Using CATALINA_BASE:   ./server/apache-tomcat-9.0.12
Using CATALINA_HOME:   ./server/apache-tomcat-9.0.12
Using CATALINA_TMPDIR: ./server/apache-tomcat-9.0.12/temp
Using JRE_HOME:        /
Using CLASSPATH:       :./server/apache-tomcat-9.0.12/bin/mariadb-java-client-2.4.1.jar:./server/apache-tomcat-9.0.12/bin/bootstrap.jar:./server/apache-tomcat-9.0.12/bin/tomcat-juli.jar
Tomcat started.

Make sure the CLASSPATH shown in the output includes the path to the MariaDB Java client JAR file. After the initialization completes, you can then access Camunda webapps on port 8080 at http://192.168.0.8:8080/camunda/. The default username is demo with password 'demo':

You can see the digested capture queries from ClusterControl -> Query Monitor -> Top Queries, indicating the application is interacting correctly with the MariaDB Cluster:

With MariaDB Connector/J, we do not need a load balancer tier, which simplifies our overall architecture. The sequential connection mode should do the trick to avoid the multi-writer deadlocks that can happen in Galera. This setup provides high availability, with each Camunda instance configured with JDBC to access the cluster of MySQL or MariaDB nodes. Galera takes care of synchronizing the data between the database instances in real time.

Tags:  MySQL MariaDB galera camunda high availability

Node.js MySQL Tutorial

Node.js and MySQL is one of the essential bindings needed for many web applications. MySQL is one of the most popular open source databases in the world, and an efficient one as well. Almost every popular programming language, like Java or PHP, provides a driver to access and perform operations with MySQL.

In this tutorial I cover both code for learning and code for production. If you know the basics already and are looking for ready-made production code, click here to jump there directly.

Introduction:

Node.js is rich with a large number of popular packages registered in the package registry called npm. Many of them are not reliable enough for production use, but there are some we can rely upon. For MySQL there is one popular driver called node-mysql.

In this tutorial I am going to cover the following points related to Node.js and MySQL.

  • Sample code to get started.
  • Code for Production.
  • Testing concurrent users.
Sample code to get started.

Project directory:

---node_modules
-----+ mysql
-----+ express
---index.js
---package.json

package.json

{
  "name": "node-mysql",
  "version": "0.0.1",
  "dependencies": {
    "express": "^4.10.6",
    "mysql": "^2.5.4"
  }
}

Install dependencies using

npm install

Here is sample code which connects to the database and performs an SQL query.

var mysql      = require('mysql');
var connection = mysql.createConnection({
  host     : 'localhost',
  user     : '< MySQL username >',
  password : '< MySQL password >',
  database : '<your database name>'
});

connection.connect();

connection.query('SELECT * from < table name >', function(err, rows, fields) {
  if (!err)
    console.log('The solution is: ', rows);
  else
    console.log('Error while performing Query.');
});

connection.end();

Make sure you have started MySQL on the default port and changed the parameters in the above code, then run it using

node file_name.js

Code for production:

The above code is just for learning purposes, not for a production payload. The production scenario is different: there may be thousands of concurrent users, which turns into tons of MySQL queries. The above code won't hold up for concurrent users, and here is a proof. Let's modify our code a little bit and add Express routes to it; here it is.

test.js (change the database settings in the code)

var express    = require("express");
var mysql      = require('mysql');
var connection = mysql.createConnection({
  host     : 'localhost',
  user     : 'root',
  password : '',
  database : 'address_book'
});
var app = express();

connection.connect(function(err){
if(!err) {
    console.log("Database is connected ... \n\n");
} else {
    console.log("Error connecting database ... \n\n");
}
});

app.get("/",function(req,res){
connection.query('SELECT * from user LIMIT 2', function(err, rows, fields) {
connection.end();
  if (!err)
    console.log('The solution is: ', rows);
  else
    console.log('Error while performing Query.');
  });
});

app.listen(3000);

Install siege in your system. I use this command to install it in Ubuntu.

apt-get install siege

Then run our Node server and fire the following command.

node test.js

siege -c10 -t1M http://localhost:3000

Assuming you are running the Node server on port 3000.
Here is the output.

In the above code, we allow only a standalone connection, i.e. one connection at a time, but reality is a bit different. You may get 100 or 1000 connections at one particular time, and if your server is not powerful enough to serve all those requests, it should at least put them in a queue.

Connection pooling in MySQL:

Connection pooling is a mechanism to maintain a cache of database connections so that a connection can be reused after it is released. In node-mysql, we can use pooling directly to handle multiple connections and reuse them. Let's write the same code with pooling and check whether it can handle multiple connections or not.
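To see why pooling helps, here is a minimal conceptual sketch of what a pool does internally: it hands out up to a fixed limit of connections and queues callers when all of them are busy. The names here are illustrative only, not node-mysql's actual internals:

```javascript
// Minimal illustration of connection pooling: a fixed set of "connections"
// is reused, and requests wait in a queue when all connections are busy.
// Conceptual sketch only, not node-mysql's real implementation.
function createSimplePool(limit) {
  const idle = [];              // released connections, ready for reuse
  const waiting = [];           // callbacks queued while the pool is busy
  let created = 0;

  function getConnection(callback) {
    if (idle.length > 0) {
      callback(idle.pop());     // reuse an existing connection
    } else if (created < limit) {
      created++;
      callback({ id: created }); // "open" a new connection
    } else {
      waiting.push(callback);   // pool exhausted: queue the request
    }
  }

  function release(conn) {
    if (waiting.length > 0) {
      waiting.shift()(conn);    // hand the connection to the next waiter
    } else {
      idle.push(conn);
    }
  }

  return { getConnection: getConnection, release: release };
}

// Usage: with a limit of 1, the second request queues until the first releases.
const pool = createSimplePool(1);
const order = [];
pool.getConnection(function (c1) {
  order.push('first got ' + c1.id);
  pool.getConnection(function (c2) {
    order.push('second got ' + c2.id); // runs only after the release below
  });
  pool.release(c1);
});
console.log(order.join('; ')); // first got 1; second got 1
```

This queuing behavior is exactly what mysql.createPool gives us below, with connectionLimit playing the role of the limit.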

test.js

var express   =    require("express");
var mysql     =    require('mysql');
var app       =    express();

var pool      =    mysql.createPool({
    connectionLimit : 100, //important
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'address_book',
    debug    :  false
});

function handle_database(req,res) {
   
    pool.getConnection(function(err,connection){
        if (err) {
          res.json({"code" : 100, "status" : "Error in connection database"});
          return;
        }  

        console.log('connected as id ' + connection.threadId);
       
        connection.query("select * from user",function(err,rows){
            connection.release();
            if(!err) {
                res.json(rows);
            }          
        });

        connection.on('error', function(err) {      
              res.json({"code" : 100, "status" : "Error in connection database"});
              return;    
        });
  });
}

app.get("/",function(req,res){
        handle_database(req,res);
});

app.listen(3000);

Run the app using

node test.js

and fire 10 concurrent users for 1 minute with siege using this command.

siege -c10 -t1M http://localhost:3000

Here is the output.

**UPDATE**

You can directly use pool.query(), which internally acquires a connection and releases it once the query has executed. In my personal code review experience, the majority of developers forget to release the acquired connection, which in turn creates a bottleneck and database load.

Refer the code snippet below:

test.js

var express   =    require("express");
var mysql     =    require('mysql');
var app       =    express();

var pool      =    mysql.createPool({
    connectionLimit : 100, //important
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'address_book',
    debug    :  false
});

function handle_database(req,res) {
       // connection will be acquired automatically
       pool.query("select * from user",function(err,rows){
        if(err) {
            return res.json({'error': true, 'message': 'Error occurred'+err});
        }
                //connection will be released as well.
                res.json(rows);
       });
}

app.get("/",function(req,res){
        handle_database(req,res);
});

app.listen(3000);

I have used this function in a production environment with a heavy payload, and it works like a charm.

Executing Queries

Let’s learn how to execute queries using Node.js.

Inserting Rows into Table

Here is the code to add new rows in the table.

const mysql = require('mysql');

const pool = mysql.createPool({
    connectionLimit : 100, //important
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'todolist',
    debug    :  false
});

// add rows in the table

function addRow(data) {
    let insertQuery = 'INSERT INTO ?? (??,??) VALUES (?,?)';
    let query = mysql.format(insertQuery,["todo","user","notes",data.user,data.value]);
    pool.query(query,(err, response) => {
        if(err) {
            console.error(err);
            return;
        }
        // rows added
        console.log(response.insertId);
    });
}

// timeout just to avoid firing query before connection happens

setTimeout(() => {
    // call the function
    addRow({
        "user": "Shahid",
        "value": "Just adding a note"
    });
},5000);

The mysql.format function escapes both the identifiers (??) and the values (?) in the query.
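To see what that escaping produces, here is a toy re-implementation of the substitution (illustration only; toyFormat is an invented stand-in, and the real library handles many more types and edge cases):

```javascript
// Toy sketch of mysql.format's substitution rules:
// ?? becomes a backtick-quoted identifier, ? becomes a quoted, escaped value.
function toyFormat(sql, params) {
  let i = 0;
  return sql.replace(/\?\?|\?/g, (m) => {
    const p = params[i++];
    if (m === "??") return "`" + String(p).replace(/`/g, "``") + "`";
    return "'" + String(p).replace(/'/g, "\\'") + "'";
  });
}

const sql = toyFormat("INSERT INTO ?? (??,??) VALUES (?,?)",
                      ["todo", "user", "notes", "Shahid", "it's a note"]);
console.log(sql);
// INSERT INTO `todo` (`user`,`notes`) VALUES ('Shahid','it\'s a note')
```

Note how the single quote inside the value is escaped, which is what protects the query from injection.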

Querying data in Table

Here is the code to query rows in the table.

const mysql = require('mysql');

const pool = mysql.createPool({
    connectionLimit : 100, //important
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'todolist',
    debug    :  false
});

// query rows in the table

function queryRow(userName) {
    let selectQuery = 'SELECT * FROM ?? WHERE ?? = ?';    
    let query = mysql.format(selectQuery,["todo","user", userName]);
    // query = SELECT * FROM `todo` where `user` = 'shahid'
    pool.query(query,(err, data) => {
        if(err) {
            console.error(err);
            return;
        }
        // rows fetch
        console.log(data);
    });
}

// timeout just to avoid firing query before connection happens

setTimeout(() => {
    // call the function
    // select rows
    queryRow('shahid');
},5000);

If you would like to add multiple rows in a single query, you can pass a nested array as the value, like this:

    let insertQuery = 'INSERT INTO ?? (??,??) VALUES ?';
    let values = [["shahid","hello"],["Rohit","Hi"]]; // each inner array is one row
    let query = mysql.format(insertQuery,["todo","user","notes",values]);

Updating data in Table
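The nested array expands into one parenthesized group per row. Here is a hand-built sketch of the SQL that results (illustration only, with simplified quoting; real code should keep letting mysql.format do the escaping):

```javascript
// Building a multi-row VALUES clause by hand to show the shape of the output.
const values = [["shahid", "hello"], ["Rohit", "Hi"]];
const groups = values
  .map(row =>
    "(" + row.map(v => "'" + String(v).replace(/'/g, "\\'") + "'").join(",") + ")")
  .join(",");
const sql = "INSERT INTO `todo` (`user`,`notes`) VALUES " + groups;
console.log(sql);
// INSERT INTO `todo` (`user`,`notes`) VALUES ('shahid','hello'),('Rohit','Hi')
```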

Here is the code to update data in the table.

const mysql = require('mysql');

const pool = mysql.createPool({
    connectionLimit : 100, //important
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'todolist',
    debug    :  false
});

// update rows

function updateRow(data) {
    let updateQuery = "UPDATE ?? SET ?? = ? WHERE ?? = ?";
    let query = mysql.format(updateQuery,["todo","notes",data.value,"user",data.user]);
    // query = UPDATE `todo` SET `notes`='Hello' WHERE `user`='shahid'
    pool.query(query,(err, response) => {
        if(err) {
            console.error(err);
            return;
        }
        // rows updated
        console.log(response.affectedRows);
    });
}

// timeout just to avoid firing query before connection happens

setTimeout(() => {
    // call the function
    // update row
    updateRow({
        "user": "Shahid",
        "value": "Just updating a note"
    });
},5000);

Deleting Rows in the table

Here is the code to delete a row from the table.

const mysql = require('mysql');

const pool = mysql.createPool({
    connectionLimit : 100, //important
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'todolist',
    debug    :  false
});

function deleteRow(userName) {
    let deleteQuery = "DELETE from ?? where ?? = ?";
    let query = mysql.format(deleteQuery, ["todo", "user", userName]);
    // query = DELETE from `todo` where `user`='shahid';
    pool.query(query,(err, response) => {
        if(err) {
            console.error(err);
            return;
        }
        // rows deleted
        console.log(response.affectedRows);
    });
}

// timeout just to avoid firing query before connection happens

setTimeout(() => {
    // call the function
    // delete row
    deleteRow('shahid');
},5000);

Calling MySQL Stored Procedure Using Node.js

You can also call a stored procedure from Node.js. If you don’t have a stored procedure created in MySQL yet, you can use the code below to create one.

DELIMITER $$
 
CREATE PROCEDURE `getAllTodo`()
BEGIN
    SELECT * FROM todo;
END$$
 
DELIMITER ;

Here is the code to call the stored procedure from the code.

const mysql = require('mysql');

const pool = mysql.createPool({
    connectionLimit : 100, //important
    host     : 'localhost',
    user     : 'root',
    password : '',
    database : 'todolist',
    debug    :  false
});

function callSP(spName) {
    let spQuery = 'CALL ??';
    let query = mysql.format(spQuery,[spName]);
    // CALL `getAllTodo`
    pool.query(query,(err, result) => {
        if(err) {
            console.error(err);
            return;
        }
        // rows from SP
        console.log(result);
    });
}

// timeout just to avoid firing query before connection happens

setTimeout(() => {
    // call the function
    // call sp
    callSP('getAllTodo')
},5000);

Final comments:

Siege is a really powerful tool for testing a server under pressure. We set a limit of 100 connections in the code, so you might be wondering whether the code will break after 100 concurrent connections. Let me answer that with code: fire 1000 concurrent users for 1 minute and see how our code reacts.

siege -c1000 -t1M http://localhost:3000

If your MySQL server is configured to handle that much traffic on one socket, the app will keep running and manage the scheduling of concurrent connections. It will serve 100 connections at a time while the remaining 900 wait in a queue, so the code will not break.
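That queueing behavior is controlled by pool options. A sketch of the relevant settings, shown as a plain configuration object so it runs anywhere (option names as documented by the node mysql driver; verify against the version you use):

```javascript
// Pool options that decide what happens past connectionLimit:
// with waitForConnections true (the default), extra requests wait in a FIFO
// queue; queueLimit 0 (the default) means that queue is unbounded.
const poolConfig = {
  connectionLimit: 100,     // at most 100 open connections at once
  waitForConnections: true, // queue further requests instead of erroring
  queueLimit: 0             // 0 = no cap on queued requests
};

console.log(poolConfig);
```

Setting queueLimit to a positive number makes the pool fail fast instead of letting a backlog grow without bound.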

Conclusion:

MySQL is one of the most widely used database engines in the world, and it works very well with Node. Node-MySQL pooling and event-based debugging are powerful and easy to code.

Further reading:

Fun with Bugs #82 - On MySQL Bug Reports I am Subscribed to, Part XVIII

I've got a few comments on my post on references to MariaDB in MySQL bug reports (not on the blog, but via social media and in personal messages), and all but one of the comments, from current and former colleagues whose opinion I value a lot, confirmed that this really looks like a kind of attempt to advertise MariaDB. So, from now on I'll try to keep my findings on how tests shared by MySQL bug reporters work in MariaDB to myself, MariaDB JIRA and this blog (where I can and will advertise whatever makes sense to me), and avoid adding them to MySQL bug reports.

That said, I still think that it's normal to share links to MariaDB bug reports that add something useful (like patches, explanations or better test cases), and I keep insisting that this kind of feedback should not be hidden. Yes, I want to mention Bug #94610 (and related MDEV-15641) again, as a clear example of censorship that is not reasonable and should not be tolerated.

In the meantime, since my previous post in this series I've subscribed to 30 or so new MySQL bug reports. Some of them are listed below, started from the oldest. This time I am not going to exclude "inactive" reports that were not accepted by Oracle MySQL engineers as valid:
  • Bug #94629 - "no variable can skip a single channel error in mysql replication". This is a request to add support for per-channel options to skip N transactions or specific errors. It was not accepted ("Not a Bug") just because one can stop replication on all channels and start it on one to skip transaction(s) there, then resume replication for all channels. Do you really think this is the right and only way to process such a report?
  • Bug #94647 - "Memory leak in MEMORY table by glibc". This is also not a bug because one can use something like malloc-lib=jemalloc with mysqld_safe or Environment="LD_PRELOAD=/path/to/jemalloc" with systemd services. There might be some cost related to that in older versions... Note that the similar MDEV-14050 is still open.
  • Bug #94655 - "Some GIS function do not use spatial index anymore". Yet another regression vs MySQL 5.7, reported by Cedric Tabin. It ended up verified as a feature request without a regression tag...
  • Bug #94664 - "Binlog related deadlock leads to all incoming connection choked.". This report from Yanmin Qiao ended up as a duplicate of  Bug #92108 - "Deadlock by concurrent show binlogs, pfs session_variables table & binlog purge" (fixed in MySQL 5.7.25+, thanks Sveta Smirnova for the hint). See also Bug #91941.
  • Bug #94665 - "enabling undo-tablespace encryption doesn't mark tablespace encryption flag". Nice finding by Krunal Bauskar from Percona.
  • Bug #94699 - "Mysql deadlock and bugcheck on aarch64 under stress test". Bug report with a patch contributed by Cai Yibo. The fix is included in upcoming MySQL 8.0.17 and the bug is already closed.
  • Bug #94709 - "Regression behavior for full text index". This regression was reported by Carlos Tutte and properly verified (with regression tag added and all versions checked) by Umesh Shastry. See also detailed analysis of possible reason in the comment from Nikolai Ikhalainen.
  • Bug #94723 - "Incorrect simple query result with func result and FROM table column in where". Michal Vrabel found this interesting case where MySQL 8.0.15 returns wrong results. I've checked the test case on MariaDB 10.3.7 and it is not affected. Feel free to consider this check and statement my lame attempt to advertise MariaDB. I don't mind.
  • Bug #94730 - "Kill slave may cause start slave to report an error.". This bug was declared a duplicate of a nice Bug #93397 - "Replication does not start if restart MySQL after init without start slave." reported by Jean-François Gagné earlier. Both bugs were reported for MySQL 5.7.x, but I do not see any public attempt to verify whether MySQL 5.6 or 8.0 is also affected. In the past it was required to check/verify a bug on all supported GA versions if the test case applied. Nowadays this approach is all too often not followed, even when the bug reporter cared enough to provide an MTR test case.
  • Bug #94737 - "MySQL uses composite hash index when not possible and returns wrong result". Yet another optimizer bug, reported by Simon Banaan. Again, MariaDB 10.3.7 is NOT affected. I can freely and happily state this here if it's inappropriate to state so in the bug report itself. By the way, other MySQL versions were probably not checked. Also, unlike the Oracle engineer who verified the bug, I do not hesitate to copy/paste the entire results of my testing here:
    MariaDB [test]> show create table tmp_projectdays_4\G
    *************************** 1. row ***************************
           Table: tmp_projectdays_4
    Create Table: CREATE TABLE `tmp_projectdays_4` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `project` int(11) NOT NULL,
      `datum` date NOT NULL,
      `voorkomen` tinyint(1) NOT NULL DEFAULT 1,
      `tijden` tinyint(1) NOT NULL DEFAULT 0,
      `personeel` tinyint(1) NOT NULL DEFAULT 0,
      `transport` tinyint(1) NOT NULL DEFAULT 0,
      `materiaal` tinyint(1) NOT NULL DEFAULT 0,
      `materiaaluit` tinyint(1) NOT NULL DEFAULT 0,
      `materiaalin` tinyint(1) NOT NULL DEFAULT 0,
      `voertuigen` varchar(1024) DEFAULT '',
      `medewerkers` varchar(1024) DEFAULT '',
      `personeel_nodig` int(11) DEFAULT 0,
      `personeel_gepland` int(11) DEFAULT 0,
      `voertuigen_nodig` int(11) DEFAULT 0,
      `voertuigen_gepland` int(11) DEFAULT 0,
      `created` datetime DEFAULT NULL,
      `modified` datetime DEFAULT NULL,
      `creator` int(11) DEFAULT NULL,
      PRIMARY KEY (`id`),
      KEY `project` (`project`,`datum`) USING HASH
    ) ENGINE=MEMORY AUTO_INCREMENT=2545 DEFAULT CHARSET=utf8mb4
    1 row in set (0.001 sec)

    MariaDB [test]> explain SELECT COUNT(1) FROM `tmp_projectdays_4` WHERE `project`
     IN(15409,15911,15929,15936,16004,16005,16007,16029,16031,16052,16054,16040,1248
    5,15892,16035,16060,16066,16093,16057,16027,15988,15440,15996,11457,15232,15704,
    12512,12508,14896,15594,16039,14997,16058,14436,16006,15761,15536,16016,16019,11
    237,13332,16037,14015,15537,15369,15756,12038,14327,13673,11393,14377,15983,1251
    4,12511,13585,12732,14139,14141,12503,15727,15531,15746,15773,15207,13675,15676,
    15663,10412,13677,15528,15530,10032,15535,15693,15532,15533,15534,15529,16056,16
    064,16070,15994,15918,16045,16073,16074,16077,16069,16022,16081,15862,16048,1606
    2,15610,15421,16001,15896,15004,15881,15882,15883,15884,15886,16065,15814,16076,
    16085,16174,15463,15873,15874,15880,15636,16092,15909,16078,15923,16026,16047,16
    094,16111,15914,15919,16041,16063,16068,15971,16080,15961,16038,16096,16127,1564
    1,13295,16146,15762,15811,15937,16150,16152,14438,16086,16156,15593,16147,15910,
    16106,16107,16161,16132,16095,16137,16072,16097,16110,16114,16162,16166,16175,16
    176,16178,15473,16160,15958,16036,16042,16115,16165,16167,16170,16177,16185,1582
    3,16190,16169,15989,16194,16116,16131,16157,16192,16197,16203,16193,16050,16180,
    16209,15522,16148,16205,16201,15990,16158,16216,16033,15974,16112,16133,16181,16
    188,16189,16212,16238,16241,16183,15640,15638,16087,16088,16129,16186,16164,1610
    8,15985,16244,15991,15763,16049,15999,16104,16208,13976,16122,15924,16046,16242,
    16151,16117,16187);

    +------+-------------+-------------------+------+---------------+------+--------
    -+------+------+-------------+
    | id   | select_type | table             | type | possible_keys | key  | key_len
     | ref  | rows | Extra       |
    +------+-------------+-------------------+------+---------------+------+--------
    -+------+------+-------------+
    |    1 | SIMPLE      | tmp_projectdays_4 | ALL  | project       | NULL | NULL
     | NULL | 2544 | Using where |
    +------+-------------+-------------------+------+---------------+------+--------
    -+------+------+-------------+
    1 row in set (0.004 sec)

    MariaDB [test]> SELECT COUNT(1) FROM `tmp_projectdays_4` WHERE `project` IN(1540
    9,15911,15929,15936,16004,16005,16007,16029,16031,16052,16054,16040,12485,15892,
    16035,16060,16066,16093,16057,16027,15988,15440,15996,11457,15232,15704,12512,12
    508,14896,15594,16039,14997,16058,14436,16006,15761,15536,16016,16019,11237,1333
    2,16037,14015,15537,15369,15756,12038,14327,13673,11393,14377,15983,12514,12511,
    13585,12732,14139,14141,12503,15727,15531,15746,15773,15207,13675,15676,15663,10
    412,13677,15528,15530,10032,15535,15693,15532,15533,15534,15529,16056,16064,1607
    0,15994,15918,16045,16073,16074,16077,16069,16022,16081,15862,16048,16062,15610,
    15421,16001,15896,15004,15881,15882,15883,15884,15886,16065,15814,16076,16085,16
    174,15463,15873,15874,15880,15636,16092,15909,16078,15923,16026,16047,16094,1611
    1,15914,15919,16041,16063,16068,15971,16080,15961,16038,16096,16127,15641,13295,
    16146,15762,15811,15937,16150,16152,14438,16086,16156,15593,16147,15910,16106,16
    107,16161,16132,16095,16137,16072,16097,16110,16114,16162,16166,16175,16176,1617
    8,15473,16160,15958,16036,16042,16115,16165,16167,16170,16177,16185,15823,16190,
    16169,15989,16194,16116,16131,16157,16192,16197,16203,16193,16050,16180,16209,15
    522,16148,16205,16201,15990,16158,16216,16033,15974,16112,16133,16181,16188,1618
    9,16212,16238,16241,16183,15640,15638,16087,16088,16129,16186,16164,16108,15985,
    16244,15991,15763,16049,15999,16104,16208,13976,16122,15924,16046,16242,16151,16
    117,16187);

    +----------+
    | COUNT(1) |
    +----------+
    |     2544 |
    +----------+
    1 row in set (0.025 sec)

    MariaDB [test]> select version();
    +--------------------+
    | version()          |
    +--------------------+
    | 10.3.7-MariaDB-log |
    +--------------------+
    1 row in set (0.021 sec)

    When the job was done properly I see no reason NOT to share the results.
  • Bug #94747 - "4GB Limit on large_pages shared memory set-up". My former colleague Nikolai Ikhalainen from Percona noted this nice undocumented "feature". (Had I forgotten to advertise Percona recently? Sorry about that...) He proved with a C program that one can create shared memory segments on Linux larger than 4GB; one just has to use the proper data type, unsigned long integer, in MySQL's code. Still, this report ended up as a non-critical bug in the "MySQL Server: Documentation" category, or maybe even a feature request internally. What a shame!
    Spring in Paris is nice, as this photo taken 3 years ago proves. The way MySQL bug reports are handled this spring is not as nice in some cases.
    To summarize:
    1. It seems that recently the fact that some limited workaround has already been published somewhere is a good enough reason NOT to accept a valid feature request. Noted.
    2. Regression bugs (reports about a drop in performance, or about a problem that did not happen with an older version but happens with a recent one) are still sometimes not marked with the regression tag. Moreover, clear performance regressions in MySQL 8.0.x vs MySQL 5.7.x may end up as just feature requests... A request to "Make MySQL Great Again" maybe?
    3. MySQL engineers who verify bugs often do not care to check all major versions and/or share the results of their tests. This is unfortunate.
    4. Some bugs are not classified properly upon verification. The fact that a wrong data type is used is anything but a severity 3 documentation problem, really.

    Terraform on OCI – Provisioning MySQL for InnoDB Cluster Setups

    In my prior blog post on Terraform, I demonstrated building the dependent infrastructure that MySQL implementations need.  Building MySQL isn’t much different, but does identify a need for a Webserver to provide configuration files for Terraform to execute on as was done in my prior MySQL on OCI post, and a Yum Repo Webserver to… Read More »

    MySQL Replication for High Availability - New Whitepaper

    We’re happy to announce that our newly updated whitepaper MySQL Replication for High Availability is now available to download for free!

    MySQL Replication enables data from one MySQL database server to be copied automatically to one or more MySQL database servers.

    Unfortunately database downtime is often caused by sub-optimal HA setups, manual/prolonged failover times, and manual failover of applications. This technology is common knowledge for DBAs worldwide, but maintaining those high availability setups can sometimes be a challenge.

    In this whitepaper, we discuss the latest features in MySQL 5.6, 5.7 & 8.0 as well as show you how to deploy and manage a replication setup. We also show how ClusterControl gives you all the tools you need to ensure your database infrastructure performs at peak proficiency.

    Topics included in this whitepaper are …

    • What is MySQL Replication?
      • Replication Scheme
        • Asynchronous Replication
        • Semi-Synchronous Replication
      • Global Transaction Identifier (GTID)
        • Replication in MySQL 5.5 and Earlier
        • How GTID Solves the Problem
        • MariaDB GTID vs MySQL GTID
      • Multi-Threaded Slave
      • Crash-Safe Slave
      • Group Commit
    • Topology for MySQL Replication
      • Master with Slaves (Single Replication)
      • Master with Relay Slaves (Chain Replication)
      • Master with Active Master (Circular Replication)
      • Master with Backup Master (Multiple Replication)
      • Multiple Masters to Single Slave (Multi-Source Replication)
      • Galera with Replication Slave (Hybrid Replication)
    • Deploying a MySQL Replication Setup
      • General and SSH Settings
      • Define the MySQL Servers
      • Define Topology
      • Scaling Out
    • Connecting Application to the Replication Setup
      • Application Connector
      • Fabric-Aware Connector
      • Reverse Proxy/Load Balancer
        • MariaDB MaxScale
        • ProxySQL
        • HAProxy (Master-Slave Replication)
    • Failover with ClusterControl
      • Automatic Failover of Master
        • Whitelists and Blacklists
      • Manual Failover of Master
      • Failure of a Slave
      • Pre and Post-Failover Scripts
        • When Hooks Can Be Useful?
          • Service Discovery
          • Proxy Reconfiguration
          • Additional Logging
    • Operations - Managing Your MySQL Replication Setup
      • Show Replication Status
      • Start/Stop Replication
      • Promote Slave
      • Rebuild Replication Slave
      • Backup
      • Restore
      • Software Upgrade
      • Configuration Changes
      • Schema Changes
      • Topology Changes
    • Issues and Troubleshooting
      • Replication Status
      • Replication Lag
      • Data Drifting
      • Errant Transaction
      • Corrupted Slave
      • Recommendations

    Download the whitepaper today!

    ClusterControl Single Console for Your Entire Database Infrastructure Find out what else is new in ClusterControl Install ClusterControl for FREE About ClusterControl

    ClusterControl is the all-inclusive open source database management system for users with mixed environments that removes the need for multiple management tools. ClusterControl provides advanced deployment, management, monitoring, and scaling functionality to get your MySQL, MongoDB, and PostgreSQL databases up-and-running using proven methodologies that you can depend on to work. At the core of ClusterControl is its automation functionality that lets you automate many of the database tasks you have to perform regularly, like deploying new databases, adding and scaling new nodes, running backups and upgrades, and more.

    To learn more about ClusterControl click here.

    About Severalnines

    Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability.

    Severalnines' products are used by developers and administrators of all skill levels to provide the full 'deploy, manage, monitor, scale' database cycle, thus freeing them from the complexity and learning curves that are typically associated with highly available database clusters. Severalnines is often called the “anti-startup” as it is entirely self-funded by its founders. The company has enabled over 32,000 deployments to date via its popular product ClusterControl. Current customers include BT, Orange, Cisco, CNRS, Technicolor, AVG, Ping Identity and Paytrail. Severalnines is a private company headquartered in Stockholm, Sweden with offices in Singapore, Japan and the United States. To see who is using Severalnines today, visit https://www.severalnines.com/company.

    Tags:  clustercontrol MySQL mysql replication

    Validating a Login Form With React

    For almost every form that you create, you will want some sort of validation. In React, working with and validating forms can be a bit verbose, so in this article we are going to use a package called Formik to help us out!

    TLDR
    • Create a React project
    • Add the Formik (and Yup) packages
    • Customize the Formik component with an onSubmit callback and a validate function for error messages
    • Display those error messages to the user

    > View the final code on CodeSandbox!

    Here's a sneak peek at what we are going to create.

    https://codesandbox.io/s/4203r4582w

    Creating the React Project

    For this demo, I'll be using CodeSandbox. You can use CodeSandbox as well or use your local environment. Totally up to you.

    Regardless of what you use for this demo, you need to start with a new React app using Create React App. In CodeSandbox, I'm going to choose to do just that.

    Installing Necessary Packages

    Now that we have our initial project created, we need to install three packages.

    • Formik - makes handling validation, error messages, and form submission easier
    • Email-validator - tiny package to validate emails (I hope this one is self-explanatory : )
    • Yup - schema validator that is commonly used in conjunction with Formik
    Formik

    In your terminal, you'll need to install Formik.

    npm install formik

    I'll do the same in the CodeSandbox dependency GUI.

    Email-Validator

    Now install email-validator.

    npm install email-validator

    Again installing from the CodeSandbox GUI.

    Yup

    npm install yup

    And again in CodeSandbox.

    Creating the Validated Form Component

    Now, we can start to stub out our ValidatedFormComponent. For now, we just want to create the basics and import it into the root file in the app to see it get displayed.

    • Create new functional component
    • Add dummy display content
    • Import in index.js

    So, create a new file in your src directory called ValidatedLoginForm.js. Inside of that file, add the basic code for a functional component.

    import React from "react";

    const ValidatedLoginForm = () => (
      <div>
        <h1>Validated Form Component</h1>
      </div>
    );

    export default ValidatedLoginForm;

    Then, include it in your index.js file.

    function App() {
      return (
        <div className="App">
          <ValidatedLoginForm />
        </div>
      );
    }

    and you should see it displayed.

    Now, let's start with the Formik stuff. First, import Formik, email-validator, and Yup in your new component.

    import { Formik } from "formik";
    import * as EmailValidator from "email-validator";
    import * as Yup from "yup";

    Now, let's stub out the Formik tag with initial values. Think of initial values as setting your state initially.

    You'll also need an onSubmit callback. This callback takes two parameters: values and an object that we can destructure. The values represent the input values from your form. I'm adding some dummy code here to simulate an async login call, then logging out what the values are.

    In the callback, I'm also calling the setSubmitting function that was destructured from the second parameter. This will allow us to enable/disable the submit button while the asynchronous login call is happening.

    <Formik
      initialValues={{ email: "", password: "" }}
      onSubmit={(values, { setSubmitting }) => {
        setTimeout(() => {
          console.log("Logging in", values);
          setSubmitting(false);
        }, 500);
      }}
    >
      <h1>Validated Login Form</h1>
    </Formik>

    Render Props

    The Formik component uses render props to supply certain variables and functions to the form that we create. If you're not very familiar with render props, I would take a second to check out Render Props Explained.

    In short, render props are used to pass properties to children elements of a component. In this case, Formik will pass properties to our form code, which is the child. Notice that I'm using destructuring to get a reference to several specific variables and functions.
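The idea can be boiled down to plain JavaScript. In this sketch, formikLike is an invented stand-in for Formik: the parent owns the state and calls the function we hand it, and that function decides what to render:

```javascript
// Minimal render-prop sketch: the parent owns the data, the caller-supplied
// function decides how to turn that data into output.
function formikLike(renderProp) {
  const props = { values: { email: "" }, isSubmitting: false };
  return renderProp(props); // Formik invokes its child function the same way
}

const markup = formikLike(({ values, isSubmitting }) =>
  `<form>email=${values.email} submitting=${isSubmitting}</form>`);
console.log(markup); // <form>email= submitting=false</form>
```

The destructuring in the arrow function mirrors what we do with props in the real form below.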

    {props => {
      const {
        values,
        touched,
        errors,
        isSubmitting,
        handleChange,
        handleBlur,
        handleSubmit
      } = props;
      return (
        <div>
          <h1>Validated Login Form</h1>
        </div>
      );
    }}

    Display the Form

    Now, we can actually start to write the code to display the form. For what it's worth, in the finished CodeSandbox, I also created a LoginForm.js component to show how basic login forms are handled from scratch. You can also use that as a reference for the form we are going to add now.

    The form is pretty simple with two inputs (email and password), labels for each, and a submit button.

    {props => {
      const {
        values,
        touched,
        errors,
        isSubmitting,
        handleChange,
        handleBlur,
        handleSubmit
      } = props;
      return (
        <form onSubmit={handleSubmit}>
          <label htmlFor="email">Email</label>
          <input
            name="email"
            type="text"
            placeholder="Enter your email"
          />
          <label htmlFor="password">Password</label>
          <input
            name="password"
            type="password"
            placeholder="Enter your password"
          />
          <button type="submit">Login</button>
        </form>
      );
    }}

    Notice that the onSubmit is calling the handleSubmit from the props.

    I mentioned earlier that we could disable our submit button while the user is already attempting to log in. We can add that small change now by using the isSubmitting property that we destructured from props above.

    <button type="submit" disabled={isSubmitting}> Login </button>

    I would recommend adding the CSS from the finished CodeSandbox as well. Otherwise you won't get the full effect. You can copy the below css into your styles.css file.

    .App {
      font-family: sans-serif;
    }

    h1 {
      text-align: center;
    }

    form {
      max-width: 500px;
      width: 100%;
      margin: 0 auto;
    }

    label,
    input {
      display: block;
      width: 100%;
    }

    label {
      margin-bottom: 5px;
      height: 22px;
    }

    input {
      margin-bottom: 20px;
      padding: 10px;
      border-radius: 3px;
      border: 1px solid #777;
    }

    input.error {
      border-color: red;
    }

    .input-feedback {
      color: rgb(235, 54, 54);
      margin-top: -15px;
      font-size: 14px;
      margin-bottom: 20px;
    }

    button {
      padding: 10px 15px;
      background-color: rgb(70, 153, 179);
      color: white;
      border: 1px solid rgb(70, 153, 179);
      transition: background-color 250ms;
    }

    button:hover {
      cursor: pointer;
      background-color: white;
      color: rgb(70, 153, 179);
    }

    Adding Validation Messages Logic

    Now we need to figure out how to validate our inputs. The first question is, what constraints do we want to have on our input. Let's start with email. Email input should...

    • Be required
    • Look like a real email

    Password input should...

    • Be required
    • Be at least 8 characters long
    • contain at least one number
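These password rules can be expressed as a small plain function before we wire them into any form library. A sketch (validatePassword is an invented helper for illustration; the Formik and Yup versions follow below):

```javascript
// The password constraints above, checked in order; returns the first failing
// message, or null when the password is valid.
function validatePassword(pw) {
  if (!pw) return "Required";
  if (pw.length < 8) return "Password must be 8 characters long.";
  if (!/(?=.*[0-9])/.test(pw)) return "Password must contain a number.";
  return null; // all rules passed
}

console.log(validatePassword("abc"));      // Password must be 8 characters long.
console.log(validatePassword("abcdefg1")); // null
```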

    We'll cover two ways to create these messages, one using Yup and one doing it yourself. We recommend using Yup and you'll see why shortly.

    Doing it Yourself

    The first option is creating our own validate function. The purpose of the function is to iterate through the values of our form, validate them in whatever way we see fit, and return an errors object whose key/value pairs map each field name to a message.

    Inside of the Formik tag, you can add the following code. This will always add an "Invalid email" error for email. We will start with this and go from there.

    validate={values => {
      let errors = {};
      errors.email = "Invalid email";
      return errors;
    }}

    Now, we can ensure that the user has input something for the email.

    validate={values => {
      let errors = {};
      if (!values.email) {
        errors.email = "Required";
      }
      return errors;
    }}

    Then, we can check that the email actually looks like a valid email by using the email-validator package. This check slots in as an else-if after the required check.

    validate={values => {
      let errors = {};
      if (!values.email) {
        errors.email = "Required";
      } else if (!EmailValidator.validate(values.email)) {
        errors.email = "Invalid email address";
      }
      return errors;
    }}

    That takes care of email, so now for password. We can first check that the user input something.

    validate={values => {
      let errors = {};
      if (!values.password) {
        errors.password = "Required";
      }
      return errors;
    }}

    Now we need to check the length to be at least 8 characters.

    validate={values => {
      let errors = {};
      if (!values.password) {
        errors.password = "Required";
      } else if (values.password.length < 8) {
        errors.password = "Password must be 8 characters long.";
      }
      return errors;
    }}

    And lastly, that the password contains at least one number. For this, we can use regex.

    validate={values => {
      let errors = {};
      const passwordRegex = /(?=.*[0-9])/;
      if (!values.password) {
        errors.password = "Required";
      } else if (values.password.length < 8) {
        errors.password = "Password must be 8 characters long.";
      } else if (!passwordRegex.test(values.password)) {
        errors.password = "Invalid password. Must contain one number.";
      }
      return errors;
    }}

    Here's the whole thing.

    validate={values => {
      let errors = {};
      if (!values.email) {
        errors.email = "Required";
      } else if (!EmailValidator.validate(values.email)) {
        errors.email = "Invalid email address";
      }
      const passwordRegex = /(?=.*[0-9])/;
      if (!values.password) {
        errors.password = "Required";
      } else if (values.password.length < 8) {
        errors.password = "Password must be 8 characters long.";
      } else if (!passwordRegex.test(values.password)) {
        errors.password = "Invalid password. Must contain one number.";
      }
      return errors;
    }}

    Using Yup (Recommended)

    Ok, you might have noticed that handling the validate logic on our own gets a bit verbose. We have to do all of the checks manually. It wasn't that bad, I guess, but with the Yup package it gets much easier!

    Yup is the recommended way to handle validation messages.

    Yup makes input validation a breeze!

    When using Yup, we no longer use the validate prop; instead we use validationSchema. Let's start with email. Here is the equivalent validation using Yup.

    validationSchema={Yup.object().shape({
      email: Yup.string()
        .email()
        .required("Required")
    })}

    Much shorter right?! Now, for password.

    validationSchema={Yup.object().shape({
      email: Yup.string()
        .email()
        .required("Required"),
      password: Yup.string()
        .required("No password provided.")
        .min(8, "Password is too short - should be 8 chars minimum.")
        .matches(/(?=.*[0-9])/, "Password must contain a number.")
    })}

    Pretty SWEET!

    Displaying Validation/Error Messages

    Now that we have the logic for creating error messages, we need to display them. We will need to update the inputs in our form a bit.

    We need to update several properties for both email and password inputs.

    • value
    • onChange
    • onBlur
    • className
    Email

    Let's start by updating value, onChange, and onBlur. Each of these will use properties from the render props.

    <input
      name="email"
      type="text"
      placeholder="Enter your email"
      value={values.email}
      onChange={handleChange}
      onBlur={handleBlur}
    />

    Then we can add a conditional "error" class if there are any errors. We can check for errors by looking at the errors object (remember how we calculated that object ourselves way back when).

    We can also check the touched property, to see whether or not the user has interacted with the email input before showing an error message.

    <input
      name="email"
      type="text"
      placeholder="Enter your email"
      value={values.email}
      onChange={handleChange}
      onBlur={handleBlur}
      className={errors.email && touched.email && "error"}
    />
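    The errors.email && touched.email check repeats for every field, so it can help to read it as one small predicate. shouldShowError is not a Formik API, just a name for the inline check:

    ```javascript
    // Show a field's error only after the user has interacted with it.
    // Not a Formik API -- just the inline JSX check, given a name.
    const shouldShowError = (errors, touched, field) =>
      Boolean(errors[field] && touched[field]);
    ```

    With it, the class could be written as className={shouldShowError(errors, touched, "email") ? "error" : undefined}.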

    And lastly, if there are errors, we will display them to the user. All in all, email will look like this.

    <label htmlFor="email">Email</label>
    <input
      name="email"
      type="text"
      placeholder="Enter your email"
      value={values.email}
      onChange={handleChange}
      onBlur={handleBlur}
      className={errors.email && touched.email && "error"}
    />
    {errors.email && touched.email && (
      <div className="input-feedback">{errors.email}</div>
    )}

    Password

    Now we need to do the same with password. I won't walk through each step because they are exactly the same as for email. Here's the final code.

    <label htmlFor="password">Password</label>
    <input
      name="password"
      type="password"
      placeholder="Enter your password"
      value={values.password}
      onChange={handleChange}
      onBlur={handleBlur}
      className={errors.password && touched.password && "error"}
    />
    {errors.password && touched.password && (
      <div className="input-feedback">{errors.password}</div>
    )}

    Test it Out

    Let's try it out! You can start by clicking the button without entering anything. You should see validation messages.

    Now, we can get more specific when testing messages. Refresh your page to do this. Click inside of the email input, but don't type anything.

    Then, click away from the input. You should see the "Required" message pop up. Notice that this message doesn't pop up automatically when the page loads. We only want to display error messages after the user has interacted with the input.

    Now, start to type. You should get a message about not being a valid email.

    And lastly, type in a valid looking email, and your error message goes away.

    Now, same for password. Click on the input, then away, and you'll get the required message.

    Then, start typing and you'll see the length validation.

    Then, type 8 or more characters that do not include a number, and you'll see the "must contain a number" message.

    And lastly, add a number, and error messages go away.

    Conclusion

    Whew, that was a long one! Again, validation can be a tricky thing, but with the help of a few packages, it becomes a bit easier. At the end of the day though, I think we've got a pretty legit login form!

    MySQL Workbench Spatial Viewer or How to See your GEOMETRY Data

    The past couple of blog entries have been on Geographic Information Systems and Geometric Data.  Visualizing that data with MySQL Workbench makes it easier for me to see what the results really mean.
    Workbench 8.0.15 will draw the polygon with the Spatial View Option
    So how do you get there?

    Start Workbench, create a new SQL tab in your favorite scratch schema, and create the table below.

    CREATE TABLE `test` (
      `id` INT NOT NULL AUTO_INCREMENT,
      `geom` GEOMETRY NULL,
      PRIMARY KEY (`id`));

    Next add some data.

    INSERT INTO `test`
      (`geom`)
    VALUES
      (st_geomfromtext
      ('polygon((0 0,0 3,3 0, 2 2,0 0),(1 1,1 2,2 1,2 2, 1 1))')
       );

    Then run the query.

    select geom from test;

    However, the result will default to the Result Grid. Look at the right-hand side of the results window for a series of stacked icons; the Result Grid is the default. The query returns a 'BLOB', which is not exactly visually stunning.

    The 'BLOB' is the result of the query.

    Click on the Form Editor icon, directly below the Result Grid icon, and you should see the image from the top of this blog entry.

    Bonus!
    Now scroll down below the Form Editor icon and select Spatial View.

    Spatial View of the Query
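    As a side note, if you ever want the raw text form of the stored value rather than a rendered view, MySQL's ST_AsText function converts a GEOMETRY back to its WKT representation:

    ```sql
    -- Convert the stored GEOMETRY back to WKT text
    SELECT id, ST_AsText(geom) FROM test;
    ```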



    Running MySQL / Percona Server in Kubernetes with a Custom Config

    As we continue the development of our Percona Operators to simplify database deployment in Kubernetes (Percona Server for MongoDB Operator 0.3.0 and Percona XtraDB Cluster Operator 0.3.0), one very popular question I get is: how does deployment in Kubernetes affect MySQL performance? Is there a big performance penalty? So I plan to look at how to measure and compare the performance of Kubernetes deployments to bare metal deployments. Kubernetes manages a lot of infrastructure resources like network, storage, CPU, and memory, so we need to look at each component individually.

    To begin: I plan to run a single MySQL (Percona Server) instance in a Kubernetes deployment, using local storage (a fast NVMe device). I also want to customize my MySQL configuration, as the ones supplied in public images are pretty much all set to defaults.

    Let’s take a look at how we can customize it.

    1. We are going to use the public Percona Server docker image “percona:ps-8.0.15-5”, which deploys the latest version (at the time of writing) of Percona Server for MySQL, 8.0.15
    2. We will deploy this on a specific node and will assign specific local storage to use for MySQL data
    3. We’ll set up a custom configuration for MySQL.
    Setting up Kubernetes

    Here’s an example yaml file:

    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      ports:
      - port: 3306
      selector:
        app: mysql
      clusterIP: None
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: mysql
        spec:
          nodeSelector:
            kubernetes.io/hostname: smblade01
          volumes:
          - name: mysql-persistent-storage
            hostPath:
              path: /mnt/fast/mysql
              type: Directory
          containers:
          - image: percona:ps-8.0.15-5
            name: mysql
            env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
            ports:
            - containerPort: 3306
              name: mysql
            volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql

    There is a lot of typical Kubernetes boilerplate to create a deployment, but the most important parts to note:

    • We choose the node where to deploy with nodeSelector.
    • We allocate the local storage for the MySQL volume with hostPath.

    After deploying this, we make sure the Pod is running:

    $ kubectl get pods
    NAME                    READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
    mysql-d74d5d459-b6zrs   1/1     Running   0          3m8s   192.168.1.15   smblade01   <none>           <none>

    Set up MySQL to access fast storage and modify the default config for performance

    Now as we are running MySQL on a dedicated node with fast storage, we want to customize the MySQL configuration to allocate a big buffer pool and adjust its IO settings.

    As I said, a downloaded image will most likely run with default settings, and there is no straightforward way to pass our custom my.cnf to the deployment. I'll show you how to resolve this now.

    The default my.cnf contains the directive

    !includedir /etc/my.cnf.d

    So the solution for the custom my.cnf is as follows:

    • Create a Kubernetes configmap from our custom my.cnf. Here’s how to achieve that:

    kubectl create configmap mysql-config --from-file=my.cnf

    • Define yaml to load the configmap into a volume mapped to /etc/my.cnf.d (note the config-volume entries under volumes and volumeMounts).

    apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
    kind: Deployment
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: mysql
        spec:
          nodeSelector:
            kubernetes.io/hostname: smblade01
          volumes:
          - name: mysql-persistent-storage
            hostPath:
              path: /mnt/fast/mysql
              type: Directory
          - name: config-volume
            configMap:
              name: mysql-config
              optional: true
          containers:
          - image: percona:ps-8
            name: mysql
            env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
            ports:
            - containerPort: 3306
              name: mysql
            volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: config-volume
              mountPath: /etc/my.cnf.d

    And here’s our example my.cnf:

    [mysqld]
    skip-log-bin
    ssl=0
    table_open_cache=200000
    table_open_cache_instances=64
    back_log=3500
    max_connections=4000
    innodb_file_per_table=1
    innodb_log_file_size=10G
    innodb_log_files_in_group=2
    innodb_log_buffer_size=64M
    innodb_open_files=4000
    innodb_buffer_pool_size=100G
    innodb_buffer_pool_instances=8
    innodb_flush_log_at_trx_commit=1
    innodb_doublewrite=1
    innodb_flush_method=O_DIRECT
    innodb_io_capacity=2000
    innodb_io_capacity_max=4000
    innodb_flush_neighbors=0
    innodb_use_native_aio=1
    join_buffer_size=256K
    sort_buffer_size=256K

    When we deploy this yaml, we will have a MySQL instance running on a dedicated box with fast local storage, big log files, and 100GB allocated for its InnoDB buffer pool.

    Now we’re set to proceed to our performance measurements. Stay tuned!


    Photo by Joseph Barrientos on Unsplash

    New Webinar: Multi-Region AWS Aurora vs Continuent Tungsten for MySQL & MariaDB

    We’re pleased to announce our latest live webinar “Multi-Region AWS Aurora vs Continuent Tungsten for MySQL & MariaDB”, which will take place on Thursday, April 18th, 2019.

    Our colleagues Matt Lang and Eric M. Stone will walk you through a comparison of building a global, multi-region MySQL / MariaDB / Percona cloud back-end using AWS Aurora versus Continuent Tungsten.

    If you’d like to find out how multi-region AWS Aurora deployments can be improved, then this webinar is for you!

    See the registration and agenda details below – we look forward to “seeing” you there!

     

    Date & Time:

    Thursday, April 18th at 10am PST / 1pm EST / 4pm BST / 5pm CEST

     

    Registration: follow this link to register

     

    Agenda:

    We will explore how to deploy a geo-scale MySQL / MariaDB / Percona cloud back-end with the following design criteria:

    • Geographically distributed, low-latency data with a single consolidated view
    • Fast local response times for read traffic
    • Local rapid-failover, automated high availability
    • Ability to deploy masters in multiple regions
    • No changes to application code
    • Complex schema changes while keeping applications available
    • Avoiding provider lock in

     

    Speakers:

    Matthew Lang

    Director of Professional Services – Americas at Continuent, has over 25 years of experience in database administration, database programming, and system architecture, including the creation of a database replication product that is still in use today. He has designed highly available, scalable systems that have allowed startups to quickly become enterprise organizations, utilizing a variety of technologies including open source projects, virtualization and cloud.

     

    Eric M. Stone

    COO at Continuent, is a veteran of fast-paced, large-scale enterprise environments with 35 years of Information Technology experience. With a focus on HA/DR, from building data centers and trading floors to world-wide deployments, Eric has architected, coded, deployed and administered systems for a wide variety of disparate customers, from Fortune 500 financial institutions to SMBs.

     

    Pages