Simple helper scripts for managing lxc on Ubuntu

The scripts are available on GitHub. Only tested on Ubuntu 12.04 LTS.

By default (for my dev env) each container gets its own separately mounted /opt (under the host's /opt/lxc/), /mnt/dbext4 (under /mnt/dbext4/lxc/) and /mnt/dbxfs (under /mnt/dbxfs/lxc/). Containers are named 'name-seqno'. Paths and defaults can be changed by editing etc/config. There are no shared disks between the containers. Install:
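The naming and per-container mount conventions above can be illustrated with a small helper; `container_paths` is a hypothetical function (not part of the scripts) that just prints the three paths a given container would get:

```shell
# Hypothetical helper illustrating the conventions above:
# container "name-seqno" gets its own subdirectory under each host path.
container_paths () {
    local c="$1-$2"   # e.g. "web-1"
    echo "/opt/lxc/$c"
    echo "/mnt/dbext4/lxc/$c"
    echo "/mnt/dbxfs/lxc/$c"
}

container_paths web 1
```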


Add MongoDB repo for apt and/or yum (bash)


[ "$(whoami)" != "root" ] && echo "Do: sudo $(basename "$0")" && exit 1

regex_lsb="Description:[[:space:]]*([^ ]*)"
# Assumed pattern: extract the distro name from e.g. /etc/redhat-release
regex_etc="/etc/(.*)[-_](release|version)"

do_lsb () {
    local lsb
    lsb=$(lsb_release -d)
    [[ $lsb =~ $regex_lsb ]] && dist=${BASH_REMATCH[1]} && return 0
    return 1
}

do_release_file () {
    local etc_files file
    etc_files=$(ls /etc/*[-_]{release,version} 2>/dev/null)
    for file in $etc_files ; do
        [[ $file =~ $regex_etc ]] && dist=${BASH_REMATCH[1]} && break
    done
}

if command -v lsb_release >/dev/null 2>&1 ; then
    do_lsb || do_release_file
else
    do_release_file
fi
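With the distribution detected, the repository itself can be added. A sketch for the 2012-era 10gen packages (repository URLs and the apt key ID are taken from the MongoDB documentation of that time; verify against current docs before use):

```shell
# Ubuntu/Debian: add the 10gen (MongoDB) apt repository
apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
echo "deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" \
    > /etc/apt/sources.list.d/10gen.list
apt-get update && apt-get install -y mongodb-10gen

# RHEL/CentOS: add the 10gen yum repository
cat > /etc/yum.repos.d/10gen.repo <<'EOF'
[10gen]
name=10gen Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
EOF
yum install -y mongo-10gen mongo-10gen-server
```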


Unison OS X launchd plist

pid=`pgrep unison`

[ -z "$pid" ] && "$HOME/bin/unison" $pref -auto -batch


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "">
<plist version="1.0">
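The plist excerpt above is truncated. A minimal complete version that runs the sync script periodically might look like this (the label, script path, and 300-second interval are assumptions to adapt):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.unison.sync</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/me/bin/unison-sync.sh</string>
    </array>
    <key>StartInterval</key>
    <integer>300</integer>
</dict>
</plist>
```

Place it in ~/Library/LaunchAgents/ and load it with launchctl load.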


Build unison 2.40.65 (ocaml 4.0) to work between Ubuntu 12.04 and OS X Mountain Lion

I could not find any pre-built binaries (MacPorts) that worked properly out of the box between my laptop and the Ubuntu server, so build your own.

NOTE: This will break unison with hosts that use unison 2.40.65 built with ocaml 3.x!
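Building from source is short on both platforms; a sketch, assuming OCaml 4.x is installed and you only need the text UI (UISTYLE=text avoids the GTK dependency):

```shell
# Same unison version and OCaml major version on BOTH machines,
# or the wire protocol hashes will not match.
tar xzf unison-2.40.65.tar.gz
cd unison-2.40.65
make UISTYLE=text
cp unison "$HOME/bin/"
```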


LVM Add Disks Cheat Sheet

$ fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xec3e2922.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.
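After partitioning, the LVM steps themselves are short. A sketch with assumed names (vg0, lv0, and the /mnt/dbext4 mount point; adapt to your layout):

```shell
pvcreate /dev/sdc1               # initialize the new partition for LVM
vgextend vg0 /dev/sdc1           # add it to an existing volume group
# or, for a new volume group: vgcreate vg0 /dev/sdc1
lvcreate -n lv0 -l 100%FREE vg0  # carve out a logical volume using all free space
mkfs.ext4 /dev/vg0/lv0
mount /dev/vg0/lv0 /mnt/dbext4
```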


Galera Cluster for MySQL with Amazon Virtual Private Cloud [4]

Previous: Create a VPC with Public and Private subnets, Galera Configurator and deploying Galera Cluster, Setup Load Balancers and Web Servers


sysbench is a popular system performance benchmark tool with a database OLTP module that is widely used among MySQL users. We are going to use it to create the default OLTP schema and run a light test.

The purpose is not really to benchmark our newly deployed Galera cluster, but to get some test data that we can pull from the web servers we set up previously.

Prepare sysbench schema and table

Log on to your ClusterControl instance
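From there, preparing the schema and running a light test can be sketched with sysbench 0.4-style flags; the host (pointing at the internal load balancer) and credentials below are assumptions, not values from the post:

```shell
# Create the default OLTP schema and test table (hypothetical host/credentials)
sysbench --test=oltp --mysql-host=10.0.1.100 --mysql-user=sbtest \
         --mysql-password=secret --mysql-db=test \
         --oltp-table-size=100000 prepare

# Run a light read/write test against it
sysbench --test=oltp --mysql-host=10.0.1.100 --mysql-user=sbtest \
         --mysql-password=secret --mysql-db=test \
         --oltp-table-size=100000 --num-threads=4 --max-requests=10000 run
```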


Galera Cluster for MySQL with Amazon Virtual Private Cloud [3]

Previous: Create a VPC with Public and Private subnets, Galera Configurator and deploying Galera Cluster
Next: Test your AWS VPC Galera Cluster

Setup Internal and External Load Balancers

Next we are going to add two AWS load balancers: one internal for our database nodes, and one external for our clients, load balancing requests across our HTTP servers.

You can opt to install HAProxy instead; however, you would then need to handle failover for it yourself.


Using Amazon’s Elastic Load Balancers (ELB) we don’t need to worry about that; on the other hand, ELBs are less configurable than HAProxy and lack some of its features.


Install the Galera http health check scripts


First, on each Galera instance we are going to install an HTTP health check script that uses the Galera node’s state to determine whether the node should be classified as up or down.
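Such a health check is commonly implemented as a small script served via xinetd. This is a sketch, not the exact script from the post; the credentials are assumptions, and the check relies on the Galera convention that wsrep_local_state value 4 means Synced:

```shell
#!/bin/bash
# Galera HTTP health check sketch, intended to be served via xinetd.
# CHECK_USER/CHECK_PASS are hypothetical; create a matching MySQL user.
CHECK_USER="clustercheck"
CHECK_PASS="secret"

# wsrep_local_state: 4 == Synced (node is safe to receive traffic)
state=$(mysql -u"$CHECK_USER" -p"$CHECK_PASS" -N -s \
        -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>/dev/null | awk '{print $2}')

if [ "$state" = "4" ]; then
    printf "HTTP/1.1 200 OK\r\n\r\nGalera node is synced.\r\n"
else
    printf "HTTP/1.1 503 Service Unavailable\r\n\r\nGalera node is not synced.\r\n"
fi
```

The ELB health check is then pointed at the port xinetd listens on, so traffic is only routed to synced nodes.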


Galera Cluster for MySQL with Amazon Virtual Private Cloud [2]

Previous: Create a VPC with Public and Private subnets
Next: Setup Load Balancers and Web Servers, Test your AWS VPC Galera Cluster

Galera Configurator

Now that we have the EC2 instances prepared, it’s time to run the Severalnines Galera Configurator and generate the deployment scripts to be run from the ClusterControl instance.

Go to and create a Galera deployment package. The wizard should be pretty self-explanatory.

Select Amazon EC2 as the cloud provider and the OS used for your instances.


Galera Cluster for MySQL with Amazon Virtual Private Cloud

Next: Galera Configurator and deploying Galera Cluster, Setup Load Balancers and Web Servers, Test your AWS VPC Galera Cluster

In the following posts we’ll deploy a multi-master synchronous MySQL Galera Cluster with Amazon’s VPC service. We’re going to create a public-facing subnet for app/web servers and a private subnet for our database cluster.

The deployment will look similar to the diagram below.



Amazon’s VPC provides a secure environment where you can choose to isolate parts of your servers, giving you complete control over how to deploy your virtual networking infrastructure, much like in your own datacenter.


The steps that we’ll go through are as follows:


  1. Create a VPC with Public and Private subnets
  2. Define Security Groups (add rules later)
  3. Launch one instance for ClusterControl
  4. Launch three EBS optimized instances for the Galera/database nodes
  5. Format and mount an EBS volume (or RAID set) for each Galera node
  6. Create a Galera Configuration with Severalnines Galera Configurator 
  7. Deploy and bootstrap Galera Cluster
  8. Add an internal load balancer
  9. Add a MySQL user for the internal load balancer
  10. Add web server instances
  11. Add an external load balancer
  12. Test the VPC database cluster setup

At the end we will have the following instances available on the public subnet (note: your IP addresses will differ):


  • 1 Elastic External Load Balancer, Elastic IP
  • 2 Web servers: IP, Elastic IP and IP, Elastic IP
  • 1 ClusterControl server: IP,  Elastic IP

and on the private subnet:


  • 1 Elastic Internal Load Balancer, IP
  • Galera Node 1, IP
  • Galera Node 2, IP
  • Galera Node 3, IP

In this example we only deploy one private subnet. If you require a more fault-tolerant setup, you can for example create two private subnets for the database cluster, one in each Availability Zone (AZ), which can protect you from single-location failures within a single Amazon region.


There are a number of issues that need to be handled properly with a Galera cluster spanning two regions and/or AZs (which is, in practice, two data centers). This will be addressed in a future post.



