There are many reasons why you may choose to run your applications in containers. Given the right use case, containerization can open up a lot of possibilities, making your applications scalable, robust and flexible.
While we won’t get into all the applications of this architecture, suffice it to say that if scalability, agility and speed are your key objectives, containers can prove to be an excellent choice.
In this article we will cover how to quickly build a multi-tier web application using Docker and scale it on demand.
What are we going to build?
We are going to build a front-end web application (website) connected to a backend database server. Our requirements state that we should be able to scale the application frontend to meet the demand without affecting business as usual. So, let’s get started.
Our Application Architecture
To keep things simple, our web application will be a registration form hosted on an Apache web server running inside Docker containers. This web application will be connected to a MySQL database backend running inside another Docker container. We will then demonstrate that the application can scale by adding additional Apache nodes to cater for increased web traffic, without making any configuration changes to the front end or back end.
Skills and difficulty level: Basic
To follow along, you should be comfortable in navigating your way around the Linux operating system. You should be able to install packages, list directories and make changes to config files using a text editor like vim or nano. If you do get stuck, simply hit the comment button below and let me know where you got stuck and I will do my best to help.
How do we achieve this?
Here are the steps we are going to follow:
- Install Docker Engine
- Configure Networking
- Add Persistent Storage
- Create a master template for our web app
- Scale our application
Let’s get started!
1. Install Docker Engine:
We will be using a Linux (Ubuntu) host for this. If you are new to Docker, simply follow the installation instructions on the Docker website; if you are following along, use the Ubuntu instructions. Once you have installed the Docker engine you can verify that it is working by running the hello-world image. An image is simply a pre-configured point-in-time copy of an application that is downloaded (usually from Docker Hub) and executed on your system. Think of it as a virtual machine image, only much smaller.
Here is the command to run and the output you should expect.
sudo docker run hello-world
The line we are interested in is “Hello from Docker! This message shows that your installation appears to be working correctly.” — it confirms that things are working properly.
Now let’s get the Ubuntu image for our application and database nodes. The command to do that is:
sudo docker pull ubuntu
Then check the image was downloaded using the following command:
sudo docker images | grep ubuntu
2. Configure Networking:
There are various ways of configuring networking for Docker containers, each with its merits and restrictions. In order to allow communication between the containers we will create a user-defined bridge network, as default bridge networks have certain restrictions. You can read more about the differences in Docker’s networking documentation.
Now execute the following command on your docker host.
sudo docker network create app-network
This command creates a user-defined network bridge named app-network. You can choose a different name by changing that part of the command.
Run the following command to inspect the subnet details of our new network bridge.
sudo docker network inspect app-network
Replace app-network with the name you chose earlier. From the output, note down the subnet details (172.19.0.0/16 in our case).
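If you only want the subnet, the inspect command also accepts a Go-template format string (a sketch — it pulls the subnet out of the network’s IPAM configuration):

```
# Print just the subnet of the app-network bridge
sudo docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' app-network
```

This prints something like 172.19.0.0/16, which is handy when scripting the later steps.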
Since we want to connect to our application from client machines outside of this subnet, we will need to configure the docker host to forward traffic onto the app-network. In order to do this, we need to make the following changes on the host.
2.1. Enable Forwarding:
Edit /etc/sysctl.conf using your favourite editor and uncomment the line that contains net.ipv4.ip_forward = 1 by removing the # symbol in front of it. If you can’t find the line, simply add it at the end of the file and save it.
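If you prefer to do this from the command line, the edit can be scripted as follows (a sketch — the sed pattern assumes the commented line looks like `#net.ipv4.ip_forward=1`):

```shell
# Uncomment the forwarding setting if it is present but commented out
sudo sed -i 's/^#\s*net\.ipv4\.ip_forward\s*=\s*1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
# Append the setting if the file doesn't contain it at all
grep -q '^net.ipv4.ip_forward' /etc/sysctl.conf || \
  echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
# Apply the change without rebooting
sudo sysctl -p
```

The `sysctl -p` at the end reloads the file so forwarding takes effect immediately.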
2.2 Configure NAT:
To configure our host for NAT, execute the following command on your docker host. Note that iptables’ -o option expects a network interface name, not a Docker network name, so replace eth0 with your host’s outbound interface and 172.19.0.0/16 with your app-network subnet.
sudo iptables -t nat -A POSTROUTING -s 172.19.0.0/16 -o eth0 -j MASQUERADE
2.3 Add a static route (optional):
This step is optional as you may already have the required routes in place. I will be connecting to the application from my Windows laptop (IP 192.168.0.236), which is on a subnet outside of the app-network; my docker host is on the same 192.168.0.0/24 network as my laptop.
I can reach the docker host from my laptop directly, but if I try to reach the application running inside the Docker container instances, which will be connected to the app-network (172.19.0.0/16 subnet), it is obviously not going to work, as my laptop doesn’t know how to get to the 172.19.0.0/16 subnet. Since the docker host is directly connected to both the 192.168.0.0/24 and 172.19.0.0/16 subnets, I am going to add a static route on my laptop that forwards all requests for the 172.19.0.0/16 subnet on to the docker host, which will know how to forward the traffic on to the containers, as we have already configured our host for forwarding and NAT above. With me so far?
Let us add the static route on the Windows laptop.
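From an elevated (Administrator) command prompt on Windows, the route command looks like the following — substitute your own docker host’s IP address for the placeholder, and note that -p makes the route persist across reboots:

```
route -p add 172.19.0.0 mask 255.255.0.0 <docker-host-ip>
```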
Note: The static route needs to be added to the machine you will be accessing your application from (only for testing purposes). Ideally you would configure your gateway to route traffic destined for your app-network subnet (172.19.0.0/16 in our case) onto the docker host or look at overlay networking, but I won’t be covering that here.
Let’s test if things are working by pinging 172.19.0.1 (the docker host’s gateway address on the app-network) from the Windows laptop (IP 192.168.0.236).
3. Add persistent storage:
Containers are stateless, which means any changes you make during the lifetime of the container are gone once the container exits. Simply put, every time you execute an image inside a container it is going to be a new container instance with no recollection of what you did before. This introduces the need for persistent storage, as you want your application data to be available after the container has exited.
Additionally, connecting persistent storage volumes will allow us to house and share our www (website contents) directory across multiple apache nodes, giving us the ability to add new web nodes on the fly.
In this instance we will use a local directory on the Docker host (in reality this could be a highly available block storage device) and present it to our containers.
We start by creating a directory on the docker host. I am using /home/madhulsachdeva/docker/data but you can use any path. Inside this directory, create two sub-directories called www and mysql to house our application data.
(Optional) I suggest you create a directory called test inside each of these two directories for testing.
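The whole layout can be created in one go — the base path below is just my example, so point it anywhere you like on your docker host:

```shell
# Base directory for persistent container data (example path)
BASE=/home/madhulsachdeva/docker/data
# www for the Apache nodes, mysql for the database, plus the optional test markers
mkdir -p "$BASE/www/test" "$BASE/mysql/test"
# Show the resulting tree
ls -R "$BASE"
```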
The idea is that, since we created these test directories on our docker host, once the directories are mapped as volumes inside our containers we should see them automatically as soon as a new container starts, confirming that our storage is persistent.
3.2 Configure our web node with persistent storage attached:
Now that we have added the storage it’s time to have some fun. We will create our first container and configure it as a web node.
The command we will use for this is:
sudo docker run -it -v /home/madhulsachdeva/docker/data/www:/var/www ubuntu:latest /bin/bash
Just make sure that the host path in the command above matches the path where you created the www directory on your docker host.
The above command will create a new container instance running the ubuntu image (we downloaded earlier) with its /var/www directory mapped back to /home/madhulsachdeva/docker/data/www (or the path you chose) on the docker host. If everything worked correctly, you should now be logged into the container as the root user. If you then list the contents of /var/www you should see the test directory you created earlier (ignore the html directory if one is present).
4. Create a master image for our app
Great! so far so good. Now you can configure the node as you please. Since the image we downloaded is a minimalistic version of ubuntu, to keep the download size small and quick, it doesn’t come with any tools preinstalled.
I installed the following tools: inetutils-ping, net-tools, nano, openssh-server, apache2 and php-mysql. Here are the commands (make sure you are running them inside the container, and refresh the package lists first — the installs will fail on a fresh image otherwise):
apt-get update
apt-get install inetutils-ping
apt-get install net-tools
apt-get install nano
apt-get install openssh-server
You may need to provide some input about your location during the ssh install.
apt-get install apache2 php-mysql
You should now go ahead and start apache and ssh using the following commands (no sudo needed — you are root inside the container). Later we will change it so these services start automatically when a container is created from this image.
service apache2 start
service ssh start
Now that our node is up, let’s attach it to the app-network. This will give us an IP address on the 172.19.0.0/16 subnet. In order to do that we need to get the id of our container. This should be immediately visible on your screen as part of the root prompt: root@a0fc86cb67b9
Alternatively, you can retrieve it from the docker host. The simplest option is to open a new terminal window and type in the following command:
sudo docker ps -a
Once we have the id information, we can attach our container to the app-network on the fly by executing the following command on the docker host.
sudo docker network connect app-network a0fc86cb67b9
Here app-network is the name of your network bridge and a0fc86cb67b9 is the id of our container from above.
Our container is now connected to the app-network. Let’s find out its IP address by running ifconfig in the container window with the root@a0fc86cb67b9 prompt.
With that we should now be able to ping/connect to it from our windows laptop.
!IMPORTANT: If you ever need to detach from your container use the following key combination: Ctrl+p followed by Ctrl+q. This ensures that you do not exit (i.e. stop) the container and your application continues to run. Pressing Ctrl+c will generally exit the container.
Ok, now that we have the web node configured to our liking, I am going to place some web content inside the /var/www/html folder. This can be done via SSH to the container or by simply placing the files inside the persistent volume on our docker host. I chose to do the latter and placed the app content files into /home/madhulsachdeva/docker/data/www/html on the docker host.
We should now be able to access our website. Let’s give it a go.
Let’s configure the apache and ssh services to start as soon as the container is created. To do that, add the following commands at the bottom of the /etc/bash.bashrc file inside the container and save it.
service ssh start
service apache2 start
We will now save this container state as the master web node image in order to scale up our application by adding additional nodes on the fly. Please use the following command on the docker host to save your custom image.
sudo docker commit a0fc86cb67b9 web-node-master-image
Here a0fc86cb67b9 is the id of our current container and web-node-master-image is the name you would like to give your new master image.
You need to repeat the process for your backend MySQL server; I have already configured the backend MySQL server using exactly the same process as outlined above. The only difference is that the persistent storage /home/madhulsachdeva/docker/data/mysql is mapped to /var/lib/mysql and I have installed MySQL server and configured it to accept connections from 172.19.0.0/16 in the /etc/mysql/my.cnf file.
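For reference, the MySQL change amounts to a bind-address entry like the one below — treat this as a sketch, since the exact file varies by MySQL version (on newer releases it lives under /etc/mysql/mysql.conf.d/ rather than directly in my.cnf):

```
[mysqld]
# Listen beyond localhost so containers on the app-network can reach MySQL;
# restrict actual access with MySQL user grants limited to 172.19.% hosts
bind-address = 0.0.0.0
```

Note that bind-address controls which interfaces MySQL listens on; limiting connections to the 172.19.0.0/16 subnet itself is done through the host part of the MySQL user accounts you grant.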
5. Let’s scale it up
As you have put in all the work already, now it’s time to scale your application by adding additional nodes on the fly.
You can do this with a single command.
sudo docker run -it -v /home/madhulsachdeva/docker/data/www/html:/var/www/html --net app-network --name web-node2 web-node-master-image
Here /home/madhulsachdeva/docker/data/www/html:/var/www/html is your persistent storage mapping, app-network is the name of your user-defined bridge from above, web-node2 is the new container alias for easier identification, and web-node-master-image is the name of the master image you created in the previous step.
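If you want several additional nodes at once, the same command can be wrapped in a loop — a sketch, using -dit so each container’s shell (and with it the services started from bash.bashrc) stays alive in the background:

```
for i in 3 4 5; do
  sudo docker run -dit \
    -v /home/madhulsachdeva/docker/data/www/html:/var/www/html \
    --net app-network --name web-node$i \
    web-node-master-image
done
```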
We have successfully added another web node in a matter of seconds: the web application is now also served from a separate container instance (IP address 172.19.0.4 in my case) without affecting the primary node. Throw in a load-balancer and you have a pretty scalable solution.
With the above scenario, I have been able to demonstrate that containers can bring much-needed flexibility and scalability to your application architecture. Given the right use case, containers can dramatically reduce scaling headaches and maximise your return on investment.