Docker???!! Images???!!!

Hello Readers! Now that you have opened my post, it probably means you are interested in Docker! Well, I am fascinated by it!! Thank you!

Allow me to give a little background about myself. In our POC lab, I was not the first to start with Docker; there was already a small team of 4-5 members who had begun learning Docker and writing whitepapers on it.

I used to hear them say “push image,” “pull image.” At first I thought, “Yeah! Must be cool stuff, working with somebody’s image.” LOL. Then came my turn to read, and I actually realized that images are nothing but code. (“Coding again!?” I thought to myself.)

I started self-studying Docker and it became my pet project. I will try to put my learnings into words, hoping they benefit everyone. This is my first attempt at writing a blog, so please bear with me.

So, Docker! If I have to explain it in my own words, here it goes… (non-technically): a standard container that can be loaded with virtually any goods and stays sealed until it reaches final delivery. In between, the goods can be loaded, unloaded, stacked and transported efficiently over long distances.

The best way to describe Docker is to use the phrase from the Docker web site—Docker is “an open source project to pack, ship and run any application as a lightweight container.” The idea is to provide a comprehensive abstraction layer that allows developers to “containerize” or “package” any application and have it run on any infrastructure. The use of “container” here refers more to the consistent, standard packaging of applications than to any underlying technology.

“Docker” can refer to a few things:

Docker client: this is what runs on our machine. It’s the docker binary that we interface with whenever we open a terminal and type $ docker pull or $ docker run. It connects to the Docker daemon, which does all the heavy lifting, either on the same host (in the case of Linux) or remotely (in our case, interacting with our VirtualBox VM).
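To give a quick taste, here is what a typical client session looks like (these commands need a running Docker daemon; the remote address is a made-up example):

```
# Ask the daemon to download the official "hello-world" image from Docker Hub
$ docker pull hello-world

# Create and start a container from that image; it prints a message and exits
$ docker run hello-world

# The client can also talk to a remote daemon, e.g. one inside a VirtualBox VM
$ docker -H tcp://192.168.99.100:2376 info
```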

Docker daemon: this is what does the heavy lifting of building, running, and distributing your Docker containers.

Docker Images: Docker images are the blueprints for our applications. Keeping with the container/Lego brick analogy, they are the blueprints from which real instances are actually built. An image can be a bare OS like Ubuntu, but it can also be Ubuntu with your web application and all its necessary packages installed.
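Images are described by a Dockerfile. A minimal sketch of the “Ubuntu plus your web application” case (the paths, app file and dependencies here are hypothetical) looks like this:

```Dockerfile
# Start from the official Ubuntu base image (an OS-level image)
FROM ubuntu:16.04

# Bake the web app and its dependencies into the image
RUN apt-get update && apt-get install -y python3 python3-pip
COPY . /app
RUN pip3 install -r /app/requirements.txt

# Tell containers built from this image what to run by default
CMD ["python3", "/app/app.py"]
```

Running `docker build -t myapp .` in the folder containing this file produces the image.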

Docker Container: containers are created from Docker images; they are the real, running instances of those blueprints. They can be started, run, stopped, deleted and moved.
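The lifecycle maps directly onto client commands (illustrative; requires a Docker daemon, and the container name is arbitrary):

```
# Create and start a container named "web" from the ubuntu image
$ docker run -d --name web ubuntu:16.04 sleep infinity

# Stop it, start it again, and finally delete the instance
$ docker stop web
$ docker start web
$ docker rm -f web
```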

Docker Hub (Registry): a Docker registry is a hosted server that holds Docker images. Docker (the company) offers a public registry called Docker Hub, which we’ll use in this tutorial, but the whole system is also available open source for people to run on their own servers and store images privately.
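The push/pull workflow I kept hearing about looks like this in practice (the account and image names below are hypothetical, and a daemon plus a Docker Hub login are assumed):

```
# Tag a local image with your Docker Hub username, then push it
$ docker tag myapp:latest myuser/myapp:1.0
$ docker push myuser/myapp:1.0

# Anyone (with access, if the repository is private) can now pull it
$ docker pull myuser/myapp:1.0
```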

Docker’s Key Benefits

Docker provides lightweight virtualization with almost zero overhead, and this delivers some impactful advantages.

Primarily, you can benefit from an extra layer of abstraction offered by Docker without having to worry about the overhead. The next significant advantage is that you can have many more containers running on a single machine than you can with virtualization alone.

Another powerful impact is that containers can be brought up and torn down within seconds. The Docker FAQ has a good overview of what Docker adds to traditional containers.

The feature that really sets Docker apart, in my opinion, is the layered file system and the ability to apply version control to entire containers. The benefits of being able to track, revert and view changes are well understood and, overall, highly desirable and widely used in software development. Docker extends that same idea to a higher construct: the entire application, with all its dependencies, in a single environment. This is unprecedented.

Docker’s Key use cases

Here are just some of the use cases in which Docker’s enabling technology provides a consistent environment at low overhead.

1. Simplifying Configuration

The primary use case Docker advocates is simplifying configuration. One of the big advantages of VMs is the ability to run any platform with its own config on top of your infrastructure.

Docker provides this same capability without the overhead of a virtual machine. It lets you put your environment and configuration into code and deploy it. The same Docker configuration can also be used in a variety of environments. This decouples infrastructure requirements from the application environment.
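To make the “environment and configuration as code” idea concrete, here is a sketch (the image name, variable and hostnames are all hypothetical) of the same image being run in two environments, with only the injected configuration differing:

```
# The same image runs unchanged in every environment;
# only the configuration passed in at run time differs
$ docker run -e DB_HOST=db.staging.internal myuser/myapp:1.0
$ docker run -e DB_HOST=db.prod.internal    myuser/myapp:1.0
```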

The freedom to run your applications across multiple IaaS/PaaS without any extra tweaks is the ultimate dream that Docker can help you achieve.

Today, every IaaS/PaaS provider from Amazon to Google supports Docker.

2. Code Pipeline Management

The previous use case makes a large impact in managing the code pipeline. As the code travels from the developer’s machine to production, there are many different environments it has to go through to get there. Each of these may have minor differences along the way.

Docker provides a consistent environment for the application from dev through production, easing the code development and deployment pipeline.

The immutable nature of Docker images, and the ease with which they can be spun up, help you achieve zero change in application runtime environments across dev through production.
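As a sketch of that pipeline (image name and tag are hypothetical), the image is built once and the same immutable artifact is promoted through every stage:

```
# Build once on the CI server, tagging the image with the commit hash
$ docker build -t myuser/myapp:3f4e2a1 .

# The identical, immutable image then moves through each environment
$ docker push myuser/myapp:3f4e2a1      # from CI to the registry
$ docker pull myuser/myapp:3f4e2a1      # on staging, then on production
```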

3. Developer Productivity

In a developer environment, we have two goals that are at odds with each other:

  1. We want it to be as close as possible to production; and
  2. We want the development environment to be as fast as possible for interactive use.

Ideally, to achieve the first goal, we would run every service in its own VM to reflect how the production application runs. At the same time, we don’t want to require an Internet connection and add the overhead of working remotely every time a compilation is needed.

This is where Docker’s low overhead comes in handy. A development machine usually has limited memory, and because Docker avoids the per-VM memory footprint, it easily allows a few dozen services to run.

To achieve the second goal, a fast feedback loop, we use Docker’s shared volumes to make the application code available to the container(s) from the container host OS, which is a VirtualBox VM (typically, a Vagrant box). The application source code is made available to the Vagrant box using Vagrant’s synced folders with the host OS (Windows, Mac or Linux).
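A minimal sketch of that wiring, run from inside the Vagrant box (the paths, port and image name are hypothetical):

```
# Mount the synced source folder from the Vagrant box into the container;
# edits made on the host appear inside the container immediately
$ docker run -d -p 5000:5000 -v /vagrant/src:/app myuser/myapp:dev
```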

This approach has multiple benefits. Developers can edit the source code on their platform of choice (Windows, Mac or Linux) and see the changes right away, because the applications run from that same source code, with the runtime environment set up inside the Vagrant box using Docker container(s).

Moreover, this approach lets a front-end engineer who is not deep into the back-end nitty-gritty easily use the full application setup and work on his or her area of interest without setup or installation blues getting in the way. It also provides an optional opportunity to explore how the back-end systems work under the hood and gain a better understanding of the full stack.

4. App Isolation

There may be many reasons for which you end up running multiple applications on the same machine. An example of this is the developer productivity flow described earlier. But there are other cases, too.

A couple of such cases to consider are server consolidation for decreasing cost or a gradual plan to separate a monolithic application into decoupled pieces.

Let’s say, for example, you need to run two REST API servers, both of which use Flask, but each uses a slightly different version of Flask and other such dependencies. Running these API servers in different containers provides an easy way out of what we call “dependency hell.”
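A sketch of that escape route (the project layout, image names, ports and pinned versions are all hypothetical): each server is built from its own Dockerfile with its own pinned dependencies, and both run side by side on one machine.

```
# Each API server is built against its own pinned Flask version
$ docker build -t api-one -f api-one/Dockerfile api-one   # e.g. Flask 0.10
$ docker build -t api-two -f api-two/Dockerfile api-two   # e.g. Flask 0.12

# Both run on the same host, dependencies fully isolated from each other
$ docker run -d -p 5001:5000 api-one
$ docker run -d -p 5002:5000 api-two
```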

5. Server Consolidation

Just like using VMs to consolidate multiple applications, Docker’s application isolation allows consolidating multiple servers to save on cost. But without the memory footprint of multiple OSes, and with the ability to share unused memory across instances, Docker provides far denser server consolidation than you can get with VMs.

6. Debugging Capabilities

Docker provides many tools that are not necessarily specific to containers but work well with the concept, and they provide extremely useful functionality. This includes the ability to checkpoint containers and container versions, as well as to diff two containers. This can be immensely useful in fixing an application.
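These capabilities correspond to real docker subcommands; for a (hypothetically named) container called `web`, a debugging session might use:

```
# List the container's filesystem changes relative to its image
$ docker diff web

# Checkpoint the container's current state as a new image version
$ docker commit web myuser/web:debug-snapshot

# Inspect the layer-by-layer history of that image
$ docker history myuser/web:debug-snapshot
```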

7. Multi-tenancy

Yet another interesting use case of Docker is its use in multi-tenant applications, thereby avoiding major application rewrites.

Using Docker, it was easy and inexpensive to create isolated environments for running multiple instances of each app tier per tenant. This was possible given the spin-up speed of Docker environments and Docker’s easy-to-use API, which lets us spin up containers programmatically. We used docker-py, a Python library for interacting with the Docker daemon, through a web application interface.
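As a rough sketch of the per-tenant spin-up (using the current Docker SDK for Python API rather than the exact calls we used back then; the image name and environment variable are hypothetical, and a running daemon is assumed):

```python
import docker

client = docker.from_env()  # connects to the local Docker daemon

def start_tenant(tenant_id):
    # Each tenant gets an isolated container running the same app image
    return client.containers.run(
        "myuser/myapp:1.0",
        name="app-tenant-%s" % tenant_id,
        environment={"TENANT_ID": tenant_id},
        detach=True,
    )
```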

8. Rapid Deployment

Before VMs, bringing up a new hardware resource took days. Virtualization brought this down to minutes. Docker, by creating just a container for the process rather than booting up an OS, brings it down to seconds. This is the enabling technology that has brought companies like Google and Facebook to containers.
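You can see this yourself with a throwaway container (illustrative; requires a Docker daemon, and the first run also pulls the small alpine image):

```
# A VM must boot a full OS; a container only starts a process.
# Timing the bring-up of a disposable container shows it takes seconds or less:
$ time docker run --rm alpine true
```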

Essentially, you can create and destroy resources in your data centre without worrying about the cost of bringing them up again. With typical data centre utilization at 30%, it is easy to bump that number up with a more aggressive allocation of resources, and it is precisely the low cost of bringing up a new instance that makes such aggressive allocation practical.

Why Developers Should Care

Build once… (Finally) run anywhere

  • A clean, safe, hygienic, portable runtime environment for your app.
  • No worries about missing dependencies, packages and other pain points during subsequent deployments.
  • Run each app in its own isolated container, so you can run various versions of libraries and other dependencies for each app without worrying.
  • Automate testing, integration, packaging…anything you can script.
  • Reduce/eliminate concerns about compatibility on different platforms, either your own or your customers’.
  • Cheap, zero-penalty containers to deploy services. A VM without the overhead of a VM. Instant replay and reset of image snapshots.

Why Administrators Should Care

Configure once… run anything

  • Make the entire lifecycle more efficient, consistent, and repeatable
  • Increase the quality of code produced by developers.
  • Eliminate inconsistencies between development, test, production, and customer environments.
  • Support segregation of duties.
  • Significantly improve the speed and reliability of continuous deployment and continuous integration systems.
  • Because the containers are so lightweight, address significant performance, costs, deployment, and portability issues normally associated with VMs.

That was all about what Docker is and why it is so popular. In the next post, we will look at VMs vs. containers.

Till then, keep reading, keep learning…

Don’t forget to share your opinions/suggestions. Thank You!


