In this blog, we are going to learn about Docker in DevOps. Docker is quite the buzz in the DevOps community these days. It is an open platform for distributed applications that helps developers and sysadmins deploy applications inside software containers.
As organizations embrace the DevOps philosophy and break down the traditional barriers that exist between Development and Ops teams, Docker provides some of the key tools to go DevOps and improve the application development process.
What is DevOps?
DevOps is a collaborative way of developing and deploying software. It is a software development method that stresses communication, collaboration, and integration between software developers and information technology (IT) operations professionals.
Principles of DevOps
1. Systems thinking: This principle emphasizes the performance of the entire system, as opposed to the performance of a specific silo of work or department, whether as large as a division (e.g., Development or IT Operations) or as small as an individual contributor (e.g., a developer or system administrator).
2. Amplify feedback loops: This principle is about creating right-to-left feedback loops. The goal of almost any process improvement initiative is to shorten and amplify feedback loops so that necessary corrections can be made continually.
3. Culture of continual experimentation and learning: This is about creating a culture that fosters two things: continual experimentation, which requires taking risks and learning from both success and failure; and the understanding that repetition and practice are the prerequisites to mastery.
What is Docker?
Docker is “an open source project to pack, ship, and run any application as a lightweight container.” Docker containers wrap up a piece of software in a complete file system that contains everything it needs to run: code, runtime, system tools, and system libraries. It allows packaging an application with all of its dependencies into a standardized unit for software development.
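As a concrete sketch, that “standardized unit” is described by a Dockerfile, which declares everything the container needs. The base image and application shown here are hypothetical examples for a small Python app, not prescriptions:

```dockerfile
# Hypothetical example: package a small Python web app with its dependencies.
FROM python:3.11-slim
WORKDIR /app

# System libraries and the runtime come from the base image above;
# application dependencies are installed into the image itself.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code is the final layer.
COPY . .
CMD ["python", "app.py"]
```

Everything listed in the file (runtime, tools, libraries, code) travels together as one image.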
Benefits of Docker
Docker is good at building and sharing images through various channels (cloud registries, secure copy, etc.). It provides an image distribution model for server templates built with configuration managers (such as Chef, Puppet, and SaltStack). It has a central repository of disk images (public and private) that lets us easily run different operating systems (Ubuntu, CentOS, Fedora, even Gentoo). Docker containers contain only what is necessary to build, ship, and run applications; unlike virtualization technology (VMs), containers need no guest OS or hypervisor.
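For example, base images for different operating systems can be pulled from the central repository (Docker Hub), and locally built images pushed back to a registry of your own. The registry hostname and image name below are hypothetical:

```shell
# Pull public base images for different distributions from Docker Hub
docker pull ubuntu:22.04
docker pull fedora:latest

# Tag a locally built image and push it to a (hypothetical) private registry
docker tag myapp:1.0 registry.example.com/team/myapp:1.0
docker push registry.example.com/team/myapp:1.0
```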
How can Docker help in DevOps?
Developers can package up all of the runtimes and libraries necessary to develop, test, and execute an application in an efficient, standardized way using Docker. It lets teams express the environment and configuration as code and deploy it. The same Docker configuration can also be used in a variety of environments.
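For instance, one image can be built once and then run unchanged in several environments, with only runtime configuration varying. The image name and environment variable here are hypothetical:

```shell
# One image, built once from the environment-as-code (the Dockerfile)...
docker build -t myapp:1.0 .

# ...then run in different environments, with configuration injected at start time
docker run -d -e APP_ENV=staging    myapp:1.0
docker run -d -e APP_ENV=production myapp:1.0
```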
Docker is both a great software project (Docker engine) and a vibrant community (Docker Hub). Docker combines a portable, lightweight application runtime and packaging tool with a cloud service for sharing images. Docker delivers agility and control for Development and IT Operations teams to build, ship, and run any app, anywhere.
Agility: Docker gives developers the freedom to define environments and to create and deploy apps faster and more easily, and gives IT ops the flexibility to respond quickly to change.
Control: Docker enables developers to own all the code from infrastructure to application, and gives IT ops the manageability to standardize, secure, and scale the operating environment.
Let’s see how Docker supports the three principles of DevOps.
Docker and the first way
The “First Way” and Docker can provide global optimization around software velocity, variation, and visualization. By Dockerizing the development pipeline, organizations can reduce the cost and risk of software delivery while increasing the rate of change.
Developer flow – Developers who use Docker typically create a Docker environment on their laptop to develop and test applications locally in containers. The end result is that developers spend less time context-switching between testing and retesting, which significantly increases velocity.
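A typical laptop iteration loop might look like the following; the image name and port are hypothetical:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t myapp:dev .

# Run it locally, mapping the container port to the laptop
docker run -d -p 8080:8080 --name myapp-dev myapp:dev

# Iterate: inspect output, tear down, rebuild, rerun
docker logs myapp-dev
docker rm -f myapp-dev
```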
Integration flow – Docker can streamline a continuous integration (CI) pipeline through the use of Dockerized build slaves. A CI system can be designed so that multiple virtual instances each run as individual Docker hosts acting as build slaves. Some environments run a Docker host inside of a Docker host (Docker-in-Docker) for their build environments. Docker also increases velocity for CI pipelines through union file systems and copy-on-write (CoW). Docker images are created using a layered file system approach, where typically only the current (top) layer is writable (CoW). Advanced use of baselining and rebasing between these layers can further reduce the lead time for getting software through the pipeline.
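The layered, copy-on-write model is also why instruction order in a Dockerfile matters for CI speed: layers that change rarely should come first, so they are served from cache on most builds. A minimal sketch, assuming a Node.js project (the file names are the npm conventions, the project itself is hypothetical):

```dockerfile
FROM node:20-slim
WORKDIR /app

# Dependency layers change rarely, so they stay cached across most CI builds
COPY package.json package-lock.json ./
RUN npm ci

# Source code changes on every commit; only these top layers are rebuilt
COPY . .
RUN npm run build
```

A commit that touches only source code reuses the cached dependency layers and rebuilds just the final two.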
Deployment flow – To achieve increased velocity for continuous delivery (CD) of software, there are a number of techniques that Docker can improve. A popular CD process called “Blue-Green deployment” is often used to seamlessly migrate applications into production. One of the challenges of production deployments is ensuring seamless and timely changeovers (moving from one version to another). In a Blue-Green deploy, one node of a cluster is updated at a time (the green node) while the other nodes are left untouched (the blue nodes). This technique requires a rolling process where one node is updated and tested at a time. The two key takeaways are: 1) the total time to update all the nodes needs to be short, and 2) if the cluster needs to be rolled back, this also has to happen quickly. Docker containers make the roll-forward and roll-back processes more efficient.
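The switchover logic at the heart of a Blue-Green deploy can be sketched in shell. The docker commands that would act on each side are shown only as comments, and the image tag is hypothetical:

```shell
# Decide which side of the cluster receives the new release.
next_color() {
  if [ "$1" = "blue" ]; then echo "green"; else echo "blue"; fi
}

live="blue"                    # side currently serving traffic
target=$(next_color "$live")   # side to update and test

# docker run -d --name "app-$target" myapp:v2   (hypothetical new release)
# ...smoke-test "app-$target", then repoint the load balancer to it...
# Rolling back is just pointing traffic at "app-$live" again.
echo "deploying to $target; rollback side is $live"
```

Because the old containers are untouched until traffic moves, both roll-forward and roll-back reduce to a traffic switch.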
A key benefit of using Docker images in a software delivery pipeline is that both the infrastructure and the application can be included in the container image. One of the core tenets of Java was the promise of “write once, run anywhere”. However, since the Java artifact (typically a JAR, WAR, or EAR) included only the application, there was always a wide range of variation depending on the Java runtime and the specific operating environment of the deployment target. With Docker, a developer can bundle the actual infrastructure (i.e., the base OS, middleware, runtime, and the application) in the same image. This converged isolation lowers the potential variation at every stage of the delivery pipeline (development, integration, and production deployment). If a developer tests a set of Docker images as a service on a laptop, those same services can be exactly the same during integration testing and production deployment.
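For the Java case, this means the runtime and middleware travel with the artifact instead of varying per environment. A sketch, assuming a WAR deployed on the official Tomcat image (the specific tag and WAR name are assumptions):

```dockerfile
# Base image supplies the OS, the JRE, and the Tomcat middleware
FROM tomcat:9.0

# The application artifact is the only thing layered on top
COPY target/myapp.war /usr/local/tomcat/webapps/ROOT.war
```

The resulting image, not just the WAR, is what moves from laptop to integration to production.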
A new model of disruption in our industry is containerized microservices. In a microservices architecture, “services” are defined as bounded contexts: services that model real-world domains. There might be a service domain called finance or warehouse in a microservices architecture. When these bounded services are packaged as Docker containers and used as part of the delivery pipeline, they are immediately visible as real-world domains.
Docker and the second way
Defects interrupt the flow, and so does the changeover time associated with each defect. To be effective at the “Second Way”, DevOps teams need to have velocity in both directions (i.e., the complete feedback loop). How fast can the changeover occur? How adaptive is the process, not only for quick defect detection, but for how fast the system can be re-implemented and rebased to the original baseline conditions?
Docker’s streamlining of packaging, provisioning, and immutable delivery of artifacts allows an organization to take advantage of shortened changeover times when defects occur, and makes it easier to stop the line if a defect is detected.
The cost of a defect also depends on the complexity of the infrastructure in which it is detected. Docker delivery and the use of immutable artifacts throughout the pipeline reduce variation and, therefore, reduce the risk of defect variants appearing later in the delivery pipeline.
One of the advantages of an immutable delivery process is that most of the artifacts are delivered throughout the pipeline as binaries. Additional metadata about a software artifact can also be embedded in the Docker image.
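For example, build metadata can be baked into the immutable image with LABEL instructions and read back anywhere in the pipeline with docker inspect. The label keys and values below are hypothetical:

```dockerfile
FROM alpine:3.19

# Embed artifact metadata directly in the immutable image
LABEL org.example.git-commit="abc1234" \
      org.example.build-date="2024-01-15" \
      org.example.pipeline-stage="integration"
```

Later, a stage can query it, e.g. `docker inspect --format '{{ index .Config.Labels "org.example.git-commit" }}' <image>`, so the binary carries its own provenance.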
Docker and the third way
Consider a data-analysis example. In one organization, prior to implementing a Docker-based “Container as a Service” solution, it was extremely hard for data scientists to match an analysis tool to the data. Some tool-and-data combinations perform well with a tool like Hadoop, other data sets are better suited to a tool like Spark, and still others work just fine with something like R. The point is that there are a lot of tools out there for data analysis, with new ones added every other day.
The organization created a sandbox environment of prebuilt container images (i.e., Docker images) that encapsulate all of the ingredients required to run each data-analysis tool. The result is that any data scientist in the organization can instantiate a containerized data set with a specific analysis tool (i.e., in a container) in minutes, and can confirm or reject experiment results in two orders of magnitude less time. The Docker platform uniquely allows organizations to apply tools to their application environment to accelerate the rate of change, reduce friction, and improve efficiency.
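A sketch of the sandbox idea: a prebuilt analysis image is started against a mounted data set with a single command, then discarded when the experiment ends. The data paths are hypothetical; the image name follows the publicly available Jupyter Docker Stacks convention:

```shell
# Start a containerized analysis environment with the experiment data mounted
docker run -d -p 8888:8888 --name experiment-42 \
  -v /data/experiment-42:/home/jovyan/data \
  jupyter/datascience-notebook

# When the experiment is confirmed or rejected, discard the environment cleanly
docker rm -f experiment-42
```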
DevOps professionals use Docker because it makes it extremely easy to manage the deployment of complex distributed applications. Docker is also supported by the major cloud platforms, including Amazon Web Services and Microsoft Azure, which means it is easy to deploy to any platform. Ultimately, Docker provides flexibility and portability, so applications can run on-premises on bare metal or in a public or private cloud.