Puppet-Object/Software?

Hello Readers!!

In this blog I’m going to talk about one of the configuration management tools: Puppet.

Puppet: isn’t that a funny name for a piece of software? As we all know, a puppet is a person, group, or country under the control of another, and the software works the same way: agents are controlled by a master. That’s how it got its name!!!

In today’s world, it is one of the most widely used configuration management tools. I hope everyone is familiar with configuration management; if not, don’t worry!!

I have given a clear idea of what configuration management is and how it is structured in my previous blog; please go through it here for better understanding.

For people who are not aware of Configuration Management, here is the small introduction of it.

What is Configuration Management?

Configuration management is the unique identification, controlled storage, change control, and status reporting of selected intermediate work products, product components, and products during the life of a system.

It is the detailed recording and updating of information that describes an enterprise’s hardware and software. Its advantage is that the entire collection of systems can be reviewed to make sure any changes made to one system do not adversely affect any of the other systems.

Many customers use Puppet as a configuration management tool in conservative, compliance-oriented industries such as banking and government. The nice thing about Puppet is that it grows and scales as the team and infrastructure grow and scale.

What is Puppet?

Puppet is an open-source IT automation tool developed by Puppet Labs. It is written in Ruby and is composed of a declarative language for expressing system configuration, a client and a server for distributing it, and a library for realizing the configuration. Puppet helps in the automation, deployment, and scaling of applications in the cloud or on site.

The basic design objective of Puppet is to provide a powerful, expressive language backed by a capable library, so that you can write your own server automation applications in just a few lines of code. Its adaptability and open-source license let us add whatever functionality we require.
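
To give a feel for the declarative, idempotent model that Puppet’s language expresses, here is a rough Python analogy (not Puppet’s actual implementation): you declare the desired state of a resource, and the tool changes the system only when the current state differs. The file path, content, and mode below are illustrative values.

```python
import os

# Desired state of a single "file" resource, declared as data rather than as steps.
# Path, content, and mode are illustrative, not taken from any real manifest.
desired = {
    "path": "/tmp/motd",
    "content": "This machine is managed by our configuration tool.\n",
    "mode": 0o644,
}

def apply_file_resource(resource):
    """Converge the file toward its declared state; do nothing if it already matches."""
    path = resource["path"]
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current != resource["content"]:
        with open(path, "w") as f:
            f.write(resource["content"])
    os.chmod(path, resource["mode"])

# Safe to run repeatedly: a second run finds the desired state already in place.
apply_file_resource(desired)
```

In real Puppet, this desired state would of course be written in Puppet’s own declarative language and compiled by the master rather than hand-coded like this.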

Why Puppet?

Puppet was developed to help the sysadmin community build and share mature tools, so that the same problems do not have to be solved again and again. It is extremely powerful in deploying, configuring, managing, and maintaining server machines.

Using Puppet accelerates your work as a sysadmin, as it supervises and handles the details for you. It is easy to download code from other sysadmins, which helps get work done faster; most Puppet implementations make use of one or more modules already developed by others, and the community has developed and shared hundreds of such modules. Puppet’s functionality is organized as a stack of individual layers, each responsible for a fixed aspect of the system, with tight control over how information flows between these layers.

[Figure: Puppet’s layered architecture]

How Does Puppet Work?

Puppet is generally used in a client/server architecture. The Puppet agent is a daemon that runs on every client server whose configuration is to be managed by Puppet. Each machine to be controlled has the Puppet agent installed; such a machine is known as a node, and every node has a server designated as its Puppet master.

So, what do the Puppet agent and the Puppet master do?

Puppet master: This machine holds the entire configuration for the different hosts; the master process runs as a daemon on this server.
Puppet agent: This is the daemon that runs on every server that is to be managed with the help of Puppet. At a specific time interval, the Puppet agent requests its configuration from the Puppet master server.
The connection between the Puppet agent and the master is an encrypted channel secured with SSL.

[Figure: Puppet master serving Node 1, Node 2, and Node 3]

From the above diagram it is clear that the Puppet master server holds the configuration for Host 1 (Node 1), Host 2 (Node 2), and Host 3 (Node 3). The following steps take place whenever the Puppet agent on a node connects to the Puppet master server to fetch its configuration.

Step 1: Every time a client node makes a connection to the master, the master server examines the configuration to be applied to that node.
Step 2: The Puppet master server assembles all the resources and configurations to be applied on the node and compiles them into a catalog. This catalog is then handed to the Puppet agent of the node.
Step 3: The Puppet agent applies the configuration on the node according to the catalog and then replies, submitting a report of the applied configuration to the Puppet master server. A simplified sketch of this cycle is given below.
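
The following Python sketch is purely illustrative of the poll, compile, apply, and report cycle described in the three steps above; the function names, return values, and 30-minute interval are assumptions for illustration, not Puppet’s real API.

```python
import time

RUN_INTERVAL = 30 * 60  # assumed polling interval, in seconds

def request_catalog(master_url, node_name):
    """Ask the master for this node's compiled catalog (Steps 1 and 2).
    In real Puppet this exchange happens over an SSL-secured channel."""
    # Hard-coded desired state, standing in for the master's compiled catalog.
    return [{"type": "file", "path": "/tmp/motd", "content": "hello\n"}]

def apply_catalog(catalog):
    """Bring each resource to its desired state and record what changed (Step 3)."""
    changes = []
    for resource in catalog:
        # A real agent would compare current state with desired state here.
        changes.append(f"applied {resource['type']} {resource['path']}")
    return changes

def send_report(master_url, node_name, changes):
    """Report back to the master what was applied (Step 3)."""
    print(f"{node_name} -> {master_url}: {changes}")

def agent_loop(master_url, node_name):
    while True:
        catalog = request_catalog(master_url, node_name)
        changes = apply_catalog(catalog)
        send_report(master_url, node_name, changes)
        time.sleep(RUN_INTERVAL)

# agent_loop("https://puppetmaster.example.com:8140", "node1")  # would run forever
```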

Conclusion

Puppet is both a simple and a complex system. It is composed of several moving parts wired together quite loosely, and it provides a structure that can be applied to a wide range of configuration problems.
Puppet refers to two different things:

            • The language in which code is written.
            • The platform that manages infrastructure.
Puppet essentially models infrastructure as code. Puppet code is easily stored and reused, and a DevOps team can use the same manifests to manage systems from the laptop development environment all the way to production, which can yield big improvements in deployment quality.

Thank You for reading!!!

References

  1. http://www.slashroot.in/puppet-tutorial-how-does-puppet-work
  2. https://puppet.com/product/how-it-works

Configuration Management

Hello Readers!!

Last month I wrote a blog on “Docker in DevOps” which gave a quick introduction to Docker and DevOps. In this blog I’m going to talk about a DevOps practice: configuration management.

As we know, DevOps is taking the IT world by storm, and it is important to recognize that configuration state is critical during the entire execution phase. Configuration management is a critical component throughout the plan, build, run, and govern processes.

Configuration management permits the orderly development of a system, subsystem, or configuration item. A good configuration management program ensures that designs are traceable to requirements, that change is controlled and documented, that interfaces are defined and understood, and that there is consistency between the product and its supporting documentation.

Today, let us learn about Configuration Management and its importance.

What is Configuration Management?

Configuration management is the unique identification, controlled storage, change control, and status reporting of selected intermediate work products, product components, and products during the life of a system.

It is the detailed recording and updating of information that describes an enterprise’s hardware and software. Its advantage is that the entire collection of systems can be reviewed to make sure any changes made to one system do not adversely affect any of the other systems.

Configuration management provides documentation that describes what is supposed to be produced, what is being produced, what has been produced, and what modifications have been made to what was produced. Configuration management is performed on baselines, and the approval level for configuration modification can change with each baseline.

Configuration management is supported and performed by integrated teams in an Integrated Product and Process Development (IPPD) environment. It is closely associated with technical data management and interface management. Data and interface management are essential for proper configuration management, and the configuration management effort has to include them.

Why Do We Need CM?

Almost everyone in an organization plays a role in effective and efficient configuration management. Configuration management processes, like the conductor of an orchestra, are there to ensure everyone is working to the same sheet of music.

Without some form of configuration management, there would be chaos. When errors occur that need correcting, it is imperative that organizations are able to quickly locate and utilize the latest and most accurate information.

Configuration Management Planning

When planning a configuration management effort you should consider the basics: what has to be done, how should it be done, who should do it, when should it be done, and what resources are required. Planning should include the organizational and functional structure that will define the methods and procedures to manage functional and physical characteristics, interfaces, and documents of the system component. It should also include statements of responsibility and authority, methods of control, methods of audit or verification, milestones, and schedules.

Configuration Management Structure

Configuration management comprises four interrelated efforts:

  • Identification
  • Control
  • Status Accounting and
  • Audits

It is also directly associated with data management and interface management. Any configuration management planning effort must consider all six elements.

Identification

Configuration Identification consists of the documentation of formally approved baselines and specifications, including:

  • Selection of the CIs.
  • Determination of the types of configuration documentation required for each CI.
  • Documenting the functional and physical characteristics of each CI.
  • Establishing interface management procedures, organization, and documentation.

Control

Configuration Control is the systematic proposal, justification, prioritization, evaluation, coordination, approval or disapproval, and implementation of all approved changes in the configuration of a system/CI after formal establishment of its baseline.

Configuration Control provides management visibility, ensures all factors associated with a proposed change are evaluated, prevents unnecessary or marginal changes, and establishes change priorities.

Status Accounting

Configuration Status Accounting is the recording and reporting of the information that is needed to manage the configuration effectively, including:

  • A listing of the approved configuration documentation,
  • The status of proposed changes, waivers and deviations to the configuration identification,
  • The implementation status of approved changes.

Purpose of Configuration Status Accounting

Configuration Status Accounting provides information required for configuration management by:

  • Collecting and recording data concerning:
      – Baseline configurations,
      – Proposed changes, and
      – Approved changes.
  • Disseminating information concerning:
      – Approved configurations,
      – Status and impact of proposed changes,
      – Requirements, schedules, impact, and status of approved changes, and
      – Current configurations of delivered items.
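
As a minimal sketch of how such status accounting records might be modeled in code (the fields mirror the lists above; the class and field names are illustrative assumptions, not part of any CM tool):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRequest:
    """Status of one proposed change, waiver, or deviation."""
    identifier: str
    description: str
    status: str = "proposed"      # e.g. proposed, approved, implemented, rejected

@dataclass
class ConfigurationItemStatus:
    """Status accounting record for a single configuration item (CI)."""
    ci_name: str
    approved_documents: List[str] = field(default_factory=list)
    proposed_changes: List[ChangeRequest] = field(default_factory=list)
    implemented_changes: List[str] = field(default_factory=list)

# Example: record an approved baseline document and one pending change.
record = ConfigurationItemStatus(ci_name="flight-software")
record.approved_documents.append("SRS v2.1")
record.proposed_changes.append(ChangeRequest("CR-042", "Update telemetry format"))
```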

Audits

Configuration Audits are used to verify a system and its components’ conformance to their configuration documentation. Audits are key milestones in the development of the system and do not stand alone.

Interface Management

Interface Management consists of identifying the interfaces, establishing working groups to manage the interfaces, and the group’s development of interface control documentation. Interface Management identifies, develops, and maintains the external and internal interfaces necessary for system operation. It supports the configuration management effort by ensuring that configuration decisions are made with full understanding of their impact outside of the area of the change.

Data Management

Data management documents and maintains the database reflecting system life cycle decisions, methods, feedback, metrics, and configuration control. It directly supports the configuration status accounting process. Data Management governs and controls the selection, generation, preparation, acquisition, and use of data imposed on contractors.

Conclusion

Configuration management is essential to control the system design throughout the life cycle. The use of integrated teams in an IPPD environment is necessary for disciplined configuration management of complex systems.

Technical data management is essential to trace decisions and changes and to document designs, processes and procedures. Interface management is essential to ensure that system elements are compatible in terms of form, fit, and function. The key elements of Configuration Management are Identification, Control, Status Accounting, and Audits.

Thank you for reading my blog!!!

References

  1. https://en.wikipedia.org/wiki/Configuration_management
  2. http://ptgmedia.pearsoncmg.com/images/0321117662/samplechapter/hassch01.pdf

Docker in DevOps

Hello Readers!!!

In this blog we are going to learn about Docker in DevOps. Docker is quite the buzz in the DevOps community these days. It is an open platform for distributed applications for developers and sysadmins, and it facilitates the deployment of applications inside software containers.

As organizations embrace the DevOps philosophy and break down the traditional barriers that exist between Development and Ops teams, Docker provides some of the key tools to go DevOps and improve the application development process.

What is DevOps?
DevOps is a collaborative way of developing and deploying software. It is a software development method that stresses communication, collaboration, and integration between software developers and information technology (IT) operations professionals.

Principles of DevOps
1. Systems thinking: This way emphasizes the performance of the entire system, as opposed to the performance of a specific silo of work or department — this can be as large as a division (e.g., Development or IT Operations) or as small as an individual contributor (e.g., a developer, system administrator).

2. Amplify feedback loops: This is about creating right-to-left feedback loops. The goal of almost any process improvement initiative is to shorten and amplify feedback loops so that necessary corrections can be made continually.

3. Culture of continual experimentation and learning: This is about creating a culture that fosters two things: continual experimentation, which requires taking risks and learning from success and failure, and an understanding that repetition and practice are the prerequisites to mastery.

What is Docker?
Docker is “an open source project to pack, ship and run any application as a lightweight container.” Docker containers wrap up a piece of software in a complete file system that contains everything it needs to run: code, runtime, system tools, and system libraries. It allows packaging an application with all of its dependencies into a standardized unit for software development.

Benefits of Docker
Docker is good at building and sharing images in various ways, such as through the cloud or secure copy. It has an image distribution model for server templates built with configuration managers (like Chef, Puppet, SaltStack, etc.). It has a central repository of disk images (public and private) that allows us to easily run on different operating systems (Ubuntu, CentOS, Fedora, even Gentoo). Docker containers contain only what is necessary to build, ship, and run applications. Unlike virtualization technology (VMs), no guest OS or hypervisor is necessary for containers.

How can Docker help in DevOps?
Using Docker, developers can package up all the runtimes and libraries necessary to develop, test, and execute an application in an efficient and standardized way. It lets you put the environment and configuration into code and deploy it, and the same Docker configuration can be used in a variety of environments. A small sketch of this workflow is shown below.
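
As a rough illustration of that workflow, the sketch below uses the Docker SDK for Python to build an image from a local Dockerfile and run it as a container. The image tag, build path, and port mapping are assumed values, and the sketch presumes a Dockerfile exists in the current directory.

```python
import docker

# Connect to the local Docker daemon using environment defaults.
client = docker.from_env()

# Build an image from the Dockerfile in the current directory.
# The tag "myapp:dev" is an illustrative choice, not a required convention.
image, build_logs = client.images.build(path=".", tag="myapp:dev")

# Run the same image as a detached container, mapping container port 8000
# to host port 8000 (an assumed application port).
container = client.containers.run(
    "myapp:dev",
    detach=True,
    ports={"8000/tcp": 8000},
)
print(container.short_id, container.status)
```

Because the environment is captured in the image, the identical image can be run unchanged on a laptop, a CI server, or production, which is what makes the configuration reusable across environments.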

Docker is both a great software project (the Docker engine) and a vibrant community (Docker Hub). Docker combines a portable, lightweight application runtime and packaging tool with a cloud service for sharing images. Docker helps Development and IT Operations teams achieve the agility and control to build, ship, and run any app, anywhere.

Agility: Docker gives developers the freedom to define environments and to create and deploy apps faster and more easily, and it gives IT ops the flexibility to quickly respond to change.
Control: Docker enables developers to own all the code from infrastructure to application, and it gives IT ops the manageability to standardize, secure, and scale the operating environment.

Let’s see how Docker supports three principles of DevOps.

Docker and the first way
The “First Way” and Docker together can provide global optimization around software velocity, variation, and visualization. By Dockerizing the development pipeline, organizations can reduce the cost and risk of software delivery while increasing the rate of change.
Velocity

Developer flow: Developers who use Docker typically create a Docker environment on their laptop to locally develop and test applications in containers. The end result is that developers spend less time context-switching for testing and retesting, which significantly increases velocity.
Integration flow: Docker can streamline continuous integration (CI) with the use of Dockerized build slaves. A CI system can be designed so that multiple virtual instances each run as individual Docker hosts acting as multiple build slaves; some environments even run a Docker host inside a Docker host (Docker-in-Docker) for their build environments. Docker also increases velocity for CI pipelines through union file systems and copy-on-write (COW): Docker images are created using a layered file system approach, and typically only the current (top) layer is writable. Advanced use of baselining and rebasing between these layers can further shorten the lead time for getting software through the pipeline.
Deployment flow: To achieve increased velocity for continuous delivery (CD) of software, a number of techniques can be improved by the use of Docker. A popular CD process called “Blue-Green deploys” is often used to seamlessly migrate applications into production. One of the challenges of production deployments is ensuring seamless and timely changeover (moving from one version to another). A Blue-Green deploy is a technique where one node of a cluster is updated at a time (the green node) while the other nodes are left untouched (the blue nodes). This requires a rolling process where one node is updated and tested at a time. The two key takeaways are: 1) the total time to update all the nodes needs to be short, and 2) if the cluster needs to be rolled back, this also has to happen in a timely fashion. Docker containers make both the roll-forward and roll-back processes more efficient. A minimal sketch of such a rolling update appears after this list.
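
To make the rolling changeover concrete, here is a minimal Python sketch of the node-by-node update loop described above. The node names and the `deploy_version` and `health_check` helpers are hypothetical stand-ins, not a real deployment API; a real rollout would pull and restart Docker containers on each node.

```python
# Hypothetical node-by-node (Blue-Green style) rollout, as described above.
# deploy_version and health_check are illustrative stubs, not a real API.

NODES = ["node-1", "node-2", "node-3"]  # assumed cluster members

def deploy_version(node, version):
    """Pull the container image for `version` and restart the app on `node`."""
    print(f"deploying {version} to {node}")

def health_check(node):
    """Return True if the node serves traffic correctly after the update."""
    return True  # placeholder check

def rolling_update(nodes, new_version, old_version):
    updated = []
    for node in nodes:                      # update one (green) node at a time
        deploy_version(node, new_version)
        if not health_check(node):          # roll back quickly if anything fails
            for done in updated + [node]:
                deploy_version(done, old_version)
            return False
        updated.append(node)                # remaining (blue) nodes stay untouched
    return True

rolling_update(NODES, "myapp:2.0", "myapp:1.9")
```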

Variation
A key benefit of using Docker images in a software delivery pipeline is that both the infrastructure and the application can be included in the container image. One of the core tenets of Java was the promise of “write once, run anywhere”; however, since the Java artifact (typically a JAR, WAR, or EAR) included only the application, there was always a wide range of variation depending on the Java runtime and the specific operating environment of the deployment target. With Docker, a developer can bundle the actual infrastructure (i.e., the base OS, middleware, runtime, and the application) in the same image. This converged isolation lowers the potential variation at every stage of the delivery pipeline (development, integration, and production deployment). If a developer tests a set of Docker images as a service on a laptop, those same services behave exactly the same during integration testing and the production deployment.

Visualization
A new model of disruption in our industry is containerized microservices. In a microservices architecture, “services” are defined as bounded contexts: services that model real-world domains. There might be a service domain called finance or warehouse in a microservices architecture. When these bounded services are packaged as Docker containers and used as part of the delivery pipeline, they are immediately visible as real-world domains.

Docker and the second way
Velocity
There are interruptions in the flow due to defects, and there is potential changeover time related to each defect. To be effective at the “Second Way”, DevOps teams need to have velocity in both directions (i.e., the complete feedback loop). How fast can the changeover occur? How adaptive is the process, not only for quick defect detection but also for how fast the system can be re-implemented and re-based to the original baseline conditions?
Docker’s streamlining of packaging, provisioning, and immutable delivery of artifacts allows an organization to take advantage of shortened changeover times due to defects and makes it easier to stop the line if a defect is detected.

Variation
This is about the complexity of the infrastructure in which the defect is detected. Docker-based delivery and the use of immutable artifacts through the pipeline reduce variation and therefore reduce the risk of defect variants appearing later in the delivery pipeline.

Visualization
One of the advantages of an immutable delivery process is that most of the artifacts are delivered throughout the pipeline as binaries. Other techniques allow additional metadata about the software artifact to be embedded in the Docker image as well.

Docker and the third way
Prior to implementing a Docker-based “Container as a Service” solution, it was extremely hard for one organization’s data scientists to match an analysis tool to the data. Some combinations of analysis tool and data perform well with a tool like Hadoop, other data sets are better suited to a tool like Spark, and still others work just fine with something like R. The point is that there are a lot of tools out there for data analysis, with new ones added every other day.
Using Docker, the organization created a sandbox environment of prebuilt container images (i.e., Docker images) that encapsulate all of the ingredients required to run each data analysis tool. The result is that any data scientist in the organization can instantiate a containerized set of data with a specific analysis tool (i.e., in a container) in minutes, and can confirm or reject experiment results in two orders of magnitude less time. The Docker platform uniquely allows organizations to apply such tools in their application environment to accelerate the rate of change, reduce friction, and improve efficiency. A small illustrative sketch follows.
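
As a purely illustrative sketch of spinning up such a containerized analysis sandbox on demand (the image name, port mapping, and SDK usage are assumptions, not the organization’s actual setup):

```python
import docker

client = docker.from_env()

# Start a prebuilt analysis environment as a throwaway container.
# "jupyter/scipy-notebook" and the port mapping are illustrative choices.
sandbox = client.containers.run(
    "jupyter/scipy-notebook",
    detach=True,
    ports={"8888/tcp": 8888},
)
print("analysis sandbox running:", sandbox.short_id)

# ... run the experiment against the data ...

# Tear the sandbox down once results are confirmed or rejected.
sandbox.stop()
sandbox.remove()
```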

Conclusion
DevOps professionals use Docker as it makes it extremely easy to manage the deployment of complex distributed applications. Docker is also supported by the major cloud platforms including Amazon Web Services and Microsoft Azure which means it’s easy to deploy to any platform. Ultimately, Docker provides flexibility and portability so applications can run on-premise on bare metal or in a public or private cloud.

References

  1. http://thenewstack.io/how-docker-fits-into-the-devops-ecosystem/
  2. https://www.docker.com/sites/default/files/WP_Docker.pdf
  3. https://www.docker.com/
  4. https://www.toptal.com/devops/getting-started-with-docker-simplifying-devops