Software production companies today require continuous deployment and often operate distributed systems scattered across the world. In the cloud, the prime fundamental is to automate everything when it comes to on-demand services. Commercial companies like Red Hat and Rackspace provide automation services, but the open source OpenStack cloud platform is also a significant option.
OpenStack is an open source platform comprising a set of software tools for building and managing public and private cloud infrastructure. The platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage and networking resources throughout a data centre.
The primary advantage of OpenStack is its design for horizontal scalability, which makes it easy to add new compute, network and storage resources to grow the cloud over time. What matters most for cloud scalability is the time required to set up and run the cloud and then scale it up, along with minimising operational costs. All of this calls for an automated deployment and configuration infrastructure that incorporates configuration management systems.
An automated deployment system installs and configures the operating system on new servers without intervention, after only the absolute minimum of manual work: physical racking, MAC-to-IP assignment and power configuration.
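That minimal manual step can be pictured as a small rack plan: each server's MAC address is recorded once, and everything else follows automatically. The sketch below is purely illustrative; the names (`RACK_PLAN`, `allocate_ips`) and the addressing scheme are assumptions, not part of any particular tool.

```python
# Hypothetical sketch: the one manual record an automated deployment system
# needs is a mapping from each racked server's MAC address to a role; IP
# assignment can then be derived without further intervention.

RACK_PLAN = {
    "52:54:00:aa:01:01": {"role": "controller"},
    "52:54:00:aa:01:02": {"role": "compute"},
    "52:54:00:aa:01:03": {"role": "compute"},
}

def allocate_ips(plan, subnet_prefix="10.0.0.", first_host=10):
    """Assign sequential IPs to each racked MAC, preserving insertion order."""
    assignments = {}
    for offset, (mac, info) in enumerate(plan.items()):
        assignments[mac] = {
            "ip": f"{subnet_prefix}{first_host + offset}",
            "role": info["role"],
        }
    return assignments

inventory = allocate_ips(RACK_PLAN)
```

From here, a provisioning system would hand the inventory to its PXE/DHCP layer; the point is only that racking and MAC recording are the last manual acts.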
OpenStack deployment can be done using any one of the following three important models:
- On-premise enterprise distribution
- Private cloud deployment
- OpenStack as a Service
To manage OpenStack cloud infrastructure, systems administrators have to handle the configuration and orchestration of the cloud services. Many open source tools are available to install, manage and run the OpenStack cloud; the most important question is how to choose a deployment tool. This article surveys the open source tools available for OpenStack deployment.
Chef
Chef is a powerful automation framework that enables systems administrators to deploy servers and applications to any physical, virtual or cloud location, regardless of infrastructure size. Chef automates configuration management, making sure that systems are configured correctly and consistently. Updates are applied dynamically, so changes to a running environment or its hardware can be made without code changes rippling through production.
Chef includes cookbooks for working with the different OpenStack services, such as Keystone. It uses Ruby as its scripting language, and includes a searchable portal where one can find community-contributed recipes and cookbooks. Numerous configuration options are available, covering block storage, hypervisors, databases, message queuing, networking, object storage, source builds and more. Chef has an agent-based architecture, in which every node runs a client agent coordinated by a central server.
The most important deployment scenarios for Chef are: all-in-one compute, single controller + N compute, and Vagrant. Chef manages the entire infrastructure by converting it into code, and performs configuration management and other infrastructure management tasks using recipes.
Chef has the following components.
- Chef Cookbooks: These are part of the configuration and policy distribution, and comprise code that describes everything about the infrastructure.
- Chef Server: This acts as the central repository for Cookbooks and every node attached to the entire infrastructure.
- Chef Client: This runs on every node attached in the cloud and communicates with the Chef Server to replicate the latest configuration description.
- Chef DK: This provides developers with various tools to develop infrastructure automation code.
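The interplay of cookbooks and the Chef Client can be sketched in miniature: a recipe declares a desired state, and the client converges the node towards it, acting only where the two differ. Real Chef recipes are written in Ruby; this Python sketch (with the invented `converge` helper) only mirrors the concept.

```python
# Illustrative model of declarative convergence, not Chef's actual DSL:
# resources describe desired state; converging applies only the differences.

def converge(current, desired):
    """Return the actions needed to move `current` towards `desired`."""
    actions = []
    for resource, state in desired.items():
        if current.get(resource) != state:
            actions.append((resource, state))
            current[resource] = state  # apply the change in place
    return actions

node = {"package:nginx": "absent"}
recipe = {"package:nginx": "installed", "service:nginx": "running"}

run1 = converge(node, recipe)  # two actions: install the package, start it
run2 = converge(node, recipe)  # no actions: the node already matches
```

The second run doing nothing is the property that lets a Chef Client poll the server repeatedly without disturbing a correctly configured node.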
Figure 1 highlights all the components of the Chef Automation Framework tool. Figure 2 illustrates how the Chef automation tool works.
The features of Chef include the following:
- It allows the management of anywhere from five to 50,000 servers by converting the infrastructure into code, making it highly flexible, versionable and testable.
- It provides many resources for configuring SaaS services, and integrates cloud provisioning APIs as well as third party software like HashiCorp’s Terraform.
- It includes the Chef Development Kit (ChefDK), which has robust testing tools for validating infrastructure changes before implementing them in real-time.
- Its high degree of customisability provides support for unique and complex infrastructure environments.
- Built-in validations ensure that a system's configuration is corrected whenever it diverges from the user-defined desired state.
Official website: https://www.chef.io/chef/
Latest versions: Chef Workstation: 0.1.162; Chef Server: 12.17.33; Chef Client: 14.5.33; Chef Development Kit: 3.3.23
Ansible
Ansible is another open source infrastructure automation tool for OpenStack. It supports configuring systems, deploying software and orchestrating more advanced IT tasks such as consistent deployments and zero-downtime updates. Ansible goes beyond simple deployment: once OpenStack is deployed, the Ansible OpenStack modules can be used to manage all sorts of cloud operations. Ansible thus provides powerful tools for deploying and managing OpenStack: to provision, configure and deploy applications and services on top of the cloud.
OpenStack-Ansible is an official OpenStack project that aims to deploy production environments from source in a manner that is highly scalable for systems administrators to operate, upgrade and transform. It is based on an agentless architecture, so there is no requirement to configure VMs or workstations prior to installation; Ansible works with them directly over SSH from the command line.
Ansible provides playbooks and roles for performing various deployment and configuration tasks in the OpenStack cloud. It provides systems administrators with a control language that uses modules or routines to perform all sorts of tasks on nodes, which users can control through the command line.
Ansible provides sysadmins with a strong IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration and other IT needs.
- Modules: Ansible connects to the nodes in a network using Ansible modules, which are programs that act as resource models of the desired system state. The module library can reside on any machine, and no server, daemon or database is required.
- Plugins: These small pieces of code build up the entire functionality.
- Inventory: This consists of all machines managed by Ansible.
- Playbooks: These orchestrate multiple slices of the infrastructure topology.
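Ansible's agentless model can be sketched as follows: the control node walks the inventory and runs a module (here just a plain function) against each host in a group. Real Ansible pushes modules to the targets over SSH; the inventory, hosts and module below are invented for illustration.

```python
# Toy model of inventory + module execution. In real Ansible, a module is
# shipped to the remote host and executed there; here it is a local function.

INVENTORY = {
    "web": ["web1", "web2"],
    "db": ["db1"],
}

def ping_module(host):
    """Stand-in for Ansible's connectivity check against one host."""
    return {"host": host, "ping": "pong"}

def run_on_group(inventory, group, module):
    """Run a module against every host in an inventory group."""
    return [module(host) for host in inventory.get(group, [])]

results = run_on_group(INVENTORY, "web", ping_module)
```

A playbook, in this picture, is simply an ordered list of such (group, module) pairs, which is why no agent or database needs to live on the managed nodes.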
Figure 3 illustrates the architecture of Ansible.
The features of Ansible include the following:
- It manages systems in a very simple manner, replacing collections of scripts and ad hoc practices.
- It offers simple solutions for all configuration management issues. It is designed to be minimal in nature, secure, consistent and easy-to-learn for administrators and developers.
- Ansible has a state-driven resource model that describes the desired state of computer systems and services. No matter what state a system is in, Ansible transforms it to the desired state, which allows reliable and repeatable IT infrastructure configuration while avoiding the potential failures of script-based solutions.
- Ansible relies on the most secure remote configuration management system available, as its default transport layer is OpenSSH. It delivers all modules to remote systems and executes tasks, as needed, to enact the desired configuration. These modules run with user-supplied credentials, including support for sudo and even Kerberos, and clean up on their own when the job is complete.
- It consists of 1300+ modules and an active community for support and development.
Official website: https://www.ansible.com/
Commercial offering: Ansible Tower
Juju
Juju is an open source tool for service-oriented modelling and deployment. It is a powerful service orchestration tool from Ubuntu that helps systems administrators define, configure and deploy services to any cloud or bare metal system. A Juju charm is a structured bundle of YAML configuration files and scripts for a software component, which enables Juju to deploy and manage that component as a service according to best practices. Juju is written almost entirely (99 per cent) in Go.
Charms are composed of metadata, configuration data and hooks, plus some extra support files. Hooks, typically written as shell scripts, are invoked by Juju to manage the life cycle of the service. There are two types: unit hooks (install, config-changed, start, upgrade-charm and stop) and relation hooks (joined, changed, departed and broken). Defining relationships in a charm simplifies deployment by embedding the logic of what the charm can and cannot connect to.
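The hook mechanism described above can be sketched as a simple dispatcher: Juju fires a hook by name, and the charm maps names to handlers. Real hooks are usually shell scripts invoked as separate processes; the handlers and return values below are stand-ins.

```python
# Illustrative charm hook dispatcher. Juju calls hooks by name at life-cycle
# events; a charm only needs to provide handlers for the hooks it cares about.

UNIT_HOOKS = {}

def hook(name):
    """Register a function as the handler for the named hook."""
    def register(fn):
        UNIT_HOOKS[name] = fn
        return fn
    return register

@hook("install")
def install():
    return "packages installed"

@hook("config-changed")
def config_changed():
    return "configuration re-rendered"

def fire(name):
    """What Juju conceptually does: invoke the hook if the charm defines it."""
    handler = UNIT_HOOKS.get(name)
    return handler() if handler else f"no handler for {name}"
```

Unhandled hooks are simply skipped, which is why a minimal charm can implement only `install` and still work.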
Juju Charms offer an amazing experience for private and public cloud deployments, including bare metal with Metal as a Service (MAAS), OpenStack, AWS, Azure, Google Cloud, and more.
Figure 4 illustrates the architecture of Juju Charms.
Juju’s environment is bootstrapped locally or on top of an IaaS, creating the state service machine, which manages the other machines.
On the admin’s request, the state service fetches the charm to be deployed from the Charm Store or from a local repository (if specified by arguments). The state service creates a machine (a VM or container, depending on the IaaS) containing an agent, which communicates with the state service and passes information to it. The charm code runs in a unit, and units can be executed on one or more machines. Admins manage the environment via the Juju CLI or the Juju GUI.
The features of Juju Charms include:
- Easy deployment, modelling, scalability and management of services.
- An easy and interactive GUI to manage all services; overall, it is easy to install, use and configure.
- Service configuration as per user requirements.
- Avoids dependency problems across the different machines on which services run, wherever Juju is installed.
- Can create and contribute a Charm with the help of the community.
Official website: https://jujucharms.com/
Latest version: 2.3
Puppet
Puppet is an open source software configuration tool. It provides a declarative, write-once-deploy-many language for on-demand OpenStack configuration and version control. Puppet fully supports OpenStack, and the OpenStack team has even devoted a page to Puppet support: the modules described there are created by the OpenStack community for Puppet and reside on OpenStack’s own Git repository system. Puppet is deployed in a client/server setup, or in serverless mode; clients periodically poll the server (master) for the desired state and send status reports back to it. Puppet can provision, upgrade and manage nodes throughout their life cycle. It is written in Ruby, provides a custom DSL for writing manifests and makes use of ERB for templates. It has a Web UI and reporting tools.
Puppet Enterprise allows real-time control of managed nodes using the prebuilt modules present on the master servers. The reporting tools are well developed, providing details on how agents are behaving and what changes have been made.
The following are the key components of Puppet.
- Puppet resources: These comprise the key components for modelling any particular machine and have their own implementation model. They are used by Puppet to get a particular resource in the desired state.
- Providers: These implement resources on a particular platform. Providers never do anything on their own; all of their actions are triggered through an associated resource.
- Manifest: This is a collection of resources inside functions or classes that helps configure a target system; manifests are written in Puppet’s declarative, Ruby-like language.
- Modules: These act as building blocks of Puppet and consist of resources, files, templates, etc.
- Templates: These use Ruby expressions to define the customised content and variable input. They can be used by developers to create custom content.
- Static files: These are general files that perform specific tasks.
Figure 5 illustrates the architecture of Puppet, which comprises the following.
- Master: This handles all configuration related tasks. It applies the configuration to nodes through the Puppet agent.
- Agents: These are machines that make up the entire network and are managed by the Master. All agents run the Puppet Agent Daemon service.
- Config repository: All the configurations of nodes and servers are saved and acquired from the config repository.
- Facts: These are details regarding the nodes or master machine, and are used to analyse the current status of any node.
- Catalogue: All the manifest files or configurations written in Puppet are first converted to a compiled format called a catalogue and later this is applied on the target machine.
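The manifest-to-catalogue step can be pictured as flattening named classes into one ordered list of resources for the agent to apply. The `MANIFEST` structure and `compile_catalogue` helper below are invented for illustration; real Puppet catalogues are richer compiled documents.

```python
# Toy compile step: classes in a manifest are flattened into a catalogue,
# the ordered resource list that the agent applies to the target machine.

MANIFEST = {
    "base": [("package", "ntp"), ("service", "ntp")],
    "web": [("package", "nginx"), ("service", "nginx")],
}

def compile_catalogue(manifest, classes):
    """Flatten the requested classes into one ordered list of resources."""
    catalogue = []
    for cls in classes:
        catalogue.extend(manifest.get(cls, []))
    return catalogue

catalogue = compile_catalogue(MANIFEST, ["base", "web"])
```

Compiling once and shipping the flattened result is what lets the agent apply a node's configuration without re-evaluating the whole manifest itself.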
The features of Puppet include the following:
- It enables deployment and configuration of OpenStack components, like Nova, Keystone, Glance and Swift.
- Facilitates real-time control of managed nodes using prebuilt modules present on master servers.
- Puppet supports idempotency. As with Chef, the same configuration can safely be run multiple times on the same machine: Puppet checks the current state of the target machine and makes changes only when the configuration actually differs. Idempotency helps in managing a machine throughout its life cycle, from its creation, through configuration changes, to its end-of-life.
- It supports full scale automation with proper reporting and compliance features.
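The idempotency described above can be illustrated with a toy apply loop: the first run changes state, while the second makes no changes because the machine already matches the catalogue. Everything here (the `apply` helper, the resource tuples) is a hypothetical sketch, not Puppet's actual mechanism.

```python
# Idempotency, sketched: applying the same catalogue twice is safe because
# only resources that diverge from the desired state trigger any change.

def apply(state, catalogue):
    """Bring `state` in line with `catalogue`; return how many changes ran."""
    changes = 0
    for kind, name in catalogue:
        key = f"{kind}:{name}"
        if state.get(key) != "present":
            state[key] = "present"
            changes += 1
    return changes

machine = {}
catalogue = [("package", "ntp"), ("service", "ntp")]

first = apply(machine, catalogue)   # changes state on a fresh machine
second = apply(machine, catalogue)  # no-op: machine already matches
```

The change count of zero on the second run is exactly what a Puppet run report shows for a node that is already in its desired state.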
Official website: https://puppet.com
Latest version: 6.0.1
Fuel
Fuel is an open source, GUI-based tool for deploying and managing an OpenStack cloud along with its various components. It accelerates the complex, time-consuming and error-prone task of deploying and running various configuration flavours of OpenStack. Fuel automatically discovers all virtual nodes connected to the network, and provides a complete picture of the nodes ready for allocation, enabling systems administrators to assign roles and resources across the cloud.
Fuel provides a step-by-step wizard for systems administrators to select the following: the host OS, hypervisors, storage back-ends, network topology, controller configuration specifications and controllers deployed on nodes.
Figure 6 illustrates the architecture of Fuel.
The following components make up Fuel.
- Nailgun: This implements a REST API as well as deployment data management. It manages disk volumes, configuration data, network configuration data and any other environment-specific data necessary for successful deployment. It is written in the Python language.
- Astute: This represents Nailgun’s workers and encapsulates all details regarding the interactions with a variety of services like Cobbler, Puppet, shell scripts, etc, and provides a universal asynchronous interface to these services.
- Cobbler: This is used as a provisioning service.
- Puppet: This is the deployment service. MCollective agents can also be created to drive other configuration management frameworks, like Chef, SaltStack, etc.
- MCollective agents: These perform tasks like hard drive cleaning, network connectivity probing, etc.
- OSTF: The OpenStack Testing Framework implements post-deployment verification of OpenStack.
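OSTF-style post-deployment verification can be sketched as running a list of named checks and collecting a pass/fail report. The check names and bodies below are invented stand-ins for the real probes OSTF runs against a deployed cloud.

```python
# Toy post-deployment verification harness in the spirit of OSTF: run each
# named check, catch failures, and report an overall pass/fail map.

def check_api_reachable():
    return True  # stand-in for an HTTP probe of the OpenStack API endpoints

def check_quota_sane():
    return True  # stand-in for a tenant quota sanity test

def run_verification(checks):
    """Run each (name, fn) check; a raised exception counts as a failure."""
    report = {}
    for name, fn in checks:
        try:
            report[name] = bool(fn())
        except Exception:
            report[name] = False
    return report

report = run_verification([
    ("api_reachable", check_api_reachable),
    ("quota_sane", check_quota_sane),
])
```

Running such checks only after deployment is what separates "the installer finished" from "the cloud actually works".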
Listed below are the features of Fuel:
- It helps systems administrators to deploy, manage and scale multiple OpenStack clusters.
- It allows hardware configuration using the GUI.
- It performs post-deployment checks and tests for validating the OpenStack Cloud deployed.
- It helps the sysadmin monitor real-time logs of the OpenStack cloud via the GUI.
- It performs pre-deployment checks and network validations.
Official website: https://launchpad.net/fuel; https://wiki.openstack.org/wiki/Fuel
Latest version: 11.0
Canonical OpenStack Autopilot
Canonical OpenStack Autopilot is one of the easiest tools for OpenStack installation. It makes use of three tools from Canonical: MAAS, Juju and Landscape. Canonical’s Metal as a Service (MAAS) is provisioning software designed to commission physical servers almost instantaneously.
The key components of a typical MAAS environment are a region controller, one or more rack/cluster controllers, and the target nodes (physical servers). MAAS uses an RPC mechanism based on the Asynchronous Messaging Protocol (AMP) to connect processes in the region to processes in the clusters.
Juju is an open source, model-driven service orchestration tool that allows sysadmins to deploy, configure, manage and scale applications quickly and efficiently, on the cloud as well as on physical servers.
Landscape is a systems management tool for administering and auditing large pools of Ubuntu servers. Systems are monitored through a management agent installed on each machine, which automatically sends a selected set of essential health metrics, resource utilisation stats and other data to the Landscape server. Using Landscape, administrative tasks such as package and repository management and frequent upgrades can be performed on several machines simultaneously through an intuitive dashboard.
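The Landscape agent's reporting can be pictured as a periodic payload of health data. The field names below are invented for illustration, not Landscape's actual wire format, and the metric values are hard-coded stand-ins for real system reads.

```python
# Hypothetical shape of the data a Landscape-style management agent might
# collect and send to its server on each reporting interval.

import platform

def collect_metrics():
    """Gather a small health snapshot for this machine (values are stand-ins)."""
    return {
        "hostname": platform.node(),
        "load_1m": 0.42,         # stand-in for reading /proc/loadavg
        "disk_free_gb": 118.5,   # stand-in for an os.statvfs() calculation
        "packages_pending": 3,   # stand-in for querying the package manager
    }

payload = collect_metrics()
```

A real agent would serialise such a payload and ship it to the server on a schedule; the dashboard then aggregates these snapshots across the whole server pool.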
Canonical Autopilot makes use of MAAS for automated bare metal provisioning, Juju for deploying OpenStack Charms and Landscape for the administration, monitoring and management of the deployed OpenStack cluster. Besides these, advanced system monitoring tools like Nagios can be integrated with Autopilot using Landscape.
Figure 7 illustrates how Canonical Autopilot works.
Listed below are the features of Canonical Autopilot:
- It has support for automated OpenStack deployment and administration using Ubuntu servers.
- It sets up clouds, adds new hardware to existing clouds and assists with cloud management.
- It can run OpenStack administrative services inside dedicated containers via LXD, Canonical’s container-based hypervisor.
Official website: http://www.ubuntu.org.cn/cloud/openstack/autopilot
Compass
Compass is an open source tool for the automated deployment and management of OpenStack. It reduces complexity, saves time and cuts down errors in data centre server management. It assists in bootstrapping the server pool for a cloud platform from bare metal nodes, helping systems administrators discover hardware and deploy the OS and hypervisor, and it also provides comprehensive configuration management.
Figure 8 illustrates Compass’ architecture.
The components that make up Compass OpenStack software are listed below.
- RESTful API server.
- Web UI: This is written in AngularJS and used by developers.
- Meta-Data module: This allows a developer to extend the core functionality and provide a custom data model for OpenStack configurations — for example, with or without HA, single controller vs multi-node, etc.
- Adapter interface: This is used for automatic resource discovery.
- Cobbler interface: This is used for OS provisioning.
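The Meta-Data module's flavour idea can be sketched as a lookup from a flavour name to a concrete topology, for example single controller versus HA multi-node. The flavour names and fields below are illustrative assumptions, not Compass' actual schema.

```python
# Toy metadata-driven configuration: a named "flavour" expands into the
# concrete topology parameters a deployment would use.

FLAVOURS = {
    "single-controller": {"controllers": 1, "ha": False},
    "ha-multinode": {"controllers": 3, "ha": True},
}

def select_flavour(name):
    """Resolve a flavour name to its topology, rejecting unknown names."""
    if name not in FLAVOURS:
        raise ValueError(f"unknown flavour: {name}")
    return FLAVOURS[name]

topology = select_flavour("ha-multinode")
```

Keeping such choices in metadata rather than in code is what lets operators add new configuration flavours without touching the deployment engine itself.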
Listed below are the features of Compass:
- Assists in the infrastructure bootstrapping process and offers operators programmability over it.
- Allows implementation of different configuration flavours through metadata.
- Implements extensibility by integrating a number of tools, like Chef and Ansible, for OpenStack cluster configuration. By default, Ansible is used for OpenStack installation; the Compass core integrates with other tools for resource discovery, OS provisioning and package deployment.
Official website: https://wiki.openstack.org/wiki/Compass