Google Cloud Platform is one of the leading public cloud infrastructures, allowing you to run applications, perform data analysis and use a wide range of tools. Docker, on the other hand, is an engine that runs container workloads. This article shows the reader how to run Docker workloads on the Google Cloud Platform.
Organisations are moving their workloads to the cloud at a fast pace. At the same time, applications have been shifting towards modularity through microservices, and towards running workloads more efficiently, whether on-premises or in the cloud. It is not just virtual machines (VMs) that organisations are provisioning in the cloud, but container runtimes within these VMs, for higher efficiency. Docker has been at the forefront of all this, as the engine that runs these container workloads.
The Google Cloud Platform (GCP) provides multiple ways to run Docker workloads in the cloud. The options range from spinning up one’s own VMs to fully managed container orchestration platforms (based on Kubernetes) that are offered as options to the customer. This is coupled with powerful APIs and command-line interfaces to provision and manage the infrastructure and applications in the Google Cloud.
Google Cloud Platform (GCP) provides multiple options, depending on how much infrastructure management is desired (for simple deployment of applications, Google can take care of managing the infrastructure for you). Let us go through the options, first in brief and then in detail.
Google Compute Engine: This is the Infrastructure as a Service (IaaS) offering from the Google Cloud Platform. Google provides multiple types of VMs with pricing options, as well as the storage and networking infrastructure that might be needed for setting them up.
Google Container Engine: This is a fully managed offering that provides container orchestration at scale, based on Kubernetes. If you are dealing with Docker images, you can also choose to use the Google Container Registry option to host your images. For those fully committed to Google Cloud, using the Google Container Registry is a good option for faster availability of images.
Google App Engine Flexible: Building on the success of Google’s Platform-as-a-Service (PaaS) offering, App Engine Standard, the Flexible Environment supports multiple other languages and even lets you bring your own stack. The development workflow is based on Docker, and the eventual deployment is done, behind the scenes, on the Container Engine offering by the provisioning infrastructure.
All of the above services are available via the Google Cloud SDK, which is a set of command line utilities that can be used on a laptop to work with GCP resources. Those who have worked with Amazon Web Services (AWS) and Azure will find it familiar, as Google uses similar client tools to manage its cloud resources.
Google Cloud SDK
Google Cloud SDK is a suite of command-line tools for the platform. These include gcloud, bq, gsutil and more, which manage resources in the cloud directly from your machine. The installation of this SDK is straightforward and the link is provided in the References section.
Once the SDK is installed, you can use the gcloud utility to create and manage VMs in the cloud, on which Docker can then be run.
The SDK also ships with multiple client libraries that can be used in popular languages to interact with the cloud platform. In the next section, we will look at how to set up a VM running CoreOS in the Google Cloud, which can then run your Docker loads.
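Before working through the examples that follow, the SDK needs to be pointed at a project and a default zone. The commands below are a sketch of a typical first-time setup; the project ID and zone shown are the ones that appear in this article’s sample outputs, so substitute your own:

```shell
# One-time interactive setup: authenticates your account and
# selects a default project
$ gcloud init

# Or set the defaults individually
$ gcloud config set project mindstormclouddemo
$ gcloud config set compute/zone us-central1-f

# Verify the installation and list any VMs already in the project
$ gcloud version
$ gcloud compute instances list
```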
Google Compute Engine
Google Compute Engine is the Infrastructure-as-a-Service (IaaS) offering on GCP. It provides a variety of machine types (CPU, memory), ranging from the basic standard configuration to high-memory instances. It also supports multiple operating system images that can be used to run Docker loads. Specifically, for Docker, Google provides images that come bundled with Docker, rkt and Kubernetes, so that you are up and running with your Docker machine as quickly as possible.
One operating system that has been built from the ground up as a container operating system is CoreOS, and Compute Engine supports it when creating a VM. All you need to do is provide a unique VM instance name, the image family (coreos-stable) and the image project name (coreos-cloud) to the gcloud command-line utility, and within seconds you have a VM with Docker, rkt and Kubernetes running for you.
The command below shows how this is done with the gcloud utility:
$ gcloud compute instances create container-instance1 \
    --image-family coreos-stable \
    --image-project coreos-cloud
Once the above command is run, within a minute you will have a VM ready for you, as indicated by the output below:
Created [https://www.googleapis.com/compute/v1/projects/mindstormclouddemo/zones/us-central1-f/instances/container-instance1].
NAME                 ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
container-instance1  us-central1-f  n1-standard-1               10.240.0.2   188.8.131.52  RUNNING
Now, if we SSH into the container-instance1 IP address, we can see that Docker and rkt are installed and ready for use.
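A quick way to confirm this is to connect with the SDK’s built-in SSH wrapper and probe the runtimes; the busybox image used at the end is just an arbitrary small image for a smoke test:

```shell
# Open an SSH session to the new instance (gcloud manages the keys)
$ gcloud compute ssh container-instance1

# On the CoreOS VM, both container runtimes are pre-installed
$ docker version
$ rkt version

# Run a quick throwaway container to confirm Docker works
$ docker run --rm busybox echo "Hello from CoreOS"
```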
Google Container Engine
Google Container Engine is a container orchestration and cluster manager for running your container workloads. It is based on Kubernetes, the leading orchestration engine for containers, which was developed by Google itself before the company open sourced the project on GitHub.
Google Container Engine or GKE, as it is called, is the fastest way to set up a fully managed container cluster in the cloud. Since it supports Kubernetes right out of the box, you get scaling, failover and solutions to the other challenging distributed-software problems that Kubernetes addresses.
The next few commands show how easy it is to get a container cluster running in GKE and to deploy your own service to it.
Assuming that you have the gcloud utility installed, creating a container cluster is a breeze. All you need to do is give the following command:
$ gcloud container clusters create osfy-cluster
It takes a few minutes to create the cluster. By default, a three-node cluster is created with one Kubernetes Master, which is fully managed by GKE itself.
Once the cluster is successfully created, you can view its details, as shown below:
Creating cluster osfy-cluster...done.
Created [https://container.googleapis.com/v1/projects/mindstormclouddemo/zones/us-central1-f/clusters/osfy-cluster].
kubeconfig entry generated for osfy-cluster.
NAME          ZONE           MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
osfy-cluster  us-central1-f  1.4.6           184.108.40.206  n1-standard-1  1.4.6         3          RUNNING
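Since cluster creation also generated a kubeconfig entry, the standard Kubernetes client is already wired up; you can inspect the cluster and its nodes straight away:

```shell
# Full details of the managed cluster
$ gcloud container clusters describe osfy-cluster

# The three worker nodes, as seen by Kubernetes
$ kubectl get nodes
```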
Now, let’s create a deployment for nginx as shown below:
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
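The deployment starts with a single replica; scaling it out is a one-line operation (three replicas here is an arbitrary choice for illustration):

```shell
# Ask Kubernetes to maintain three nginx pods instead of one
$ kubectl scale deployment nginx --replicas=3

# Watch the additional pods come up
$ kubectl get pods
```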
We can expose this deployment as a service as follows:
$ kubectl expose deployment nginx --type="LoadBalancer"
service "nginx" exposed
Notice that we have exposed the service to the outside world via the LoadBalancer; so this will result in Google Cloud provisioning a load balancer behind the scenes for you.
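The same service can also be declared in a manifest and fed to kubectl, which is the more repeatable approach for real deployments. The sketch below is roughly equivalent to the kubectl expose command, assuming the run=nginx label that kubectl run applies to the deployment’s pods:

```shell
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer      # provisions a Google Cloud load balancer
  selector:
    run: nginx            # label set by 'kubectl run nginx'
  ports:
  - port: 80
EOF
```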
Now, if we query the services status after a while, we get the following output:
$ kubectl get services
NAME         CLUSTER-IP       EXTERNAL-IP      PORT(S)   AGE
kubernetes   10.111.240.1     <none>           443/TCP   8m
nginx        10.111.247.193   220.127.116.11   80/TCP    1m
Notice that our nginx service has been assigned an external IP, through which it can be accessed. As you can see, GKE’s fully managed service makes the whole process very easy.
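A quick way to verify the service from any machine with Internet access is to fetch the nginx welcome page over the external IP:

```shell
# Substitute the EXTERNAL-IP value reported by 'kubectl get services'
$ curl http://<EXTERNAL-IP>/
```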
Google Container Registry
If you run your workload on the Google Container Engine, one of the complementary products to consider is the Google Container Registry, which provides a powerful private registry for your Docker images. The Container Registry can also be used independently, and can be integrated with Docker CLI tools. The prefix gcr.io is used on images in the Google Container Registry.
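Pushing a local image to the Container Registry is a tag-and-push workflow, with gcloud supplying the registry credentials. The commands below are a sketch; the project ID is the one from this article’s examples and the v1 tag is arbitrary:

```shell
# Tag a local image into the project's registry namespace
$ docker tag nginx gcr.io/mindstormclouddemo/nginx:v1

# Push via the gcloud wrapper, which handles authentication to gcr.io
$ gcloud docker -- push gcr.io/mindstormclouddemo/nginx:v1
```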
Google App Engine Flexible
App Engine has been one of the most popular PaaS offerings available. The original standard runtime supports Java, Python, Go and PHP applications. However, developers demanded more flexibility: they wanted to use programming languages and frameworks other than the supported ones, while still enjoying the benefit of focusing on their applications rather than on managing the infrastructure, which App Engine did well. The App Engine Flexible environment uses Docker at its core in the development environment and, behind the scenes, deploys to Google Container Engine. So if you are looking for a flexible PaaS environment on Google Cloud, give App Engine Flexible a try.
Google Cloud pricing is considered by many to be more cost efficient than that of the other public clouds. Google has introduced pricing innovations like pre-emptible instances, which are cheaper because they can be shut down by Google at any time, and sustained-use discounts, whereby discounts are applied automatically as you use a VM for longer periods within a month. This can lead to significant savings. There is also the Google Cloud Platform Pricing Calculator, an interactive Web-based tool that you can use to estimate your expenditure.
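As an illustration of how sustained-use discounts accrue, the published scheme at the time of writing billed each successive quarter of a month of usage at 100, 80, 60 and 40 per cent of the base rate, respectively. The effective full-month price then works out as follows:

```shell
# Effective price of a VM that runs the whole month, as a fraction
# of the undiscounted monthly rate: 0.25 of the month at each tier.
awk 'BEGIN { printf "%.2f\n", 0.25*1.0 + 0.25*0.8 + 0.25*0.6 + 0.25*0.4 }'
# prints 0.70
```

That is, a VM left running all month effectively costs 70 per cent of the list price, a 30 per cent discount, with no action needed on your part.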