Deploy Kubernetes Web Application in GKE Cluster using Docker

Cloud Guru
9 min read · Sep 27, 2023

In this guide, you will learn some basic concepts of Docker and Kubernetes, and how to deploy your first application on Kubernetes using GCP.

What is Docker?

Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. It’s important to note that Docker containers don’t run in their own virtual machines, but share a Linux kernel. Compared to virtual machines, containers use less memory and less CPU.

Containerization:

Containerization is a software deployment process that bundles an application’s code with all the files and libraries it needs to run on any infrastructure. Traditionally, to run any application on your computer, you had to install the version that matched your machine’s operating system.

Container Images:

A container image is a ready-to-run software package, containing everything needed to run an application: the code and any runtime it requires, application and system libraries, and default values for any basic settings.

Let's get started with Docker:

sudo apt-get install docker.io

This installs Docker on Debian-based Linux. For macOS, download Docker Desktop from here.

docker run redis

This runs a container from the image named "redis". Docker first looks for the image on our local system, in case we have built a custom image with that name. When it can't find the image locally, Docker downloads it from Docker Hub for us.

docker ps -a

Lists all the containers on our system, including stopped ones

docker image ls

Produces a listing of images on our system

docker stop <container name>

Stops a container

docker rm <container name>

Removes a container
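To tie these commands together, here is a minimal sketch of a typical session (the container name cache is just an example):

# Start a Redis container in the background and give it a name
docker run --name cache -d redis

# List all containers (running and stopped) and all local images
docker ps -a
docker image ls

# Stop and remove the container when you are done
docker stop cache
docker rm cache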

Now we will create a React-App from a custom image that we will make with a Dockerfile. Images are created with a Dockerfile, which lists the components and commands that make up an image.

# Use an official Node.js runtime as a parent image
FROM node:14-alpine

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install dependencies
RUN npm install

# Build the app
RUN npm run build

# Expose port 3000
EXPOSE 3000

# Define environment variable
ENV NODE_ENV=production

# Start the app when the container launches
CMD ["npm", "start"]

docker build -t my-react-app .

This will build the image and tag it as my-react-app

docker run --name foo -d -p 8080:3000 my-react-app

This runs a container from the image you just created.

Let’s break that command down.

  • --name foo gives the container a name.
  • -d detaches from the container, running it in the background.
  • -p 8080:3000 maps port 8080 on the host to port 3000 inside the container (the port the app listens on).
  • Finally, the image name always comes last.

Now point your browser at http://127.0.0.1:8080 and you should see the React app's page.
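If the page doesn't load, a couple of standard Docker commands help with troubleshooting. A small sketch, using the container name foo chosen above:

# Show the container's stdout/stderr (the npm start output)
docker logs foo

# Open an interactive shell inside the running container (Alpine-based images ship with sh)
docker exec -it foo sh

# Show which host ports are mapped to the container
docker port foo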

Docker Swarm:

Docker Swarm is Docker's native container orchestration tool. Multiple machines running Docker can be configured to join together in a cluster. A swarm manager controls the activities of the cluster, and machines that have joined the cluster are referred to as nodes.
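As a rough illustration (not needed for the rest of this guide), turning a Docker host into a one-node swarm and running a replicated service looks roughly like this; the service name web is just an example:

# Initialize a swarm on the current machine, which becomes the manager
docker swarm init

# Run an nginx service with 3 replicas, published on port 80
docker service create --name web --replicas 3 -p 80:80 nginx

# Inspect the nodes and the service
docker node ls
docker service ls
docker service ps web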

Why Kubernetes:

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.

Kubernetes vs Docker Swarm: Comparison

In short, Kubernetes offers a richer feature set (auto-scaling, self-healing, a large ecosystem) at the cost of more complexity, while Docker Swarm is simpler to set up but less feature-rich.

Kubernetes on GCP

Configuring Google Kubernetes Engine for nginx application

Don't forget to click the ACTIVATE button in the top right corner to start your free trial.

Enable APIs

First, you need to enable four APIs for your Project: the Kubernetes Engine API, the Cloud Resource Manager API, the Google Container Registry API, and the Stackdriver API. These APIs provide the programmatic interface for your pipeline to create and update resources in your cluster.

Click on the console menu in the top left, highlight APIs & Services, and click Dashboard:

Once you’ve registered, you will be given a Project. This is where you create all of your GCP resources.

Now, click on the console menu in the top left, highlight Kubernetes Engine, and click Clusters:

Click the Create cluster button.

At this point, you will see a long list of cluster templates and options. Leave the Standard cluster option selected and enter a new name for the cluster in the Name field.

  • Choose a Zone that's located close to you.
  • Go to default-pool, select Nodes on the left side, and change the boot disk size to 30 GB.
  • Now, click Create.
  • Go to the search bar, enter "workloads", and select Kubernetes Engine from the dropdown.
  • Click Deploy.
  • By default, the nginx image is selected, but you can choose an existing container image if you have a custom one (we will do that later in this guide).
  • Click Continue and Deploy.
  • Click the Expose button to make the workload reachable from a public IP.
  • Expose it using a load balancer.
  • You will get an External Endpoint (public IP) to access your application (a command-line equivalent is sketched below).
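The same deploy-and-expose flow can also be done from the command line once kubectl is connected to the cluster (credentials are fetched in the next step). A minimal sketch, assuming the deployment name nginx-1:

# Create a Deployment running the public nginx image
kubectl create deployment nginx-1 --image=nginx:latest

# Expose it through a Service of type LoadBalancer on port 80
kubectl expose deployment nginx-1 --type=LoadBalancer --port=80

# Wait until the EXTERNAL-IP column is populated
kubectl get service nginx-1 --watch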

To access the pods with kubectl from the command line, we need to generate a kubeconfig entry in our environment.

Open Cloud Shell (search for it in the search bar) and run the following commands, substituting your cluster name and zone:

  • gcloud container clusters get-credentials ClusterName --zone=ZoneName
  • kubectl get pods (To get pods in a cluster)
  • kubectl delete pod <Pod name> (To delete a pod)
  • kubectl version (To check for the version of kubectl)

(A Pod is the smallest deployable unit in Kubernetes: one or more containers that share storage and network. Pods are scheduled onto the nodes of the cluster.)
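Once the kubeconfig entry is in place, a quick way to confirm that kubectl is talking to the right cluster is to query the control plane and the nodes:

# Show the cluster endpoint kubectl is pointing at
kubectl cluster-info

# List the GKE worker nodes and their status
kubectl get nodes -o wide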

Congratulations! You have successfully deployed your first Kubernetes cluster.

Deploying Web app on GCP by using Google Kubernetes Engine

In this tutorial, you build an image, store it in Container Registry, and deploy the application to a Kubernetes cluster from the registry.

Go to Cloud Shell:

  1. Set the PROJECT_ID environment variable to your Google Cloud project ID. You'll use this environment variable when you build the container image and push it to your repository.

export PROJECT_ID=PROJECT_ID

  2. Confirm that the PROJECT_ID environment variable has the correct value:

echo $PROJECT_ID

  3. Set your project ID for the Google Cloud CLI:

gcloud config set project $PROJECT_ID

  4. Download the hello-app source code and Dockerfile by running the following commands and navigate into the hello-app directory:

git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples

cd kubernetes-engine-samples/hello-app

  5. Build and tag the Docker image for hello-app:

docker build -t hello-app:v1 .

  6. To list the images:

docker images

  7. Test your container image using your local Docker engine:

docker run --rm -p 8080:8080 hello-app:v1

Click the Web Preview button at the top right of Cloud Shell and then select port 8080. Cloud Shell opens the preview URL on its proxy service in a new browser window.
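You can also hit the container directly from a second Cloud Shell tab (the docker run command above stays in the foreground). The exact version and hostname in the response will differ:

# Send a request to the locally running container
curl http://localhost:8080

# The response is a short greeting similar to:
#   Hello, world!
#   Version: 1.0.0
#   Hostname: <container id>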

  8. Configure the Docker command-line tool to authenticate to Container Registry:

gcloud auth configure-docker

  9. Tag the Docker image you just built and push it to the registry:

docker tag hello-app:v1 gcr.io/$PROJECT_ID/hello-world

docker push gcr.io/$PROJECT_ID/hello-world
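To confirm that the push worked, you can list the repository contents with gcloud, for example:

# List the images stored in the project's Container Registry
gcloud container images list --repository=gcr.io/$PROJECT_ID

# List the tags of the image we just pushed
gcloud container images list-tags gcr.io/$PROJECT_ID/hello-world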

We are done with the image. Now we will create a cluster and deploy the application from it.

Enable APIs

First, you need to enable four APIs for your Project: the Kubernetes Engine API, the Cloud Resource Manager API, the Google Container Registry API, and the Stackdriver API. These APIs provide the programmatic interface for your pipeline to create and update resources in your cluster.

Click on the console menu in the top left, highlight APIs & Services, and click Dashboard:

Now, click on the console menu in the top left, highlight Kubernetes Engine, and click Clusters:

Click the Create cluster button.

At this point, you will see a long list of cluster templates and options. Leave the Standard cluster option selected and enter a new name for the cluster in the Name field.

  • Choose a Zone that's located close to you.
  • Go to default-pool, select Nodes on the left side, and change the boot disk size to 30 GB.
  • Now, click Create.
  • Go to the search bar, enter "workloads", and select Kubernetes Engine from the dropdown.
  • Click Deploy.
  • In the Specify container section, select Existing container image.
  • In the Image path field, click Select.
  • In the Select container image pane, select the hello-world image you pushed to Container Registry and click Select.
  • In the Container section, click Done, then click Continue.
  • In the Configuration section, under Labels, enter app for Key and hello-app for Value.
  • Under Configuration YAML, click View YAML. This opens a YAML configuration file.
  • In the Cluster section, select the cluster you created.
  • Click Continue and Deploy.
  • Click the Expose button to make the workload reachable from a public IP.
  • Expose it on port 8080 using a load balancer (a command-line equivalent is sketched below).
  • You will get an External Endpoint (public IP) to access your application.
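If you prefer the command line, the same deploy-and-expose flow can be done with kubectl after fetching the cluster credentials (next step). A minimal sketch, assuming the deployment name hello-world:

# Create a Deployment from the image pushed to Container Registry
kubectl create deployment hello-world --image=gcr.io/$PROJECT_ID/hello-world

# Expose it on port 8080 through a LoadBalancer Service (hello-app listens on 8080)
kubectl expose deployment hello-world --type=LoadBalancer --port=8080 --target-port=8080

# Watch for the external IP to appear
kubectl get service hello-world --watch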

To access the pods with kubectl from the command line, we need to generate a kubeconfig entry in our environment.

Open Cloud Shell (search for it in the search bar) and run the following commands, substituting your cluster name and zone:

  • gcloud container clusters get-credentials ClusterName --zone=ZoneName
  • kubectl get pods #To get pods in a cluster

Congratulations! You have successfully deployed the hello-world application on the Kubernetes cluster.

Some useful kubectl commands:

  • kubectl attach my-pod -i # Attach to a running container
  • kubectl logs my-pod # Dump pod logs (stdout)
  • kubectl logs -l name=myLabel # Dump pod logs with label name=myLabel
  • kubectl get pods # To get pods in a cluster
  • kubectl delete pod <Pod-name> # Delete a pod
  • kubectl version # To check the version of kubectl
  • kubectl get services # List all services in the namespace
  • kubectl get pods --all-namespaces # List all pods in all namespaces
  • kubectl get pods -o wide # List all pods in the current namespace, with more detail
  • kubectl get deployment <name> # List a particular deployment
  • kubectl get pods # List all pods in the namespace
  • kubectl get pod <pod-name> -o yaml # Get a pod's YAML
  • kubectl describe nodes <node> # Describe a node with verbose output
  • kubectl describe pods <pod> # Describe a pod with verbose output
  • kubectl explain pods # Get the documentation for pod manifests
  • kubectl delete pods,services -l name=myLabel

# Delete pods and services with label name=myLabel

  • kubectl scale --replicas=3 -f foo.yaml

# Scale a resource specified in "foo.yaml" to 3

  • kubectl scale --current-replicas=2 --replicas=3 deployment/mysql

# If the deployment named mysql's current size is 2, scale mysql to 3

  • kubectl run my-pod --image=nginx

# Imperatively creates a single Pod named my-pod running an nginx web server. (The --generator flag has been removed from recent kubectl versions; to imperatively create a Deployment, which manages a ReplicaSet that in turn creates the Pods, use kubectl create deployment my-dep --image=nginx.)

Let’s create an nginx deployment the declarative way:

  • vim simple_deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

  • kubectl apply -f simple_deployment.yaml

This declarative way of creating resources (kubectl apply) keeps track of the configuration we provided, so we can later edit the file and re-apply it to update the resource.
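For example, scaling the deployment declaratively just means editing the file and re-applying it. A small sketch, assuming you add a replicas field to the spec:

# After adding, e.g., "replicas: 3" under spec: in simple_deployment.yaml
kubectl apply -f simple_deployment.yaml

# Verify the change and watch the rollout
kubectl get deployment nginx-deployment
kubectl rollout status deployment/nginx-deployment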

Go through the kubectl cheat sheet for more kubectl commands.
