Launching your First Kubernetes Cluster with Nginx running using Kubeadm
In this project, I will cover everything you need to know to get started. From setting up your environment and installing Kubernetes components to configuring networking and deploying Nginx, I will walk you through each step with clear explanations and easy-to-follow instructions. By the end, you'll have a fully functional Kubernetes cluster up and running, with the powerful Nginx web server serving your applications.
Before starting the hands-on work, let's go through a quick introduction to Minikube, Kubeadm, and pods in Kubernetes.
Minikube
Minikube is a local Kubernetes, focused on making it easy to learn and run K8s applications on your own machine or VM. It is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.
Features of Minikube
Supports the latest Kubernetes release (+6 previous minor versions)
Cross-platform (Linux, macOS, Windows)
Deploy as a VM, a container, or on bare-metal
Multiple container runtimes (CRI-O, containerd, Docker)
Direct API endpoint for blazing-fast image load and build.
Advanced features such as LoadBalancer, filesystem mounts, FeatureGates, and network policy.
Addons for easily installing Kubernetes applications.
Supports common CI environments.
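As a rough illustration (once Minikube is installed, which we cover below), these features map onto simple CLI flags and subcommands; the driver, runtime, and addon chosen here are only examples:
minikube start --driver=docker --container-runtime=containerd   # deploy as a container with a specific runtime
minikube addons list                                             # see the available addons
minikube addons enable ingress                                   # install an addon
minikube status
minikube stop
minikube delete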
Pods:
In Kubernetes, a pod is the smallest and simplest unit in the deployment model. A pod is a logical host for one or more containers that are deployed together on the same node and share the same network namespace. Each pod has a unique IP address within the cluster and can communicate with other pods via a shared network.
A pod can contain one or more containers, which share the same network and file system. These containers can be tightly coupled and work together to provide a specific application service. For example, a pod can contain an application container and a sidecar container that performs logging or monitoring tasks.
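As a small sketch of that idea (the pod name, images, and sidecar command below are illustrative, not part of this tutorial), a single pod can declare an application container and a sidecar container together:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar        # hypothetical pod name
spec:
  containers:
  - name: app                   # main application container
    image: nginx
  - name: log-sidecar           # sidecar container running alongside the app
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]
EOF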
Kubernetes manages pods as a unit, scheduling them to run on nodes in the cluster and ensuring that the desired number of replicas are always available. Pods are ephemeral and can be created, deleted, or replaced at any time by Kubernetes. Because of this, any data or state that needs to be persisted should be stored outside the pod, such as in a persistent volume.
Create an EC2 instance with the t2.medium configuration, since Kubernetes needs a minimum of 2 CPUs to run.
Log in to the server and then install Docker.
Use the commands below to install Minikube.
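A typical way to install Minikube on a Linux (x86_64) server and start it with the Docker driver installed above is:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start --driver=docker   # uses the Docker engine installed in the previous step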
What is Kubeadm?
Kubeadm is a tool used for automating the process of setting up and managing Kubernetes clusters. It is a command-line tool that can be used to bootstrap a cluster, configure the control plane, and add worker nodes to the cluster. Kubeadm is part of the Kubernetes project and is a recommended way of creating and managing production-grade clusters.
Kubeadm performs several tasks during the cluster creation process. These include:
Setting up the Kubernetes control plane: This includes initializing the etcd cluster, generating certificates and keys, and configuring the Kubernetes API server, scheduler, and controller manager.
Joining worker nodes to the cluster: Kubeadm generates a token that is used by worker nodes to join the cluster. Worker nodes are required to have the token, as well as the cluster's certificate authority (CA) and key.
Installing network plugins: In order for pods to communicate with each other in the Kubernetes cluster, you need to install a network plugin. Kubeadm supports several plugins, including Flannel, Calico, and Weave.
Kubeadm is designed to be flexible and configurable, allowing users to customize their Kubernetes cluster based on their specific requirements. It supports various configuration options, such as specifying the pod network CIDR and the service CIDR, enabling or disabling certain features, and configuring the address the API server advertises.
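For illustration only (later in this tutorial we run a plain kubeadm init), these options can be passed as flags; the CIDRs and address below are placeholder values, not ones you must use:
kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=<master-private-ip>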
Overall, Kubeadm simplifies the process of creating and managing Kubernetes clusters, making it easier for developers and DevOps teams to deploy and manage containerized applications.
Now let's get started with practicals.
First, I will create one AWS server with 2 CPUs and 4 GB of RAM for the master node.
A minimum of two servers running Ubuntu 18.04 or later, with at least 2GB of RAM and 2 CPUs. One server will act as the master node, and the others will be worker nodes.
SSH access to all servers.
Master:
Worker:
Step: Install Docker and Kubeadm
First, we need to install Docker and Kubeadm on all of the servers. Run the following commands on each server:
sudo apt update -y
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update -y
sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
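Optionally, you can run a few sanity checks on each node after installation (these are not part of the original steps but are common practice: the kubelet expects swap to be disabled, and holding the packages prevents accidental upgrades):
sudo swapoff -a                              # kubelet requires swap to be off
kubeadm version
kubectl version --client
sudo apt-mark hold kubeadm kubectl kubelet   # keep the pinned 1.20.0 versions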
Now our master and worker nodes are ready individually, but they are not connected to each other yet.
So, let's connect them. Go to the master node and switch to the root user with the command below.
sudo su
Now, let's initialize Kubeadm on Master and set up a cluster with the following command.
kubeadm init
This command will pull the etcd, scheduler, controller manager, and kube-apiserver images, among others.
Once it finishes, you will see instructions asking you to run a set of commands.
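Those printed instructions typically look like the following and give your user kubectl access to the new cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config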
Now, on the master node, finish the setup by creating the pod network with the command below, which installs the Weave Net plugin.
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
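To confirm the network plugin is up and the master node has become Ready, you can check with the commands below (the exact pod names will differ on your cluster):
kubectl get pods -n kube-system
kubectl get nodes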
Next, we will generate a connection token that the worker node will use to join the master node.
Create the token with the command below on the master node.
kubeadm token create --print-join-command
Any worker node that has this token will be able to join my cluster. Now, I will jump to the worker server and switch to the root user.
First, I will reset the node so that it can join the cluster with a clean state. Note that we never run "kubeadm init" on this machine; it will always remain a worker node.
kubeadm reset
Now we will connect the master and the worker node, but before that, we need to allow port 6443 (the API server port that appears in the join command) in the master node's security group, if it is not already allowed.
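You can open the port from the AWS console, or, as a sketch, with the AWS CLI (the security group ID and source CIDR below are placeholders for your own values):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 6443 --cidr 172.31.0.0/16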
Now copy the join command generated above and append the verbosity flag "--v=5" at the end, as shown below.
Worker Node:
kubeadm join 172.31.27.132:6443 --token w1h5uy.vxvu1arxuxxhf5u6 --discovery-token-ca-cert-hash sha256:5688fcdbb752d72bed51f84fe111f3ec25d86679115103d34ca578521d4fe085 --v=5
Currently, the master node lists only itself, but after running the above command on the worker node, the worker will also appear in the list.
Before joining, only the master shows up; after joining, both nodes are listed.
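You can verify this on the master before and after the join with:
kubectl get nodes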
Your cluster is ready!!!!!
Now I want to run the Nginx pod on my worker node. It's very easy!!
Just go to your Master and run the below command.
kubectl run nginx --image=nginx --restart=Never
And here is the magic! Your Nginx is running on your worker node.
Check it with the commands below.
Worker Node:
docker ps
Master Node:
You can see it with the command below, which shows the status of the pods running in the Kubernetes cluster, including the Nginx pod created by the kubectl run command above.
kubectl get pod
Master:
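As an optional next step (not part of the original walkthrough), you can expose the pod so Nginx actually serves traffic; the NodePort assigned by Kubernetes and the worker's IP below are placeholders:
kubectl expose pod nginx --port=80 --type=NodePort
kubectl get svc nginx                        # note the assigned NodePort (something in the 30000-32767 range)
curl http://<worker-private-ip>:<nodeport>   # from any node, or a browser if the port is open in the security group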
Congratulations! You have successfully Launched your first Kubernetes Cluster with Nginx running using Kubeadm.
Thank you for reading this blog. I hope you learned something new today! If you found this blog helpful, please like, share, and follow me for more blog posts like this in the future.
— Happy Learning !!!
Let’s connect !!!