Kubernetes Deployment & Secure CI/CD Pipeline Implementation for Full-Stack Applications
Today, we'll explore how to set up Continuous Integration and Continuous Deployment (CI/CD) for our full-stack Node.js and React.js application. We'll create a pipeline script to deploy our application in a Docker container while leveraging SonarQube for code quality checks, OWASP Dependency-Check for examining our project's dependencies for publicly known vulnerabilities, and Trivy for scanning both our project files and our Docker images for security risks. Through these tools, we'll ensure the security of our project's deployment.
Moreover, we'll configure email notifications to receive alerts about our pipeline's build stage results. Whether the build succeeds or fails, Jenkins will automatically notify us via email once the build stage is complete.
The exciting part is deploying our application on Kubernetes! We'll deploy it as both a Docker container and a Kubernetes container. For Kubernetes deployment, we'll create deployment and service manifest files.
This entire process will be demonstrated hands-on, starting from scratch. Let's get started and try it out step by step!!!
First, we'll launch two instances: one for the full-stack app server, where we'll deploy our entire project, and the other for the SonarQube server. The full-stack app server is our primary server. Here, we'll set up our entire project. On the SonarQube server, we'll run a SonarQube Docker container for using SonarQube.
Source Code: https://github.com/nahidkishore/Node-React-Full-Stack-App.git
Install and Setup Jenkins:
To start, we'll install Jenkins by following the steps provided and then set up our Jenkins server. Let's get started with this process!
sudo apt update
sudo apt install fontconfig openjdk-17-jre -y
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee \
/usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
https://pkg.jenkins.io/debian-stable binary/" | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins -y
sudo systemctl start jenkins
sudo systemctl enable jenkins
You can get the ec2-instance-public-ip-address from your AWS EC2 console page.
Edit the inbound traffic rules of the instance's security group to allow custom TCP port 8080, then open Jenkins at http://<ec2-instance-public-ip-address>:8080.
Jenkins will ask for the initial admin password. Using the command below, we'll copy the password from the terminal and paste it into the unlock screen. After clicking the "Continue" button, our Jenkins server will open.
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Next, by selecting the "Install suggested plugins" option, the required default plugins will be installed automatically. Once the installation of these suggested plugins is complete, we'll proceed to set up the User for Jenkins.
After making the necessary configurations, we'll click on the "Save and Continue" button, followed by the "Save and Finish" button. Our Jenkins setup will then be complete!
Run the commands below as the root user to install Docker:
sudo su -
sudo apt update
sudo apt install docker.io -y
sudo chmod 777 /var/run/docker.sock
sudo usermod -aG docker $USER
sudo usermod -aG docker jenkins
sudo usermod -aG docker ubuntu
sudo systemctl restart docker
Once you have done this, it's a best practice to restart Jenkins:
sudo systemctl restart jenkins
SonarQube setup:
Install Docker for the SonarQube container on the sonar-server instance:
sudo apt-get update
sudo apt-get install docker.io -y
sudo usermod -aG docker $USER
sudo chmod 777 /var/run/docker.sock
sudo docker ps
docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
Our SonarQube Docker container is now running. We'll attempt to access our SonarQube server. To do this, we need to enable inbound traffic on port 9000 for the SonarQube instance's EC2 instance. The access URL will be in the format http://<EC2-instance-public-IP-address>:9000.
Create SonarQube Credential in Jenkins
Once we've set up SonarQube, we'll generate a SonarQube token. This token will be added to the credentials on our Jenkins server, allowing Jenkins to connect with SonarQube.
To generate the token, go to SonarQube → Administration → Security → Users, open the Tokens column, enter "Jenkins" as the token name, and click "Generate."
We'll set up webhooks in SonarQube for communication with Jenkins.
Next, navigate to Jenkins → Manage Jenkins → Manage Credentials → System → Global Credentials → Add Credentials, choose "Secret text," paste the SonarQube token, and give it an ID.
Create DockerHub Credential in Jenkins
To create DockerHub credentials in Jenkins, follow these steps: Jenkins → Manage Jenkins → Manage Credentials → Stores scoped to Jenkins → Global → Add Credentials.
Install Trivy
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y
Next, we'll log in to Jenkins and start configuring our pipeline.
Install Plugins: SonarQube Scanner, NodeJS, OWASP Dependency-Check, Docker, Kubernetes
Go to Manage Jenkins → Plugins → Available Plugins and install the plugins below:
1 → SonarQube Scanner (install without restart)
2 → OWASP Dependency-Check
3 → Docker, Docker Pipeline, Docker CLI, Docker build step
4 → Kubernetes-related plugins
5 → NodeJS
Configure Sonar Scanner and OWASP Dependency-Check in Global Tool Configuration
Go to Manage Jenkins → Tools, add the Dependency-Check and SonarQube Scanner installations, then click Apply and Save.
Configure Sonar Server in System Configuration:
Let's Create a Job, Treating It as a Real-World CI/CD Project
We'll create a new job for our full-stack application. In the item name section, we'll provide our project's application name, such as 'Node-React-Full-Stack-app.' After selecting the pipeline option and clicking 'OK,' our new job will be created.
If we want to describe our project, we can add a description. One useful setting is the 'Discard old builds' option, which we'll configure to retain only the last two builds. This way, we can still view the history of the previous two builds while preventing excessive storage consumption on our VM. This step is optional but beneficial for managing storage.
Next, we'll write a pipeline script. To start, we'll create a 'hello world' script to test if our pipeline functions correctly. After scripting, we'll click 'Apply and Save' followed by 'Build Now' to ensure the pipeline runs smoothly.
Great! Our 'hello world' pipeline script is working properly.
Now, we'll move to the next step by checking out from Git and providing our project's repository URL.
Then, in the next stage, we'll create SonarQube Analysis and Quality Gate stages to analyze our project's code for bugs, vulnerabilities, and code smells.
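These stages can be written as the minimal sketch below. It assumes a SonarQube server registered in Jenkins under the name 'sonar-server', a scanner tool installation named 'sonar-scanner' (exposed as SCANNER_HOME), and a token credential with ID 'sonar-token' — substitute the names you configured earlier:

```groovy
// Assumes: environment { SCANNER_HOME = tool 'sonar-scanner' } is declared at the pipeline level
stage('Checkout from Git') {
    steps {
        git branch: 'main', url: 'https://github.com/nahidkishore/Node-React-Full-Stack-App.git'
    }
}
stage('SonarQube Analysis') {
    steps {
        // 'sonar-server' must match the server name under Manage Jenkins → System
        withSonarQubeEnv('sonar-server') {
            sh '''$SCANNER_HOME/bin/sonar-scanner \
                -Dsonar.projectName=Node-React-Full-Stack-App \
                -Dsonar.projectKey=Node-React-Full-Stack-App'''
        }
    }
}
stage('Quality Gate') {
    steps {
        // Waits for the webhook callback we configured in SonarQube
        waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'
    }
}
```

With abortPipeline set to false, a failed quality gate is reported but does not stop the remaining stages; set it to true to enforce the gate strictly.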
Our pipeline script is working properly without any errors or issues. So, we'll now verify if our project has been successfully created on the Sonarqube server. This check ensures there are no issues because if there were, the pipeline script would display error messages causing a build stage failure.
Great news! Our project has been successfully created on the Sonarqube server, and the 'quality gate status' shows 'Passed.' This means our Sonarqube analysis and Quality Gate stage have successfully completed.
OWASP Dependency-Check
Next, we'll perform an OWASP Dependency-Check. OWASP Dependency-Check works by identifying project dependencies and checking for any publicly known vulnerabilities. It'll scan all dependencies of our project and generate a report. This report, named 'dependency-check-report.xml,' will be generated in our project's workspace. We can review this XML file to see the types of issues it has identified.
Looks like our OWASP dependency check stage has successfully completed, and within our workspace, the dependency check report XML file has been generated. The console output provides comprehensive details about each stage of our pipeline, showcasing any failed stages, types of issues encountered, reasons for failure, and more. Specifically, we can easily locate the location of our generated XML file within the console output.
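For reference, the Dependency-Check stage can be sketched as below; it assumes a Dependency-Check tool installation named 'DP-Check' under Manage Jenkins → Tools, so adjust that name to your own setup:

```groovy
stage('OWASP Dependency Check') {
    steps {
        // Scans the workspace; Yarn/Node audit checks are commonly disabled to avoid external rate limits
        dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
        // Publishes dependency-check-report.xml from the workspace onto the build page
        dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
    }
}
```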
Docker build and Push to Docker Hub:
Moving on to Docker Build and Push to Docker Hub: we'll build our Docker image from the Dockerfile and then push it to a DockerHub repository. Once the build stage is successful, we'll verify that our Docker image has indeed been pushed to the DockerHub repository.
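A sketch of this stage is shown below, assuming the DockerHub credential created earlier has the ID 'docker' and the Docker tool installation is also named 'docker' — replace these IDs and the image name with your own:

```groovy
stage('Docker Build & Push') {
    steps {
        script {
            // Logs in to DockerHub using the stored credential for the duration of the block
            withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
                sh 'docker build -t node-full-stack-app .'
                sh 'docker tag node-full-stack-app nahid0002/node-full-stack-app:latest'
                sh 'docker push nahid0002/node-full-stack-app:latest'
            }
        }
    }
}
```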
Indeed, our image has been successfully pushed to DockerHub, displaying the timestamp of its last push, indicating it was pushed just a minute ago.
Trivy Docker Image Scan
Next, let's address Trivy Docker Image Scan. Trivy serves as an open-source vulnerability scanner specifically designed for container images. Its main focus lies in scanning container images to identify any security vulnerabilities within their OS packages and dependencies.
To maintain a clean workspace, we've created a new stage named 'clean workspace.' Additionally, we've added two more stages: 'Trivy FS Scan' and 'Build Frontend,' responsible for scanning and building the frontend app, respectively. All three stages have been successfully executed.
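These stages, together with the Trivy image scan, can be sketched roughly as follows. The report file names (trivyfs.txt, trivy.txt) and the frontend build command are illustrative and should match your project layout:

```groovy
stage('Clean Workspace') {
    steps {
        cleanWs() // wipes the workspace before a fresh checkout
    }
}
stage('Trivy FS Scan') {
    steps {
        // Scans the project files for vulnerable dependencies and misconfigurations
        sh 'trivy fs . > trivyfs.txt'
    }
}
stage('Trivy Image Scan') {
    steps {
        // Scans the image's OS packages and dependencies; trivy.txt is attached to the notification email later
        sh 'trivy image nahid0002/node-full-stack-app:latest > trivy.txt'
    }
}
stage('Build Frontend') {
    steps {
        // Illustrative: assumes the React app lives in a client/ directory
        sh 'cd client && npm install && npm run build'
    }
}
```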
Deploy to Docker Container:
In the final stage, if all the previous stages have completed successfully, we'll deploy our Docker image into a Docker container. This will allow our application to be easily accessible from anywhere in the world.
stage('Deploy to Docker Container') {
    steps {
        sh 'docker run -d --name node-full-stack-app -p 4000:4000 nahid0002/node-full-stack-app:latest'
    }
}
Looks like our Docker image has been successfully deployed into a Docker container.
To check if our container is properly running, we can use the 'docker ps' command. This command will verify if our container is indeed running. Here, we see that our container is up and running.
Now, let's try accessing our application. Before that, it's important to ensure that the port number of our Docker image is added as an inbound rule in the security settings. Without this, we won't be able to access our application. In our case, the application's port number is 4000, so we've added port 4000 to the inbound rule section and saved the settings.
Next, we'll copy our instance's public IP address and paste it into our browser, adding the port number alongside it. Hitting 'Enter,' we can now see our application.
Wow! Congratulations! Finally, we've successfully accessed our application.
Now, let's add a few tasks and check if the application is working correctly. The new tasks we've added are listed below, and if needed, we can delete specific tasks.
Deploy application on Kubernetes
We've already deployed our application in a Docker container, and it's successfully running. Now, we're planning to deploy our application on Kubernetes and try accessing it. So, let's get started.
To deploy the app on Kubernetes, we'll install Kubeadm and Kubectl. We've set up two Ubuntu 20.04 instances: one for the K8s master and the other for the worker node. Additionally, we'll install Kubectl only on the Jenkins machine.
We'll install Kubectl on the Jenkins server to prevent any failure in the Kubernetes deployment stage.
Create Credentials for connecting to Kubernetes Cluster using kubeconfig
Execute the below command
cd ~/.kube/
sudo cat config
Open a text editor or notepad, copy and paste the entire content, and save it to a file. We will upload this file as the credential.
Enter the ID as K8s, choose "File" as the credential kind, upload the file, and save.
Now, we'll write a pipeline script for deploying our application.
stage('Deploy To Kubernetes') {
    steps {
        script {
            withKubeConfig([credentialsId: 'K8s', serverUrl: '']) {
                sh 'kubectl apply -f deployment.yaml'
            }
        }
    }
}
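For completeness, a minimal deployment.yaml for this app might look like the sketch below. The labels, replica count, and service name are illustrative; port 4000 and NodePort 30430 match the values used elsewhere in this walkthrough:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-full-stack-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-full-stack-app
  template:
    metadata:
      labels:
        app: node-full-stack-app
    spec:
      containers:
        - name: node-full-stack-app
          image: nahid0002/node-full-stack-app:latest
          ports:
            - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: node-full-stack-app-service
spec:
  type: NodePort
  selector:
    app: node-full-stack-app
  ports:
    - port: 4000
      targetPort: 4000
      nodePort: 30430
```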
Our application is now successfully deployed on Kubernetes.
To check if our pods and services are running, we'll execute the commands below.
kubectl get all
kubectl get svc
Once executed, if we see the service running and exposing NodePort 30430, we'll add this port number to the inbound rules of our worker node's EC2 instance. Then, by opening the worker node's public IP address with this port in the browser, we should be able to access our application.
Wow! Congratulations, we've successfully accessed our application.
Clean up your container:
We can write a pipeline script to stop and remove the container if needed, essentially cleaning it up.
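For example, a cleanup stage like the sketch below is often run before the deploy stage, so that re-running the pipeline doesn't fail on a duplicate container name (the name matches the one used in our deploy stage):

```groovy
stage('Clean Up Container') {
    steps {
        // '|| true' keeps the stage green when no old container exists yet
        sh 'docker stop node-full-stack-app || true'
        sh 'docker rm node-full-stack-app || true'
    }
}
```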
Configuring mail server in Jenkins ( Gmail )
Another great addition is configuring email notifications for our pipeline's build stage. We want to receive notifications via email automatically from the Jenkins server, whether the build status is a success or failure. After completing the build stage, notifications will be sent to our email.
So, here's the process:
Firstly, we'll need to install the Email Extension Template plugin.
Next, let's navigate to Gmail, access our profile, and manage our Google Account. Then, within the security tab, ensure that 2-step verification is enabled. Generate an app password by searching for 'other' in the app passwords section.
Once the plugin is installed in Jenkins, we'll need to access 'Manage Jenkins' and navigate to 'credentials' to add our email username and the generated password.
In 'Manage Jenkins' again, under 'configure system,' we'll set up the details in the 'E-mail Notification' section according to the provided guidelines.
For verification purposes, we'll test the email configuration to ensure that everything is properly set up.
After configuring the 'Extended E-mail Notification' section as indicated, we'll test the configuration to check if our settings are working correctly.
Now, let's test the configuration to confirm if the notifications are being sent successfully. By checking our inbox, we'll see a test email indicating whether our email configuration is functioning properly.
Finally, let's script the pipeline to enable email notifications for our build stage completion.
post {
    always {
        emailext attachLog: true,
            subject: "'${currentBuild.result}'",
            body: "Project: ${env.JOB_NAME}<br/>" +
                "Build Number: ${env.BUILD_NUMBER}<br/>" +
                "URL: ${env.BUILD_URL}<br/>",
            to: 'nahidkishore99@gmail.com',
            attachmentsPattern: 'trivy.txt'
    }
}
Once the build stage is successfully completed, we'll check our email to confirm if we've received any success-related emails.
Checking our email, we can see that we've received a success email indicating that our configurations are working properly.
In conclusion, we've successfully demonstrated the implementation of a CI/CD pipeline for our full-stack application. We've covered aspects from setting up Jenkins, SonarQube, Docker, Trivy, and Kubernetes to deploying our application while incorporating security measures and automated email notifications. This hands-on project provided an in-depth understanding of building, testing, and deploying applications with CI/CD practices. By following these steps, developers can ensure both the reliability and security of their deployments.
Thank you for reading this blog. If you found this blog helpful, please like, share, and follow me for more blog posts like this in the future.
— Happy Learning !!!