Secure and Scalable React.js E-commerce: CI/CD, Docker, SonarQube, and Kubernetes Orchestration
Today, we'll explore how to set up Continuous Integration and Continuous Deployment (CI/CD) for our React.js E-commerce application. We'll create a pipeline script that deploys the application in a Docker container while leveraging SonarQube for code quality checks, OWASP Dependency-Check for examining project dependencies for known vulnerabilities, and Trivy for scanning both the project's files and its Docker images for security risks. Together, these tools help us secure the project's deployment.
We've also optimized our Docker image following industry best practices. Initially, the project's image size was 1.4GB. By writing an efficient Dockerfile, we brought it down to a mere 135MB, roughly a 90% reduction (about a tenfold decrease in size). This smaller image has been incredibly beneficial for the project.
The exciting part is deploying our application on Kubernetes! We'll deploy it both as a standalone Docker container and on a Kubernetes cluster. For the Kubernetes deployment, we'll create deployment and service manifest files.
✅ Developed a Continuous Integration and Continuous Deployment (CI/CD) pipeline for a React.js E-commerce application.
✅ Utilized Kubernetes orchestration and Docker containers to deploy the application, cutting the Docker image size from 1.4GB to about 135MB by following industry best practices.
✅ Integrated SonarQube for code quality checks and OWASP Dependency Check, ensuring security by examining code dependencies and vulnerabilities.
✅ Incorporated Trivy for comprehensive security scans of Docker images and files, reducing security risks and vulnerabilities.
✅ Successfully deployed the application both in a standalone Docker container and on a Kubernetes cluster, using deployment and service manifest files.
✅ Demonstrated a hands-on approach from scratch, deploying instances for the main server and SonarQube server, and setting up Kubernetes nodes.
✅ Installed and configured Jenkins, SonarQube, and Docker, creating credentials for SonarQube and DockerHub.
✅ Configured CI/CD pipeline stages, including 'Clean Workspace,' SonarQube analysis, OWASP Dependency Check, Trivy FS Scan, Docker Build, Push to Docker Hub, and Deployment to Docker Container.
✅ Verified application functionality by accessing the app, testing product purchases, adding items to the cart, and removing them to ensure everything works properly.
✅ Ensured scalability, availability, and load balancing with Kubernetes Deployment, setting the desired number of replicas to 3, enhancing application performance during varying workloads and unforeseen failures.
This entire process will be demonstrated hands-on, starting from scratch. Let's get started and try it out step by step!!!
First, we'll launch two instances: one for the main server, where we'll set up and deploy the entire project, and the other for the SonarQube server, where we'll run SonarQube as a Docker container.
We'll deploy our application using Kubernetes orchestration. To do this, we'll install Kubeadm and Kubectl. We've set up two Ubuntu 20.04 instances: one for the K8s master and the other for the K8s worker node. Additionally, we'll install Kubectl specifically on the Jenkins machine.
Source Code: https://github.com/nahidkishore/Shop-safely-now.git
Install and Setup Jenkins:
To start, we'll install Jenkins by following the steps provided and then set up our Jenkins server. Let's get started with this process!
sudo apt update
sudo apt install fontconfig openjdk-17-jre -y
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee \
/usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins -y
sudo systemctl start jenkins
sudo systemctl enable jenkins
You can get the ec2-instance-public-ip-address from your AWS EC2 console page.
Edit the inbound traffic rules to allow custom TCP on port 8080, then open http://<ec2-instance-public-ip-address>:8080 in your browser.
Using the command below, we'll copy the initial admin password from the terminal and paste it into the unlock screen. After clicking the "Continue" button, our Jenkins server will open.
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Next, by selecting the "Install suggested plugins" option, the required default plugins will be installed automatically. Once the installation of these suggested plugins is complete, we'll proceed to set up the User for Jenkins.
After making the necessary configurations, we'll click on the "Save and Continue" button, followed by the "Save and Finish" button. Our Jenkins setup will then be complete!
Run the below commands as root user to install Docker
sudo su -
sudo apt update
sudo apt install docker.io -y
sudo chmod 777 /var/run/docker.sock
sudo usermod -aG docker $USER
sudo usermod -aG docker jenkins
sudo usermod -aG docker ubuntu
sudo systemctl restart docker
Once you have done this, it's best practice to restart Jenkins
sudo systemctl restart jenkins
Sonarqube setup:
Install Docker for Sonarqube container on sonar-server instance
sudo apt-get update
sudo apt-get install docker.io -y
sudo usermod -aG docker $USER
sudo chmod 777 /var/run/docker.sock
sudo docker ps
docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
Our SonarQube Docker container is now running. We'll attempt to access our SonarQube server. To do this, we need to enable inbound traffic on port 9000 for the SonarQube instance's EC2 instance. The access URL will be in the format http://<EC2-instance-public-IP-address>:9000.
Create SonarQube Credential in Jenkins
Once we've set up SonarQube, we'll generate a SonarQube token. This token will be added to the credentials on our Jenkins server, allowing Jenkins to connect with SonarQube.
To generate the token, go to SonarQube → Administration → Security, enter a token name such as "Jenkins," and then click "Generate."
We'll set up webhooks in SonarQube for communication with Jenkins.
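The webhook URL points back to the Jenkins server, typically in the form http://<jenkins-server-ip>:8080/sonarqube-webhook/ (the endpoint used by the SonarQube Scanner plugin for Jenkins).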
Next, navigate to Jenkins → Manage Jenkins → Manage Credentials → System → Global Credentials → Add Credentials, and add the SonarQube token as a "Secret text" credential.
Create DockerHub Credential in Jenkins
To create DockerHub credentials in Jenkins, follow these steps: Jenkins → Manage Jenkins → Manage Credentials → Stores scoped to Jenkins → Global → Add Credentials.
Install Trivy
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y
Next, we'll log in to Jenkins and start configuring our pipeline.
Install plugins such as SonarQube Scanner, NodeJS, OWASP Dependency-Check, Docker, and Kubernetes
Go to Manage Jenkins → Plugins → Available Plugins →
Install the plugins below:
1 → SonarQube Scanner (install without restart)
2 → OWASP Dependency-Check
3 → Docker, Docker Pipeline, Docker CLI, Docker build step
4 → Kubernetes and related plugins
5 → NodeJS
6 → JDK
Configure Sonar Scanner, OWASP Dependency-Check, JDK, and NodeJS in Global Tool Configuration
Go to Manage Jenkins → Tools, add the Dependency-Check and SonarQube Scanner installations, then click Apply and Save.
Configure Sonar Server in System Configuration:
Let's create a job for this real-world CI/CD project.
We'll create a new job for our application. In the item name section, we'll provide our project's application name, such as 'Shop Safely Now.' After selecting the pipeline option and clicking 'OK,' our new job will be created.
If we want, we can add a description of the project. A useful option is the "Discard old builds" setting, which we configure to keep only the last two builds. This lets us view the history of the previous two builds while preventing excessive storage consumption on our VM. This step is optional but helpful for managing storage.
Next, we'll write a pipeline script. To start, we'll create a 'hello world' script to test if our pipeline functions correctly. After scripting, we'll click 'Apply and Save' followed by 'Build Now' to ensure the pipeline runs smoothly.
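A minimal sketch of such a test pipeline in declarative syntax (the stage name and echo text are just placeholders):
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                // Simple smoke test to confirm the pipeline runs end to end
                echo 'Hello World'
            }
        }
    }
}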
Great! Our 'hello world' pipeline script is working properly.
Now, we'll move to the next step by checking out from Git and providing our project's repository URL.
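A sketch of that checkout stage; the branch name 'main' is an assumption and should match your repository's default branch:
stage('Checkout from Git') {
    steps {
        // Pull the application source from the project repository
        git branch: 'main', url: 'https://github.com/nahidkishore/Shop-safely-now.git'
    }
}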
Before running any new builds, we'll always initiate the 'Clean Workspace' stage to ensure our workspace is tidy and free from previous build artifacts. This step is crucial as it clears the workspace used during the build process, promoting a clean and consistent environment for subsequent builds in the pipeline.
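A sketch of the clean-up stage; cleanWs() is provided by the Workspace Cleanup plugin:
stage('Clean Workspace') {
    steps {
        // Remove any files left over from previous builds
        cleanWs()
    }
}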
Then, at the next stage, we'll create a Sonarqube analysis and Quality Gate stage to analyze our project's code for any code analysis or code smell.
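Here is one way these two stages can look. The server name 'sonar-server', the scanner tool name 'sonar-scanner', the credential ID 'Sonar-token', and the project key/name are assumptions that must match what's configured under Manage Jenkins and on the SonarQube server:
// Declared at the pipeline level so the scanner binary path is available to the stages
environment {
    SCANNER_HOME = tool 'sonar-scanner'   // tool name configured under Manage Jenkins → Tools (assumption)
}

stage('SonarQube Analysis') {
    steps {
        withSonarQubeEnv('sonar-server') {   // server name configured under System (assumption)
            sh '''$SCANNER_HOME/bin/sonar-scanner \
                  -Dsonar.projectName=shop-safely-now \
                  -Dsonar.projectKey=shop-safely-now'''
        }
    }
}
stage('Quality Gate') {
    steps {
        script {
            // Waits for SonarQube's webhook callback carrying the quality gate result
            waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
        }
    }
}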
Our pipeline script is working properly without any errors or issues. So, we'll now verify if our project has been successfully created on the Sonarqube server. This check ensures there are no issues because if there were, the pipeline script would display error messages causing a build stage failure.
Great news! Our project has been successfully created on the Sonarqube server, and the 'quality gate status' shows 'Passed.' This means our Sonarqube analysis and Quality Gate stage have successfully completed.
OWASP Dependency-Check & Trivy FS Scan:
Next, we'll perform an OWASP Dependency-Check. OWASP Dependency-Check identifies project dependencies and checks them against publicly known vulnerabilities. It scans all of our project's dependencies and generates a report named 'dependency-check-report.xml' in the project's workspace. We can review this XML file to see the kinds of issues it has identified.
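A sketch of this stage using the OWASP Dependency-Check plugin steps; the installation name 'DP-Check' is an assumption and must match the installation configured under Tools:
stage('OWASP Dependency Check') {
    steps {
        // Scan the project directory and publish the generated XML report
        dependencyCheck additionalArguments: '--scan ./', odcInstallation: 'DP-Check'
        dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
    }
}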
Looks like our OWASP Dependency-Check stage has completed successfully, and the dependency-check-report.xml file has been generated in our workspace. The console output provides comprehensive details about each stage of the pipeline, including any failed stages, the types of issues encountered, and the reasons for failure. The location of the generated XML file is also easy to spot in the console output.
Trivy FS Scan:
Trivy is an open-source vulnerability scanner that works on filesystems as well as container images. In this stage we run a Trivy filesystem (FS) scan against the project's files and dependencies during the build; a sketch of the stage follows the list below. This scan:
Scans code early for vulnerabilities, preventing insecure deployments.
Improves code quality and compliance with security standards.
Reduces attack surface and potential damage from exploits.
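A minimal sketch of the Trivy FS stage, writing the results to a text file in the workspace:
stage('Trivy FS Scan') {
    steps {
        // Scan the project files and dependencies for known vulnerabilities
        sh 'trivy fs . > trivy-fs-report.txt'
    }
}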
Docker build and Push to Docker Hub:
Moving on to Docker Build and Push to Docker Hub: We'll build our Docker file and then push our Docker image to a DockerHub repository. Once the build stage is successful, we'll verify if our Docker image has indeed been successfully pushed to the DockerHub repository.
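A sketch of this stage using the Docker Pipeline plugin; the credential ID 'docker' and tool name 'docker' are assumptions that must match the DockerHub credential and Docker installation configured in Jenkins:
stage('Docker Build & Push') {
    steps {
        script {
            // Credential ID and tool name below are assumptions
            withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
                sh 'docker build -t shop-safely-now .'
                sh 'docker tag shop-safely-now nahid0002/shop-safely-now:latest'
                sh 'docker push nahid0002/shop-safely-now:latest'
            }
        }
    }
}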
Indeed, our image has been successfully pushed to DockerHub, and the timestamp of its last push shows it was pushed just a few minutes ago.
Trivy Docker Image Scan
Next, let's address the Trivy Docker image scan. In this stage, Trivy scans the container image we just pushed, identifying security vulnerabilities in its OS packages and application dependencies.
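A minimal sketch of this stage, scanning the image that was just pushed and saving the results to the workspace:
stage('Trivy Image Scan') {
    steps {
        // Scan OS packages and dependencies inside the pushed image
        sh 'trivy image nahid0002/shop-safely-now:latest > trivy-image-report.txt'
    }
}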
Deploy to Docker Container:
In the final stage, if all the previous stages have completed successfully, we'll deploy our Docker image into a Docker container. This will make our application accessible from anywhere in the world.
stage("Deploy to Docker Container"){
steps{
sh " docker run -d --name shop-safely-now -p 3000:3000 nahid0002/shop-safely-now:latest "
}
}
Looks like our Docker image has been successfully deployed into a Docker container.
To check if our container is properly running, we can use the 'docker ps' command. This command will verify if our container is indeed running. Here, we see that our container is up and running.
Now, let's try accessing our application. Before that, it's important to ensure that the port number of our Docker image is added as an inbound rule in the security settings. Without this, we won't be able to access our application. In our case, the application's port number is 3000, so we've added port 3000 to the inbound rule section and saved the settings.
Next, we'll copy our instance's public IP address and paste it into our browser, adding the port number alongside it. Hitting 'Enter,' we can now see our application.
Wow! Congratulations! Finally, we've successfully accessed our application.
Let's test by purchasing some products to check if they're properly added to the cart section and if we can successfully remove them. This way, we'll ensure that our application is functioning correctly.
Deploy application on Kubernetes
We've already deployed our application in a Docker container, and it's successfully running. Now, we're planning to deploy our application on Kubernetes and try accessing it. So, let's get started.
To deploy the app on Kubernetes, we'll install Kubeadm and Kubectl. We've set up two Ubuntu 20.04 instances: one for the K8s master and the other for the worker node.
Additionally, we'll install Kubectl on the Jenkins machine itself, so the Kubernetes deployment stage of the pipeline doesn't fail; a sketch of the installation follows below.
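A minimal sketch of installing kubectl on the Jenkins server, assuming an Ubuntu machine with snap available:
sudo apt update
sudo snap install kubectl --classic
kubectl version --client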
Create Credentials for connecting to Kubernetes Cluster using kubeconfig
Execute the below command
cd .kube/
sudo cat config
Open your text editor or notepad, copy and paste the entire content, and save it in a file. We will upload this file.
Enter the ID as k8s (it must match the credentialsId used in the pipeline), choose the "Secret file" kind, upload the file, and save.
Now, we'll write a pipeline script for deploying our application.
stage('Deploy to Kubernetes') {
    steps {
        script {
            withKubeConfig([credentialsId: 'k8s', serverUrl: '']) {
                sh 'kubectl apply -f deployment.yaml'
            }
        }
    }
}
Our application has been successfully deployed on the Kubernetes cluster.
To check that our resources are running, we'll execute the following commands.
kubectl get all
kubectl get svc
kubectl get deployment
kubectl get pods
Once executed, if we see the service exposing NodePort 30885, we'll add this port number to the inbound rules of our worker node's EC2 instance. Then, by combining the worker node's public IP address with this port number in the browser, we should be able to access our application.
We've set the desired number of replicas to 3 in our deployment file, resulting in the creation of three pods. This configuration helps with application scalability, availability, and load balancing: running multiple replicas in a Kubernetes Deployment makes the application more robust and keeps performance and availability up during varying workloads and unexpected failures.
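A minimal sketch of what such deployment and NodePort service manifests could look like; the resource names and labels are assumptions, while the image, container port 3000, NodePort 30885, and replica count come from the steps above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-safely-now
spec:
  replicas: 3                      # three pods for availability and load balancing
  selector:
    matchLabels:
      app: shop-safely-now
  template:
    metadata:
      labels:
        app: shop-safely-now
    spec:
      containers:
        - name: shop-safely-now
          image: nahid0002/shop-safely-now:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: shop-safely-now-service
spec:
  type: NodePort                   # expose the app on a port of the worker node
  selector:
    app: shop-safely-now
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30885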
Wow! Congratulations, we've successfully accessed our application.
In conclusion, we've successfully demonstrated the implementation of a CI/CD pipeline for our E-commerce application. We've covered everything from setting up Jenkins, SonarQube, Docker, Trivy, and Kubernetes to deploying our application while incorporating security measures. This hands-on project provided an in-depth understanding of building, testing, and deploying applications with CI/CD practices. By following these steps, developers can ensure both the reliability and security of their deployments.
Thank you for reading this blog. If you found this blog helpful, please like, share, and follow me for more blog posts like this in the future.
— Happy Learning !!!