In this blog, we’ll see how we can configure Jenkins on our Kubernetes clusters to scale on-demand, allowing for hundreds or thousands of pipeline jobs per day. Additionally, we’ll see how easy it is to incorporate Anchore vulnerability scanning and compliance into these pipelines to make sure we aren’t deploying or pushing insecure containers into our environments. In this example, we are using Amazon’s EKS, but the same steps can be performed on any Kubernetes cluster.
Step 1: Configure
First, we need to create a Jenkins deployment, a load balancer service, a ClusterIP service and a cluster role binding. To keep things simple, we can apply this jenkins-deploy.yaml file, which uses the latest Jenkins image from Docker Hub.
kubectl apply -f jenkins-deploy.yaml
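If you'd rather write the manifest yourself, a minimal sketch of the four objects might look like the following. This is illustrative only — names, labels and permissions are assumptions and the linked jenkins-deploy.yaml may differ (in particular, cluster-admin is used here for brevity and should be scoped down in production):

```yaml
# Illustrative sketch -- the linked jenkins-deploy.yaml may differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - containerPort: 8080   # web UI
            - containerPort: 50000  # inbound agent (JNLP) connections
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-lb          # the load balancer service queried below
spec:
  type: LoadBalancer
  selector:
    app: jenkins
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-agent       # ClusterIP service for agent connections
spec:
  type: ClusterIP
  selector:
    app: jenkins
  ports:
    - port: 50000
      targetPort: 50000
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin       # broad for brevity; scope down in production
subjects:
  - kind: ServiceAccount
    name: default           # lets the plugin manage pods in the default namespace
    namespace: default
```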
Run a kubectl get for the jenkins-lb service that was created for us and navigate to the EXTERNAL-IP in your browser. You will now be at the Jenkins UI (keep in mind it can take some time for the load balancer to be provisioned and become active).
kubectl get svc jenkins-lb
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP
jenkins-lb   LoadBalancer   10.100.141.50   aaaceb153e97241fab81d9d4109440c-1811878998.us-east-2.elb.amazonaws.com
Now that we are at the Jenkins UI, go to Manage Jenkins > Configure Global Security, check the “Enable proxy compatibility” box under CSRF Protection and click “Save.” Enabling proxy compatibility prevents the “No valid crumb was included in the request” errors Jenkins can throw when it sits behind a load balancer.
Once that’s complete, go to Manage Jenkins > Manage Nodes and Clouds and click the gear icon on the far right of the master node row. Set the “# of executors” to zero and click “Save.” The master instance should only schedule build jobs, distribute them to agents for execution, monitor the agents and collect the build results; since we don’t want it executing builds itself, we set its executors to zero.
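If you later manage Jenkins with the Configuration as Code (JCasC) plugin, the same setting can be captured declaratively — a sketch, assuming the JCasC plugin is installed:

```yaml
# jenkins.yaml for the Configuration as Code plugin (assumes JCasC is installed)
jenkins:
  numExecutors: 0   # master schedules and monitors builds; agents execute them
```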
From Manage Jenkins > Manage Plugins, install the Anchore Container Image Scanner, Kubernetes and Pipeline plugins. Once those have installed, go to Manage Jenkins > Configure System and scroll down to the Anchore Container Image Scanner settings. Set the “Engine URL” to the address of the Anchore Engine API (you can find its IP with kubectl describe on the Anchore Engine API pod), enter your Engine Username and Engine Password (defaults: username admin, password foobar, port 8228), then click “Save.” Don’t forget the http:// prefix and the /v1 path.
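One way to assemble the Engine URL is to look up the service ClusterIP instead of describing the pod; the service name below is an assumption, so adjust it to match your Anchore deployment:

```shell
# Assumption: the Anchore Engine API is exposed by a service named "anchore-engine-api"
API_IP=$(kubectl get svc anchore-engine-api -o jsonpath='{.spec.clusterIP}')

# The value to enter in the "Engine URL" field -- note the http:// and /v1
echo "http://${API_IP}:8228/v1"
```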
Now go to Credentials > System > Global credentials (unrestricted) > Add Credentials, add a “Kubernetes Service Account” credential, and click “OK.” This allows the Jenkins Kubernetes plugin to manage pods in the default namespace of the cluster using the cluster role binding that we created earlier.
Next, we need to configure the Kubernetes plugin. Go to Manage Jenkins > Manage Nodes and Clouds > Configure Clouds and add a Kubernetes Cloud. If you’re using EKS, retrieve the API server endpoint from the AWS EKS cluster dashboard and paste it into the “Kubernetes URL” field (other platforms may have different names or locations for the API server endpoint). Add the Kubernetes Service Account credential we just created, click “Test Connection”, and enter the Jenkins URL (you can find the pod IP with kubectl describe on the Jenkins master pod created earlier). Don’t forget the http:// prefix and the :8080 port.
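The Jenkins URL only needs to be reachable from inside the cluster, so the pod IP works; a sketch of the lookup, assuming the Jenkins master pod carries the label app=jenkins:

```shell
# Assumption: the Jenkins master pod is labeled app=jenkins
JENKINS_IP=$(kubectl get pod -l app=jenkins -o jsonpath='{.items[0].status.podIP}')

# The value to enter in the "Jenkins URL" field -- note the http:// and :8080
echo "http://${JENKINS_IP}:8080"
```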
Below that, we must create a Pod Template for our Jenkins Agents. Enter a Name and Label, set the Usage to “Use this node as much as possible,” create a Container Template using the Docker image jenkins/inbound-agent, then click “Save.”
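The same pod template can also be declared per-pipeline rather than globally; a sketch using the Kubernetes plugin’s scripted-pipeline syntax (the label is illustrative, and the image mirrors the UI configuration above):

```groovy
// Sketch: declare the agent pod inline instead of in the global cloud config.
podTemplate(label: 'jenkins-agent', containers: [
    containerTemplate(name: 'jnlp', image: 'jenkins/inbound-agent')
]) {
    node('jenkins-agent') {
        stage('Smoke test') {
            sh 'echo Hello from a dynamically provisioned agent'
        }
    }
}
```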
We now have our Jenkins Master running, Anchore plugin configured and Kubernetes plugin configured. All that’s left to do is create our pipeline jobs and test!
Step 2: Test
Create a pipeline job and scroll down to its Pipeline configuration settings. Paste the contents of this Jenkinsfile into the Pipeline script, uncheck the “Use Groovy Sandbox” box, and click “Save.” A traditional Jenkinsfile may involve building an image, running QA tests, scanning with Anchore and then pushing to a registry, but for this example, we’re just showing the “Analyze with Anchore plugin” stage.
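A minimal scripted pipeline containing just that stage might look like the following sketch — the image name is an example, and `bailOnFail` is the plugin option that fails the build on a policy violation:

```groovy
// Sketch of an "Analyze with Anchore plugin" stage.
node {
    stage('Analyze with Anchore plugin') {
        // The Anchore plugin reads the images to scan from a file, one per line
        writeFile file: 'anchore_images', text: 'httpd:latest'
        anchore name: 'anchore_images', bailOnFail: true
    }
}
```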
Create some more copies of this pipeline job using any images you want. A typical workflow may involve triggering these pipelines from a git push or merge request, but we’ll just trigger them manually for the sake of testing by using the clock icon on the far right of each item’s row. We’ll see our jobs in the build queue and our Jenkins Agents spinning up in the build executor status.
If we watch our pods, we’ll see the Jenkins Agents pending, creating, running our pipeline and then terminating.
If we take a look at our test2 pipeline and select the newly-created “Anchore Report,” we can see that httpd:latest was analyzed and the policy evaluation result was “fail” due to eight high-severity CVEs (Common Vulnerabilities and Exposures).
We now have the ability within our cluster to dynamically scale Jenkins Agents to run our pipeline jobs. We’ve also seen how to integrate Anchore into these pipeline jobs to analyze containers and prevent non-compliant ones from entering our production environment.