For software companies following an agile development process, releasing software every day is an intensive process. Jenkins is a tool that can speed up your workflow by automating many of the repetitive tasks, such as building, testing, and releasing.
What Is Jenkins?
Jenkins was originally built to serve a single purpose—to automate the building and testing of nightly software builds. New commits must be integrated into the master branch regularly (often referred to as “continuous integration”), which often involves heavy testing to ensure everything runs smoothly. Doing that once a week is one thing, but when you’re integrating multiple times a day, it’s better for everyone to have that be an automatic process.
Jenkins, like other CI/CD solutions, speeds up this process. You can think of it like an automated shell script. For example, to release a new build of a React app, you might have to run npm install and npm run build, then run a testing suite like Jest to verify that the new build passes all the tests. If it's successful, you might want to send it over to a testing environment for manual review, or simply publish a new release directly. All of these are commands you can script quite easily.
Jenkins can handle running all of these tasks as part of a pipeline. Whenever Jenkins detects changes to your source control (either on master or a feature branch), it will start the automated pipeline and run through each task you've given it. Some tasks are as simple as bash commands; other tasks may interface with an external service like Jira, Git, or your email provider. Jenkins is also fully extensible with plugins, and can really be made to do whatever you would like.
Jenkins has two major interfaces: Blue Ocean and classic. Blue Ocean is more recent, and includes a streamlined UI experience that makes creating pipelines much easier. We'll be using Blue Ocean for this guide, but most of the same concepts will apply to both versions of Jenkins.
Installing Jenkins
Jenkins binaries are available for multiple operating systems, but not usually from the default package managers. If you're on Ubuntu, you can download and install with the following few commands:
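Something like the following, based on Jenkins's Debian package instructions (the exact key URL and repository line change over time, so check the official docs):

```shell
# Add the Jenkins signing key and apt repository
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'

# Install Jenkins (requires Java)
sudo apt-get update
sudo apt-get install jenkins
```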
To make things platform-independent though, we'll run Jenkins using Docker, which is what Jenkins recommends anyway.
Installing Docker
Docker is a platform for running apps in "containers." A container includes all of the app's dependencies, ensuring it runs the same regardless of the base operating system. All you have to do is install Docker for your system and run a few commands.
For Ubuntu, you’ll have to install some prerequisites:
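Roughly the following, per Docker's Ubuntu install guide (package names may vary slightly between releases):

```shell
# Packages needed to fetch Docker's repository over HTTPS
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
```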
Then add Docker’s GPG key:
And add the repo itself:
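The `$(lsb_release -cs)` substitution picks up your Ubuntu release codename automatically:

```shell
# Add Docker's stable apt repository for your Ubuntu release
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable"
```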
Refresh your sources:
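```shell
sudo apt-get update
```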
And, finally, install Docker from apt:
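The packages Docker splits its engine into (as of recent releases):

```shell
# Docker engine, CLI, and container runtime
sudo apt-get install docker-ce docker-ce-cli containerd.io
```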
Docker should now be running on your system, which you can check with systemctl.
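For example:

```shell
# Verify the Docker daemon is active
sudo systemctl status docker
```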
Setting Up Jenkins
With Docker up and running, you can set up the Docker container for Jenkins. You’ll first need a bridged network for the containers to communicate on:
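Any network name works; `jenkins` is used here and assumed in the later commands:

```shell
# Create a bridged network for the Jenkins containers
docker network create jenkins
```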
Docker is inherently ephemeral—all data stored on containers will be deleted when those containers are stopped. To prevent this, you’ll want to store data on Docker volumes, which will persist to disk. You’ll need two volumes, one for some TLS certs Jenkins needs to connect with Docker, and the other for all your Jenkins data. These will be bound to the container at runtime.
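The volume names here are a convention (matching Jenkins's own Docker documentation), not a requirement:

```shell
# Volume for the TLS certificates Docker-in-Docker generates
docker volume create jenkins-docker-certs

# Volume for the Jenkins home directory (all persistent Jenkins data)
docker volume create jenkins-data
```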
Jenkins actually needs to be able to run Docker as part of its operation, to set up the build environments. This isn’t possible with normal Docker, so to make that function properly, you’ll need to run “Docker in Docker,” or DinD. The following command will run Docker’s official docker:dind container, bind the network and volumes you created in the previous steps to it, and publish it as a service running on port 2376 for the Jenkins container to use. You’re free to change this port if you want, though you’ll have to change it in the next step as well.
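A sketch of that command, closely following Jenkins's documented Docker setup; it assumes the network and volumes were named `jenkins`, `jenkins-docker-certs`, and `jenkins-data` as above:

```shell
docker container run --name jenkins-docker --rm --detach \
  --privileged --network jenkins --network-alias docker \
  --env DOCKER_TLS_CERTDIR=/certs \
  --volume jenkins-docker-certs:/certs/client \
  --volume jenkins-data:/var/jenkins_home \
  --publish 2376:2376 \
  docker:dind
```

The `--network-alias docker` flag is what lets the Jenkins container reach this service at the hostname `docker`.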
With that set up, you can run the jenkinsci/blueocean container using the following command:
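Again assuming the names used above for the network and volumes, and that DinD is listening on port 2376:

```shell
docker container run --name jenkins-blueocean --rm --detach \
  --network jenkins \
  --env DOCKER_HOST=tcp://docker:2376 \
  --env DOCKER_CERT_PATH=/certs/client \
  --env DOCKER_TLS_VERIFY=1 \
  --volume jenkins-data:/var/jenkins_home \
  --volume jenkins-docker-certs:/certs/client:ro \
  --publish 8080:8080 --publish 50000:50000 \
  jenkinsci/blueocean
```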
This will mount the network and volumes, set the DOCKER_HOST variable to the port that DinD is running on, and publish the service on port 8080, which you can change if it's not free. (The format is host:container.) It also publishes the Jenkins administrative connection on port 50000, in case you're planning on setting up a master Jenkins server with multiple distributed builds connecting to it.
Once Jenkins is up and running, it will be accessible over a browser on port 8080 of the host machine. You’ll need to do a bit of setup before Jenkins is fully usable, the first of which is authenticating yourself to prove you’re the owner of the server, and not a bot attacking a vulnerable web interface.
You'll need to enter a password stored under /var/jenkins_home/, which is part of the Docker volume. To get access to it, you'll have to run cat inside the Docker container:
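Assuming the container is named `jenkins-blueocean` (substitute whatever name you gave it):

```shell
# Print the initial admin password from inside the container
docker exec jenkins-blueocean cat /var/jenkins_home/secrets/initialAdminPassword
```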
This will print out the password, which you can copy to continue the rest of the setup.
You'll be asked to configure your admin username and password, and to install various plugins. Selecting "Install Recommended Plugins" will install a set of community-recommended ones to start. You're always free to install more later.
Creating a Pipeline
Once you set up Jenkins, you’ll be greeted with the following welcome screen. Click “Create A New Pipeline” to get started.
You'll have to select where your code is stored. You can link your GitHub or Bitbucket account directly with an access key.
However, a better solution is to simply choose generic "Git." Enter your repository URL, and Jenkins will give you a public key. Because Jenkins is able to make commits (and always commits changes to pipeline configuration), you should create a new service user and add the public key to it.
Jenkins will take a second to connect to Git, then bring you to a page where you can edit the pipeline settings. Jenkins stores all of the pipeline configuration in a Jenkinsfile, placed at the root of your repository. Whenever you make updates to the pipeline in the editor, Jenkins will commit the change to your Jenkinsfile.
Each pipeline will have a few distinct stages, such as Build, Test, or Deploy, which will contain individual steps. These can do all sorts of things, such as send emails, interact with other services like Jira and Git, and coordinate the flow of other steps, but you’ll most commonly use these to execute simple shell scripts. Any errors in the return of these scripts will cause the pipeline to fail.
Before adding any stages, you’ll want to configure the environment settings. Usually, you’ll use a Docker container, such as node:latest.
For this example, we'll build a Node-based web app. Simply adding two steps for npm install and npm run build is all that is necessary. Jenkins will execute these commands and move to the next stage with the build artifacts in place. For the testing phase, setting up a shell script to run Jest will require all tests to pass for the build to be successful.
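Put together, the Jenkinsfile the editor generates for a pipeline like this might look roughly as follows (the stage names and the `npx jest` invocation are illustrative; your test command may differ):

```groovy
pipeline {
    // Run every stage inside a fresh node:latest container
    agent { docker { image 'node:latest' } }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                // Any non-zero exit code here fails the pipeline
                sh 'npx jest'
            }
        }
    }
}
```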
Jenkins can run multiple stages in parallel, which is useful if you need to test on multiple different platforms. You can set up a “Test on Linux” and “Test on Windows” stage, and have them execute at the same time so that you aren’t waiting on one to start the other. If either one fails, the pipeline will still fail.
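In declarative pipeline syntax, a parallel test stage could be sketched like this (the `linux` and `windows` agent labels are placeholders for whatever build agents you've configured):

```groovy
stage('Test') {
    parallel {
        stage('Test on Linux') {
            agent { label 'linux' }
            steps { sh 'npm test' }
        }
        stage('Test on Windows') {
            agent { label 'windows' }
            steps { bat 'npm test' }
        }
    }
}
```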
Once you’re done editing, Jenkins will automatically start your pipeline. If you click on it for more info, you can watch the build progress through the stages.
The top bar will turn red if there’s an error, and green if everything is successful. If you’re running into errors, you can click on the offending stages to view the console output of the commands causing the pipeline to fail. You’ll also want to check your environment configuration to make sure all required tools are installed. (You may want to set up your own Docker container for your builds.)