One of the challenges when starting with CI/CD is that quite a few systems need to work together. This complexity makes it difficult to focus on the conceptual side of CI/CD: instead of learning how software development and delivery benefit, people have to wrestle with infrastructure details.
When I started with CI/CD back in 2008, I experienced first-hand what it means to fight your way through all the nitty-gritty details. The discrepancy between the relatively simple core ideas and what it takes to put them into practice is truly staggering. So one of the core goals of the course was to remove that hurdle.
Below you can read how this is done. I tried hard to imagine what I would have loved to have when I started this journey myself, so hopefully it is helpful for you as well.
A multitude of options
A big challenge was to decide how I wanted to provide the learning environment. Back in 2008 that wasn’t even a question. The only way was a number of VMs running in the company data center. Amazon EC2 was still in public beta, while AWS as an organization didn’t even exist yet. Notebooks and (affordable) desktops on the other hand were not nearly powerful enough to run things locally.
Today, on the other hand, we have a multitude of public cloud providers with lots of different offerings, and much more powerful hardware (esp. notebooks with very fast SSDs) available to individuals. As if that wasn’t enough, there are freely available virtualization solutions (e.g. VirtualBox, Proxmox, XCP-ng) and containers.
So how do you navigate this sea of possibilities? Here are the guiding principles I eventually came up with:
- Reduce infrastructure complexity as much as possible. The course is about CI/CD and not setting up your personal data center.
- Automate the setup as much as possible. Nobody wants to go through 10 pages of setup instructions. Plus it is virtually impossible to perform that many steps without making errors that later come back to bite you.
- No dependency on a particular cloud provider or type of infrastructure.
- Usable on personal hardware.
Another critical aspect was that I did not want to develop, let alone maintain, multiple solutions. So I had to find something that was usable as widely as possible. At the same time it had to be easy to set up for people who are not really interested in the infrastructure side of things.
So I had a number of factors to take into account. It took a while, but I think I found an approach that works for most (perhaps all) people.
The decision
As with so many decisions in designing IT systems, the answer lies in finding the right layers for your problem. This should be augmented with a “less is more” attitude: something that does not exist in the system (usually) cannot cause problems. Add ruthless automation and you end up with something robust and easy to maintain.
Core
With all those aspects in mind, I decided to go for a plain VM approach. Such a VM can be run on any cloud platform, virtualization technology, or even bare metal hardware. The actual programs will run as containers with Docker Compose as the underlying toolkit.
It goes without saying that the underlying OS will be Linux. Yes, for some use cases I prefer FreeBSD, but it does not make sense here. The main factor was the availability of Docker and ready-made images for all the software that is needed. More details on those programs below.
Update 19 Sep 2024: Since a quick test showed that using Debian 12 simply works, ignore the paragraph below. The learning environment will be based on Debian 12.
The only supported Linux distribution at the moment is Debian 11 Bullseye. While this is not the latest release, it will receive LTS updates until August 2026. I will eventually add support for Debian 12 Bookworm, but didn’t want things slowed down because of that. Feel free to test things on Debian 12, but please understand that I cannot support you there at this time.
Using Docker images for the actual applications was an easy decision. While I have always liked automating installations directly on the OS, that would have made things more complicated here. To handle the configuration and things like networking, Docker Compose is a great and lightweight tool. So this was a no-brainer, too.
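To give an idea of what that means in day-to-day use, here is a minimal sketch of the Docker Compose commands involved. The service name "gitlab" is only an assumption for illustration; the actual names in the learning environment may differ.

```bash
# Start the whole stack in the background (uses docker-compose.yml in the current directory)
docker compose up -d

# Show the state of all services defined in the Compose file
docker compose ps

# Follow the logs of a single service; the service name "gitlab" is a placeholder
docker compose logs -f gitlab

# Stop and remove the containers (named volumes and their data are kept)
docker compose down
```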
Applications
The next challenge was selecting the individual applications for the various tasks. I needed something for
- Version control
- CI pipeline orchestration
- Binary repository
- Local Docker registry
To kill two birds with one stone, I chose GitLab CE for version control and CI pipeline orchestration. While I have used Jenkins (and its predecessor Hudson before that) as my main CI server for many years, the integration that GitLab delivers was the deciding factor. GitLab also provides functionality for the two remaining tasks (binary repository and Docker registry), but both come with assumptions about their usage that made me choose other tools.
For the binary repo I went with Nexus Repository OSS. I have used it in other scenarios and it does the job well for me. In the past I had also used Artifactory from JFrog, but found recent open source versions a bit limited. I admit, though, that I have not re-evaluated Artifactory for this use-case. If you want to use it instead, feel free to adjust your setup accordingly. It will not change anything on the conceptual side.
The Docker registry was a bit tricky because of the self-signed certificate that is used. Setting that up with GitLab or Nexus, which both provide registry functionality, turned out to be a nightmare, and I spent quite a few hours trying different things. In the end I decided to use the Registry from Docker. Its functionality is sufficient here, and making it work with the self-signed certificate was relatively straightforward.
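To make the overall shape a bit more concrete, here is a heavily simplified sketch of what such a setup could look like. All hostnames, ports, and paths are assumptions for illustration only; the actual learning environment is more elaborate (persistent volumes, proper configuration, etc.) and is created by the installer for you.

```bash
# Generate a self-signed certificate for the registry (hostname "registry.local" is an assumption)
mkdir -p certs
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=registry.local" \
  -keyout certs/registry.key -out certs/registry.crt

# Make the Docker daemon trust the certificate for this registry host and port
sudo mkdir -p /etc/docker/certs.d/registry.local:5000
sudo cp certs/registry.crt /etc/docker/certs.d/registry.local:5000/ca.crt

# Write a minimal Compose file with the three applications
cat > docker-compose.yml <<'EOF'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    ports: ["8080:80", "2222:22"]
  nexus:
    image: sonatype/nexus3:latest
    ports: ["8081:8081"]
  registry:
    image: registry:2
    ports: ["5000:5000"]
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry.crt
      REGISTRY_HTTP_TLS_KEY: /certs/registry.key
    volumes:
      - ./certs:/certs:ro
EOF

# Bring everything up in the background
docker compose up -d
```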
How it works
Let’s now dive a little bit into how you get started on a practical level. I will not go into all the details, but you should be able to form a picture in your mind of what we are talking about.
Prepare Debian 12 system
Your first step is to get yourself a fresh Debian 12 environment. While in theory you can also re-use an existing system, this is not supported, and the JahnTech Learning Environment Installer assumes it is running on a fresh system. So things might not work and/or you can lose data if you execute the installer on an existing system.
As to sizing, you will need approximately 16 GB of RAM and 50 GB of disk space. Those values include some buffer, and things might work with lower settings. However, these are the recommended values, and with reasonably current hardware they should not pose a problem. Please let me know if this is an issue for you.
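If you are unsure whether your system meets these values, a quick check on the command line is enough (the numbers above are recommendations, not hard limits):

```bash
# Debian release (should report version 12 / bookworm)
grep -E 'PRETTY_NAME|VERSION_ID' /etc/os-release

# Available memory (aim for roughly 16 GB)
free -h

# Free disk space on the root filesystem (aim for roughly 50 GB)
df -h /

# Number of CPU cores
nproc
```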
The Debian system can be a VM (recommended) or bare metal. WSL2 (Windows Subsystem for Linux 2) will require testing, and I cannot promise anything right now. If you are experienced enough with Linux, that might be a good option, though.
As to the Debian 12 system, it can be with or without a graphical desktop. You don’t need a desktop environment; the command line is enough. If you don’t already have a virtualization tool in use, I can recommend the free version of VirtualBox as an easy starting point.
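If you go the VirtualBox route and prefer the command line over the GUI, creating a suitable VM could look roughly like this. The VM name, disk file, and ISO location are assumptions; the graphical interface works just as well.

```bash
# Create and register a new VM for Debian 12 (name "cicd-lab" is a placeholder)
VBoxManage createvm --name cicd-lab --ostype Debian_64 --register

# 16 GB RAM, 4 CPUs, NAT networking
VBoxManage modifyvm cicd-lab --memory 16384 --cpus 4 --nic1 nat

# Create a 50 GB disk and attach it together with the Debian installer ISO
VBoxManage createmedium disk --filename cicd-lab.vdi --size 51200
VBoxManage storagectl cicd-lab --name SATA --add sata
VBoxManage storageattach cicd-lab --storagectl SATA --port 0 --device 0 --type hdd --medium cicd-lab.vdi
VBoxManage storageattach cicd-lab --storagectl SATA --port 1 --device 0 --type dvddrive --medium debian-12-netinst.iso

# Start the VM and run through the normal Debian installation
VBoxManage startvm cicd-lab
```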
Running the installer
Download the JahnTech Learning Environment Installer and run it. All necessary programs will be installed, including the current version of Docker from the official Docker website. If you have Docker installed from the regular Debian packages, those will be removed automatically and without asking for confirmation.
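On the command line, that boils down to something like the following. The download URL and file name are placeholders only; use whatever the course material points you to.

```bash
# Placeholder URL and file name - take the real ones from the course material
curl -fsSL https://example.com/jahntech-le-installer.sh -o le-installer.sh

# Inspect the script before running it, then execute it with root privileges
less le-installer.sh
sudo bash le-installer.sh
```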
After the installation of Docker and other required programs, the various components will be started and configured. No manual work is necessary. Depending on the power of the underlying hardware, this will take a while. Also, the various Docker images (about 2.5 GB in total) need to be downloaded and extracted during the initial run.
Overall, you can expect at least 5-10 minutes for the initial run, with the majority of the time needed for GitLab. With a slow internet connection or slow hardware, considerably longer times are also possible. Subsequent startups will be considerably faster. Please be aware that on every startup a check for newer Docker images is performed. Especially for GitLab this can add a substantial amount of time (5 minutes or more).
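If you want to see what is going on, or trigger the image update manually, the following commands are useful (assuming the stack is managed via Docker Compose, as described above):

```bash
# Check for and download newer versions of the images used in the Compose file
docker compose pull

# Show how much disk space images, containers and volumes are using
docker system df

# Recreate any containers that are running an outdated image
docker compose up -d
```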
In total the installer does the following things (a rough sketch follows after the list):
- Install required programs
- Pull Docker images and start containers
- Configure programs with users, project settings etc.
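The following sketch shows roughly what that means on a Debian 12 system. It is not the actual installer, just an illustration of the kind of steps it performs; the package names follow the official Docker installation notes.

```bash
# Remove Docker-related packages from the Debian repositories, if present
sudo apt-get remove -y docker.io docker-doc docker-compose podman-docker containerd runc

# Install the current Docker Engine and Compose plugin via Docker's convenience script
curl -fsSL https://get.docker.com | sudo sh

# Pull the required images and start the containers
docker compose pull
docker compose up -d

# Finally, users, projects, credentials etc. are configured via the applications'
# APIs - omitted here, since this part is specific to the learning environment
```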
Ready to go - almost
From a CI/CD environment point of view things are ready. What you need to do in addition is get yourself a development environment for Integration Server. I recommend the Service Designer on either Windows or Linux.
In closing
The goal of this article was to give you an overview of how the learning environment looks and how to set it up. Of course I am looking forward to your thoughts and feedback. Did I miss anything that would add value for you?
I will continue to publish information about the learning environment as well as the course. If you want to be sure to get all blog posts etc., please subscribe to my newsletter.
If you want me to write about other aspects of this topic, please leave a comment or send an email to info@jahntech.com. The same applies if you want to talk about how we at JahnTech can help you with your project.
© 2024, 2025 by JahnTech GmbH and/or Christoph Jahn. No unauthorized use or distribution permitted.