Only when I started working on this article did I realize that, until now, I have not written a single blog post on CI/CD for webMethods Integration Server. No idea how that could happen. After all, it is one of my main areas of expertise and interest. A lot more will come, especially since I am currently working on an online course about this.
Scope and goal
This article aims to provide a conceptual overview of what I recommend for doing CI/CD with webMethods Integration Server. It is built on more than 15 years of experience with the topic. The approach has the following advantages:
- Conceptually simple with clear steps and responsibilities.
- Tool-agnostic and flexible.
- Compatible with standard approaches from e.g. the Java world.
So you will be able to fit in nicely with whatever process and tool chain your organization already has in place for other languages and systems (e.g. Spring Boot, C#). In other words: the only new thing is how to handle some specific technical details. But overall it is just another project for an existing system.
Not in scope is to teach you the basics of CI/CD, why you want to do it, etc. I may occasionally mention something of that nature. But it will be to set context or illustrate some specifics.
Overview
Here is how I do CI/CD for webMethods Integration Server. This approach has been battle-tested over the last 15 years and works for all scenarios I have seen so far.
- All developers have their own environment and commit their changes into the VCS, usually multiple times per day.
- The CI server (e.g. Jenkins) picks up those changes, builds the package, and runs the unit tests.
- On success, a package ZIP file is created and uploaded to the binary repository (e.g. Nexus, Artifactory). You can also use GitHub here, but be aware that it then plays a fundamentally different role compared to its usual VCS functionality.
- On further test environments the package ZIP file is installed from the binary repository and tests are executed. These environments do not interact with the VCS, that is solely the job of the CI server.
- Once all tests have passed, the package ZIP file gets promoted from snapshot to official version (often called “release”). That can happen automatically for Continuous Deployment. Or you do it manually for additional control over the timing of the update.
- Once release status has been reached, you use the exact same package ZIP file to create a new container image, update your running VMs, or deliver a new version to your customers. All this usually with automation.
Below I will cover those points in more detail.
Development environment
A crucial aspect of successful CI/CD is that all developers can work independently and in isolation. The traditional model for development with Integration Server was to have a shared DEV environment server and a locally installed development tool. This was envisioned in the mid to late 1990s and was effectively the only option at the time, given the available hardware. But those days are long gone and Integration Server is not particularly resource-hungry by today’s standards. So you can easily run an entire environment on a notebook (even in a VM).
For a while there was an additional hurdle in the form of licensing. But since the arrival of webMethods Service Designer that problem has mostly gone away. It has some limitations compared to a regular Integration Server, but for our scenario that is usually not a problem.
In terms of making a unified DEV environment available, I usually recommend a Linux VM these days. You avoid the license problems and costs that come with Windows. And Linux has really made some big steps forward on the desktop in recent years. I am writing this on Linux, which is now my primary desktop OS, having tried the switch multiple times since around 2003. As a starting point for selecting your Linux distribution I recommend either Fedora with the Cinnamon desktop, or Linux Mint (also with the Cinnamon desktop).
As to the VCS, today most people and organizations have settled on Git. The Git client in Eclipse, upon which Designer is based, is not my cup of tea. In fact I have not yet met anyone who likes it. So if you find it difficult to use, you are not alone. In terms of alternatives, I use TortoiseGit on Windows and VS Code on Linux. Yes, I really start VS Code in addition to Designer, and not only for the Git client: whenever there are plain files to work on (configuration, DSPs, etc.) I perceive VS Code to be easier than Designer.
Continuous Integration server
You can use whatever CI server is available and in most organizations that decision has already been made for you anyway. So Jenkins, Team City, GitLab, GitHub, etc. are all fine.
I have worked extensively with Jenkins most of the time and recently switched to a self-hosted GitLab instance. From a practical perspective the biggest difference for me was that with Jenkins I can run Integration Server on the same machine, while I need a dedicated (and therefore separate) GitLab runner. I have come to prefer the latter, because it enforces a separation. That makes things cleaner and helps to avoid unintentional dependencies. Also, I find it easier when using containers.
For me the job of a CI server entails the following activities:
- Detect a change in the VCS and trigger a new build, including the checkout from the VCS.
- Build the artifact in question, which is highly dependent on the language and system. In the case of Integration Server that means to compile the Java code and create a ZIP archive of the package.
If Java code is in the game, I consider Deployer unsuitable for proper CI/CD, because the compilation happens on the target system. This means that the byte-code can differ between environments. Plus it of course defeats the purpose of having a single artifact used across the entire pipeline.
But luckily the Java compilation requires no special handling. So it really comes down to that plus creating a ZIP file. There are additional nuances possible (e.g. removing the Java code before shipping), but for now that is sufficient.
- The package ZIP file needs to be installed on the CI instance of Integration Server. In the case of containers you can simply unzip it into the ./packages directory. It will be picked up during the server startup. For a “regular” server, you can either do the same and restart the server. Or you mimic what happens when installing a package through the UI. In that case you need to copy the ZIP file to ./replicate/inbound and then trigger the appropriate service (wm.server.packages:packageInstall). The net result is the same. What you need to ensure, though, is that all required packages are already there. A minimal sketch of both variants follows after this list.
- The next step is the execution of the unit tests. Since the WmTestSuite uses JUnit under the covers, you can invoke it like that. You need to specify the correct class for running non-Java tests, but that is not too difficult.
- Once the steps have been completed successfully, you need to upload the ZIP archive to your binary repository (see the second sketch below). From there, and only from there(!), it will be used in all further stages.
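To make the installation step a bit more tangible, here is a minimal sketch in Python covering both variants described above. The package name, paths, credentials and especially the input parameter of wm.server.packages:packageInstall are placeholders and assumptions of mine; check them against your own installation before relying on this.

```python
# Minimal sketch of installing a package ZIP on the CI instance.
# All names (MyPackage, paths, credentials) are placeholders.
import base64
import shutil
import urllib.request

IS_HOME = "/opt/softwareag/IntegrationServer"  # assumed installation directory


def install_for_container(zip_path: str) -> None:
    """Container case: unpack into ./packages; the server picks it up at startup."""
    # Assumes the ZIP contains the package content at its root.
    shutil.unpack_archive(zip_path, f"{IS_HOME}/packages/MyPackage", format="zip")


def install_on_running_server(zip_path: str, host: str = "localhost:5555",
                              user: str = "Administrator", password: str = "manage") -> None:
    """'Regular' server case: copy to ./replicate/inbound and trigger the install service."""
    shutil.copy(zip_path, f"{IS_HOME}/replicate/inbound/MyPackage.zip")
    # Trigger wm.server.packages:packageInstall via the standard /invoke URL.
    # The input parameter name ("file") is an assumption; verify the service
    # signature on your Integration Server before using this.
    url = f"http://{host}/invoke/wm.server.packages/packageInstall?file=MyPackage.zip"
    request = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(request) as response:
        print(response.status)
```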
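The upload itself is not Integration Server specific at all. Here is a similarly hedged sketch, assuming a Nexus- or Artifactory-style raw repository that accepts an HTTP PUT; the URL layout and credentials are again placeholders.

```python
# Minimal sketch of uploading the package ZIP to a binary repository.
# Repository URL layout and credentials are placeholders.
import base64
import urllib.request

REPO_BASE = "https://repo.example.com/repository/is-packages"  # hypothetical


def upload_package(zip_path: str, version: str) -> None:
    url = f"{REPO_BASE}/MyPackage/{version}/MyPackage-{version}.zip"
    with open(zip_path, "rb") as zip_file:
        data = zip_file.read()
    request = urllib.request.Request(url, data=data, method="PUT")
    request.add_header("Content-Type", "application/zip")
    token = base64.b64encode(b"ci-user:secret").decode()
    request.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(request) as response:
        # Nexus and Artifactory typically answer with 200 or 201 on success.
        print(response.status)
```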
Test environment and beyond
Once your package has been successfully tested for the first time on the CI instance of Integration Server, it will be made available in the binary repository. You can think of the latter as some kind of HTTP server with a lot of specific additional functionality. But at the very core the CI server has done an HTTP upload, and all further stages can now do a download of the package ZIP file. What exactly happens after that download depends on your setup.
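To illustrate: the download a later stage performs can be as simple as this (same placeholder URL and package name as in the upload sketch above).

```python
# Minimal sketch of the download a later stage performs against the binary repository.
import urllib.request


def download_package(version: str, target: str = "MyPackage.zip") -> None:
    url = f"https://repo.example.com/repository/is-packages/MyPackage/{version}/MyPackage-{version}.zip"
    urllib.request.urlretrieve(url, target)  # then install and test as on the CI instance
```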
Conceptually it is more or less the same as with the CI instance of Integration Server. Whatever your new target environment is, the package will somehow be made available, and the appropriate tests be executed. In contrast to the CI instance these are usually not unit tests, although you can also run those to check on your configuration management and how it handles different environment types.
More common, though, are things like integration tests, performance tests, and GUI tests. Acceptance tests usually involve people, so they are often done a bit outside this rhythm.
On each environment type, when the respective tests were successful, a “signal” is created. It tells the CI pipeline that another step was completed as expected, so that things can move forward.
Eventually we then reach the point where deployment to the production environment can be done. Whether or not this step is also fully automated is a decision that goes beyond this article. Especially since we probably also cross organizational boundaries and things like DevOps come into play.
Pipeline orchestration
The overall “workflow” of all this is called the CI/CD pipeline. It can be orchestrated in many different ways to achieve exactly the process that has been laid out above. 10-15 years ago that usually meant having a number of discrete jobs on your Jenkins server, each invoking the next one if everything went well. It was easy to get lost and your only chance for survival was a strictly enforced naming convention.
You could also combine things with an infrastructure-as-code tool, like Chef, Puppet, Ansible, etc. That has the advantage of a pull approach and reduces the requirement for all components to be available all the time. When you already have such a tool in place and know how to use it, that is a serious alternative. I did this for a corporate integration platform around 10 years ago and it worked great. But there is a lot of complexity with these tools. So CI/CD is certainly not a good use-case for getting started with them.
Today my general recommendation is to use the built-in pipeline capabilities of your CI server. If your CI server does not have this, get yourself a better one. Such a pipeline definition is basically an elaborate CI job description that already anticipates the kind of workflow laid out above. The details can vary a lot, of course.
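The concrete syntax depends on your CI server (Jenkinsfile, .gitlab-ci.yml, etc.), so instead of picking one, here is the gating logic such a pipeline typically encodes, sketched as plain Python. The stage functions are hypothetical stubs that stand in for the steps described earlier; they are not part of any product API.

```python
# Conceptual sketch of the gated flow a CI/CD pipeline encodes for an
# Integration Server package. Each stub corresponds to a stage/job in the CI server.

def build_package(commit: str) -> str:
    """Compile the Java code, create the package ZIP, return a snapshot version."""
    return f"1.0.0-SNAPSHOT+{commit[:8]}"

def run_unit_tests(version: str) -> None:
    """Install the package on the CI instance and run the unit tests."""

def upload_snapshot(version: str) -> None:
    """Upload the ZIP to the binary repository."""

def deploy_and_test(version: str, environment: str) -> None:
    """Install from the binary repository and run integration/performance/GUI tests."""

def promote_to_release(version: str) -> None:
    """Promote the snapshot to an official release in the binary repository."""

def run_pipeline(commit: str) -> None:
    version = build_package(commit)
    run_unit_tests(version)          # any failure raises and stops the pipeline
    upload_snapshot(version)
    for environment in ("TEST", "QA"):
        deploy_and_test(version, environment)
    promote_to_release(version)
    # Deployment to production may be automatic (Continuous Deployment) or a manual step.
```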
Configuration management
One of the most critical aspects of CI/CD is configuration management. Especially so when it comes to integration, since by definition there is a number of connections to external systems. So for every environment those connections need to be adjusted before the first transaction (in a very broad sense) hits the server; otherwise data may end up in the wrong database or an order may be created in the wrong system.
There are a number of aspects I want to look at in more detail, so let’s go.
Scope
As much as I like Integration Server and in almost all aspects think that it is a great product, there is one problem with how it handles the scope of configuration: some settings that are relevant to a specific package are not stored in that package but centrally for the entire server. The example that most people have probably encountered is ACLs. That is the scope issue that is forced upon us.
In addition there are often self-inflicted problems. What many have done in the past is to place all adapter connections into a single package and define no dependencies. I understand why this was done and it does indeed solve the problem of a reload cascade when updating the “wrong” package on a live server. I was told by a credible source that the deployment of a single package caused an outage of about 50 minutes on one of their systems.
So, as a conceptually dirty workaround, the common connections package is a possible approach. But it collides heavily with CI/CD. Plus, with today’s level of automation (even more so with containers) it is usually not a problem to have a cluster of servers and perform a rolling update.
From an application architecture perspective, the approach of a shared connections package creates a high amount of coupling. So I always recommend placing the connection in the package where the logic that uses it sits. This also makes it easier to segregate users and not have a “shared” account that creates orders, extracts data for some ETL process, and runs analytics. Not only is this bad architecture, it also has a negative impact on operations. What about the ETL process blocking database resources so that time-sensitive orders don’t get through?
Using MSR dynamic variables
The official tooling for handling different environments is the MSR (Microservices Runtime). For the purpose of this text it is basically a version of Integration Server that supports the automatic update of certain configuration values during server startup. So this means that before the first transaction can hit the server, the settings have been updated and hopefully contain the correct values.
The technical detail that I dislike a bit is that it uses a single file to drive those changes. So if you want to handle scope properly by having settings versioned in their respective package, you need to add a script that puts all this together (a minimal sketch follows below). But that is not a critical issue and relatively easy to fix.
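Here is a minimal sketch of such an assembly script. It assumes that each package carries its settings in a config/application.properties fragment inside the package, and that the merged result goes to the location where your MSR expects its application.properties; both are assumptions of mine for this example, not official conventions.

```python
# Collect per-package configuration fragments and merge them into the single
# properties file that the MSR reads at startup. The fragment location inside
# each package and the target path are assumptions; adjust them to your setup.
from pathlib import Path

IS_HOME = Path("/opt/softwareag/IntegrationServer")   # assumed installation directory
TARGET = IS_HOME / "application.properties"           # actual location depends on your setup


def merge_dynamic_variables() -> None:
    fragments = sorted((IS_HOME / "packages").glob("*/config/application.properties"))
    with TARGET.open("w", encoding="utf-8") as merged:
        for fragment in fragments:
            package_name = fragment.parent.parent.name
            merged.write(f"# --- settings from package {package_name} ---\n")
            merged.write(fragment.read_text(encoding="utf-8"))
            merged.write("\n")


if __name__ == "__main__":
    merge_dynamic_variables()
```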
The other challenge here is the license. If you have just a standard IS license without MSR, you cannot use this feature.
Custom tooling
I have personally always been a big fan of custom tooling in general. It gives you the flexibility to handle business requests in the most effective and efficient manner. But I am aware that some organizations have been burned here in the past. Although I would argue that this is not a problem of custom tooling per se, but of the fact that there was no proper governance.
Back to configuration management: I have spent more than 15 years on this topic and developed a tool that is used in more than 13k mission-critical installations. So this is absolutely doable. And the requirements for CI/CD are only a fraction of what my tool handled.
What I have made available so far as open source is a tool to update adapter connections (ART-based only). You can find it on GitHub and there are also two videos on my YouTube channel.
If you have the need for another specific scenario, please let me know.
In closing
There is a lot more to say. But this should give people a reasonable starting point. My core message is this:
Conceptually there is nothing special about Integration Server. You can follow the standard approach and use the tools and processes that probably already exist in your organization. In practical terms there are a few technical details that need attention. But that does not change anything on the conceptual side.
I will continue to publish content on CI/CD for Integration Server. If you want to be sure to get notified, please consider subscribing to my newsletter on the contact page.
If you want me to write about other aspects of this topic, please leave a comment or send an email to info@jahntech.com. The same applies if you want to talk about how we at JahnTech can help you with your project.
© 2024 by Christoph Jahn. No unauthorized use or distribution permitted.