Tools & Consulting for the webMethods® platform


WxConfigNG and containers

Integration with containers is one of the main differentiators of WxConfigNG. This includes the development side (incl. migration from traditional VMs) as well as the operations part.

The use of containers has been on the rise for a while now and, not surprisingly, there is also considerable interest in the context of webMethods Integration Server. So let’s talk about what this means in terms of requirements for configuration management.

Today I will cover the following aspects:

  • One image for all environments
  • Phased migration from a non-container approach (usually VMs), incl. a fall-back option
  • Layering images (product, company, application)
  • Container orchestration platforms

Given the breadth of topics, this post needs to stay a bit on the superficial side, but more details will come. And if you want something covered sooner rather than later, or in particular depth, please feel free to send me an email.

The bottom line is that WxConfigNG, with its proven approach, addresses all of those aspects for you.

One image

A very common question is how to handle different environment types when dealing with container images of Integration Server. It is of course possible to create individual images per environment type, but that comes with a number of downsides:

  • Effort and complexity: Having to set up and maintain different build pipelines is cumbersome at best. It doesn’t scale at all and introduces considerable risk of copy-paste errors, deploying the wrong image, and similar mistakes.
  • Unsuitable for orchestration platforms like Kubernetes: Production workloads will usually not run on manually controlled container instances. Instead, platforms like Kubernetes or OpenShift expect a “universal” image and inject the environment-specific settings at runtime.
  • Administration and storage: Somewhat related to the complexity aspect, keeping track of different images increases the administration effort and the storage capacity needed. And many organizations will be required to keep older images for a considerable time due to legal and compliance regulations. Imagine the difference between 50 and 300 images to keep over multiple years.

So a solution is needed that allows the use of a single image across all environment types and on Kubernetes et al. Therefore all relevant settings must be controllable from the outside. In particular, this means that quite a few settings must be kept in sync between Integration Server as a platform and the application code running on it.

WxConfigNG supports this directly. You can configure the platform and your application alike through the use of either environment variables or files. This way an integration with Kubernetes ConfigMaps is easily possible.
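To make the ConfigMap integration concrete, here is a rough sketch of what the Kubernetes side could look like. All names (wx-app-config, WX_ENV_TYPE, DB_URL, the image path) are invented for illustration; they are not prescribed by WxConfigNG:

```yaml
# Hypothetical ConfigMap holding the environment-specific settings
apiVersion: v1
kind: ConfigMap
metadata:
  name: wx-app-config
data:
  WX_ENV_TYPE: "CONTAINER"
  DB_URL: "jdbc:postgresql://db:5432/app"
---
# Deployment snippet injecting the ConfigMap as environment variables
apiVersion: apps/v1
kind: Deployment
metadata:
  name: integration-server
spec:
  selector:
    matchLabels:
      app: integration-server
  template:
    metadata:
      labels:
        app: integration-server
    spec:
      containers:
        - name: is
          image: registry.example.com/is-app:1.0
          envFrom:
            - configMapRef:
                name: wx-app-config
```

The same values could alternatively be mounted as files, which is the second injection mechanism mentioned above.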

Phased migration

Most organizations have existing integration logic and are now looking to move to containers. With WxConfigNG you can do this by adding a virtual environment type “CONTAINER” to your application’s configuration settings. This differs from the normal “PROD” configuration in that it does not contain fixed values (e.g. to connect to a database), but refers to values injected from the outside, usually via environment variables. And you also control via environment variables which configuration is used.

Here are a few “pseudo-code” files to illustrate this (keys, values, and syntax are illustrative, not the actual WxConfigNG format):

	# Main configuration file

	# Classic PROD configuration
	environment.PROD = prod.properties

	# Container configuration
	#  - Only loaded if environment variable found.
	#  - Otherwise the "conventional" configuration is used.
	environment.CONTAINER = container.properties

	# PROD configuration

	# Fixed value pointing to database production system
	db.url = jdbc:postgresql://db-prod.internal:5432/app

	# Retrieve from Integration Server's built-in secrets manager
	db.password = ${secret:db-prod-password}

	# CONTAINER configuration

	# Retrieve from environment variable
	db.url = ${env:DB_URL}

	# Retrieve from external secrets manager
	db.password = ${vault:db-password}
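The selection mechanism described above can be sketched in a few lines of Python. This is purely an illustration under assumed names: WX_ENV_TYPE, the ${env:...} syntax, and all keys are invented here and are not the actual WxConfigNG API.

```python
import os

# Hypothetical sketch of environment-driven configuration selection.
# All names below are invented for illustration.

PROD_CONFIG = {
    # Fixed value pointing to the production database
    "db.url": "jdbc:postgresql://db-prod.internal:5432/app",
}

CONTAINER_CONFIG = {
    # Value is injected from the outside via an environment variable
    "db.url": "${env:DB_URL}",
}


def resolve(value: str, env: dict) -> str:
    """Replace a ${env:NAME} reference with the value from the environment."""
    if value.startswith("${env:") and value.endswith("}"):
        name = value[len("${env:"):-1]
        return env[name]
    return value


def load_config(env: dict) -> dict:
    # The CONTAINER configuration is only used if the selector variable
    # is set; otherwise the "conventional" PROD configuration applies.
    base = CONTAINER_CONFIG if env.get("WX_ENV_TYPE") == "CONTAINER" else PROD_CONFIG
    return {key: resolve(val, env) for key, val in base.items()}


# Classic VM deployment: fixed value from the PROD configuration
print(load_config({})["db.url"])

# Container deployment: value injected from the outside
print(load_config({"WX_ENV_TYPE": "CONTAINER",
                   "DB_URL": "jdbc:postgresql://db:5432/app"})["db.url"])
```

The integration logic never sees the difference: it always reads `db.url`, regardless of where the value came from.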

With this setup you can very easily start your migration to containers. Pick a suitable application/integration and do the following:

  • Create a CONTAINER environment type configuration as shown above and test locally with a manually run container instance. This covers the Integration Server and custom logic side of things.
  • Repeat in your container orchestration TEST environment. This covers the orchestration part.
  • Move to your orchestration PROD environment and run some manual transactions.
  • Re-configure your load-balancer to send a few real transactions to your newly set up cloud instance.
  • In case of errors, revert your load-balancer settings. Otherwise increase load as you see fit.
  • Run this small container instance in parallel to your normal infrastructure to learn the operational side.
  • Since the integration logic is unaware of all this, you can continue to adjust as needed by the business.
  • Move more workload to the container platform as needed. You can always go back to VMs, since your integration logic is agnostic of where it runs. This includes even new releases, from which you could go back in a worst-case scenario.
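For the first step above, the local test with a manually run container instance could look roughly like this. The image name and variable names are invented for illustration:

```shell
# Hypothetical local smoke test: inject the CONTAINER settings by hand
docker run --rm \
  -e WX_ENV_TYPE=CONTAINER \
  -e DB_URL=jdbc:postgresql://db-test:5432/app \
  -p 5555:5555 \
  registry.example.com/is-app:1.0
```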

In addition to the technical flexibility outlined above, you get another major advantage: reduced lock-in on the platform your integration logic is running on. That makes it very easy to mix and match and get the optimal combination of workload scalability and cost.

Layering images

An integration platform is a bit different from typical applications when it comes to containerization, because it includes the Integration Server as an additional layer. This is actually an advantage in that it helps to decouple things in a clean way: the overall complexity of any IT solution is mainly determined by the business requirements, so anything that cleanly separates infrastructure concerns from business logic helps.

To deal with this situation the following approach has proven to be helpful. You basically layer the creation of your container images. That limits the complexity per step, and also ensures a cleaner architecture. (Think along the lines of separation of concerns.)

You start with the product image. This contains the Integration Server and the additional components needed (adapters etc.). It should not yet be customized to your organization’s needs.

Then comes the organization image. This contains organization-wide settings and common packages (e.g. utilities) for Integration Server. If your organization is big enough, you should consider department images as well. This is basically about encapsulating parts of the organization, where reasonable, and making dependency management easier.

Finally we have the application image. This is where your actual application lives. So whenever you change something at this level, you only need to rebuild this image, but neither the product nor the organization image.
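The layering described above can be sketched as a chain of base images. Registry paths, package names, and install locations are invented for illustration:

```dockerfile
# --- Dockerfile.organization ---------------------------------------
# Builds on the uncustomized product image (Integration Server plus
# adapters), adding organization-wide settings and common packages.
FROM registry.example.com/is-product:10.15
COPY packages/OrgUtils /opt/softwareag/IntegrationServer/packages/OrgUtils

# --- Dockerfile.application ----------------------------------------
# Builds on the organization image; only this layer is rebuilt when
# the application changes.
FROM registry.example.com/is-organization:1.4
COPY packages/MyIntegration /opt/softwareag/IntegrationServer/packages/MyIntegration
```

Each `FROM` line lives in its own Dockerfile and build pipeline, so a change in the application triggers only the cheapest, lowest-risk rebuild.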

From an overall architecture perspective it is important to only have one application running inside a container image. Many grown environments contain a lot of different integrations. Yes, technically nobody prevents you from throwing them all into a single container image. But operations will be hell. And if you follow the phased migration approach outlined above, a clean separation will come nicely as a by-product.

Orchestration platforms

The earlier paragraphs mentioned orchestration platforms numerous times already. What I would like to emphasize here is one of the fundamental changes those platforms bring for many organizations. It is the concept of

Infrastructure as Code (IaC)

This means that you have to say goodbye to any manual configuration or code change. Those must all go through your CI/CD pipeline, or you are “dead”. Because now Kubernetes (or a similar platform) is the master of your infrastructure. If you interfere manually, you will end up with all sorts of problems, and some will likely not be confined to the technical arena but also have legal ramifications (compliance).

So it is mandatory that build, deployment, and configuration are 100% automated. If you are 99.9% automated, you are not fully automated. I know this sounds pedantic, but so are computers. And yes, a younger “me” has tried cutting corners, with not the best of outcomes.

The good thing, though, is that those pains are not necessary. I hope I could convey that WxConfigNG is a flexible and powerful solution to make your container journey as smooth as possible. Please get in touch and I am more than happy to discuss specifics.

If you want me to write about other aspects of this topic, please leave a comment or send an email. The same applies if you want to talk about how we at JahnTech can help you with your project.

© 2024 by Christoph Jahn. No unauthorized use or distribution permitted.
