Tools & Consulting for the webMethods® platform

WxConfigNG and containers

Integration with containers is one of the main differentiators of WxConfigNG. This includes the development side (incl. migration from traditional VMs) as well as the operations part.

The use of containers has been on the rise for a while now and, not surprisingly, there is also considerable interest in the context of webMethods Integration Server. So let’s talk about what this means in terms of requirements for configuration management.

Today I will cover the following aspects:

  • One image for all environments
  • Phased migration from a non-container approach (usually VMs), incl. a fall-back option
  • Layering images (product, organization, application)
  • Container orchestration platforms

Given the breadth of topics, this post needs to stay a bit on the superficial side, but more details will come. And if you want something to be covered sooner rather than later, or in particular depth, please feel free to send me an email at info@jahntech.com.

The bottom line is that WxConfigNG, with its proven approach, addresses all of those aspects for you.

One image

A very common question is how to handle different environment types when dealing with container images of Integration Server. It is of course possible to create individual images per environment type, but that comes with a number of downsides:

  • Effort and complexity: Having to set up and maintain different build pipelines is cumbersome at best. It doesn’t scale at all and introduces considerable risk of copy-paste errors, deploying the wrong image, and many other problems.
  • Unsuitable for orchestration platforms like Kubernetes: Production workloads will usually not run on manually controlled container instances. Instead, a platform like Kubernetes or OpenShift expects a “universal” image and injects the specifics needed at runtime.
  • Administration and storage: Somewhat related to the complexity aspect, keeping track of different images increases the administration effort and storage capacity needed. And many organizations will be required to keep older images for a considerable time due to legal and compliance regulations. Imagine the difference between 50 and 300 images to keep over multiple years.

So a solution is needed that allows the use of a single image across all environment types as well as on Kubernetes et al. Therefore, all relevant settings must be controllable from the outside. In particular, this means that quite a few settings must be kept in sync between Integration Server as the platform and the application code running on it.

WxConfigNG supports this directly. You can configure the platform and your application alike through the use of either environment variables or files. This way an integration with Kubernetes ConfigMaps is easily possible.
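
As a rough sketch of what that integration could look like, the values might be provided like this. The ConfigMap and deployment names are hypothetical, and the environment variables are the ones used in the pseudo-code further below:

    # Hypothetical sketch: store the environment-specific values in a ConfigMap ...
    kubectl create configmap crm-integration-config \
        --from-literal=IS_CONTAINER_ENV=true \
        --from-literal=K8S_CRM_DB_CLUSTER_HOSTNAME=db.prod.k8s.example.com

    # ... and expose its keys as environment variables to the
    # (hypothetical) Integration Server deployment
    kubectl set env deployment/crm-integration --from=configmap/crm-integration-config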

Phased migration

Most organizations have existing integration logic and are now looking to move to containers. With WxConfigNG you can do this by adding a virtual environment type “CONTAINER” to your application’s configuration settings. It is different from the normal “PROD” configuration in that it does not contain fixed values (e.g. to connect to a database), but refers to values injected from the outside, usually via environment variables. You also control via environment variables which configuration is used.

Here are a few “pseudo-code” files to illustrate this:

				
    # Main configuration file

    # Classic PROD configuration
    include={condition:envType=PROD}env-PROD.properties

    # Container configuration
    #  - Only loaded if the environment variable is found.
    #  - Otherwise the "conventional" configuration is used.
    include={condition:enVarExists=IS_CONTAINER_ENV}env-CONTAINER.properties


    # PROD configuration

    # Fixed value pointing to the database production system
    db.hostname=db.prod.corp.acme.com

    # Retrieve from Integration Server's built-in secrets manager
    db.password=[[secret{type=local}:crm_db_user]]


    # CONTAINER configuration

    # Retrieve from environment variable
    db.hostname={$env:K8S_CRM_DB_CLUSTER_HOSTNAME}

    # Retrieve from external secrets manager
    db.password=[[secret{type=external}:crm_db_user]]

With this setup you can very easily start your migration to containers. Pick a suitable application/integration and do the following:

  • Create a CONTAINER environment type configuration as shown above and test locally with a manually run container instance (see the sketch after this list). This covers the Integration Server and custom logic side of things.
  • Repeat in your container orchestration TEST environment. This covers the orchestration part.
  • Move to your orchestration PROD environment and run some manual transactions.
  • Re-configure your load-balancer to send a few real transactions to your newly set up cloud instance.
  • In case of errors, revert your load-balancer settings. Otherwise increase load as you see fit.
  • Run this small container instance in parallel to your normal infrastructure to learn the operational side.
  • Since the integration logic is unaware of all this, you can continue to adjust as needed by the business.
  • Move more workload to the container platform as needed. You can always go back to VMs, since your integration logic is agnostic of where it runs. This even includes new releases, which you could move back in a worst-case scenario.
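
To make the first step concrete, here is a minimal sketch of such a manual local test. The image name and hostname are hypothetical, and 5555 is Integration Server’s default primary port:

    # Hypothetical sketch: run the image locally and inject the values
    # that the CONTAINER configuration expects from the outside
    docker run --rm -p 5555:5555 \
        -e IS_CONTAINER_ENV=true \
        -e K8S_CRM_DB_CLUSTER_HOSTNAME=db.test.corp.acme.com \
        registry.example.com/acme/crm-integration:1.0.0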

In addition to the technical flexibility outlined above, you get another major advantage: reduced lock-in on the platform your integration logic is running on. That makes it very easy to mix and match and get the optimal combination of workload scalability and cost.

Layering images

An integration platform is a bit different from typical applications when it comes to containerization: it includes Integration Server as an additional layer. This is actually an advantage, in that it helps to decouple things in a clean way, because the overall complexity of any IT solution is mainly determined by the business requirements.

To deal with this situation the following approach has proven to be helpful. You basically layer the creation of your container images. That limits the complexity per step, and also ensures a cleaner architecture. (Think along the lines of separation of concerns.)

You start with the product image. This contains the Integration Server and the additional components needed (adapters etc.). It should not be customized any further to your organization’s needs.

Then comes the organization image. This contains organization-wide settings and common packages (e.g. utilities) for Integration Server. If your organization is big enough, you should consider department images as well. This is basically about encapsulating the organizational structure, where reasonable, and making dependency management easier.

Finally, we have the application image. This is where your actual application lives. So whenever you change something at this level, you only need to rebuild this image, not the product or the organization image.
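
To illustrate the layering, the Dockerfiles could reference each other roughly like this. All image names, tags, and paths are assumptions for the sake of the example:

    # Dockerfile for the organization image: builds on the unmodified product image
    FROM registry.example.com/webmethods/integrationserver:10.15
    # Organization-wide settings and common utility packages
    COPY common-packages/ /opt/softwareag/IntegrationServer/packages/

    # Dockerfile for the application image: builds on the organization image;
    # only this image needs to be rebuilt when the application changes
    FROM registry.example.com/acme/is-base:10.15
    COPY packages/CrmIntegration/ /opt/softwareag/IntegrationServer/packages/CrmIntegration/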

From an overall architecture perspective, it is important to have only one application running inside a container image. Many environments that have grown over time contain a lot of different integrations. Yes, technically nobody prevents you from throwing them all into a single container image. But operations will be hell. And if you follow the phased migration approach outlined above, a clean separation comes nicely as a by-product.

Orchestration platforms

The earlier paragraphs already mentioned orchestration platforms numerous times. What I would like to emphasize here is one of the fundamental changes they bring for many organizations. It is the concept of

Infrastructure as Code (IaC)

This means that you have to say goodbye to any manual configuration or code change. Those must all go through your CI/CD pipeline, or you are “dead”, because now Kubernetes (or similar) is the master of your infrastructure. If you interfere manually, you will end up with all sorts of problems, and some will likely not be confined to the technical arena but also have legal ramifications (compliance).

So it is mandatory that build, deployment, and configuration are 100% automated. If you are 99.9% automated, you are not fully automated. I know this sounds pedantic, but so are computers. And yes, a younger “me” has tried cutting corners, with not the best of outcomes.
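
As a hedged sketch of what “100% automated” means in practice (the registry, image name, and paths are placeholders, and the actual pipeline definition depends on your CI/CD system), every change runs through steps like these, triggered from version control:

    # Hypothetical sketch of the steps a CI/CD pipeline runs on every change
    docker build -t registry.example.com/acme/crm-integration:${VERSION} .
    docker push registry.example.com/acme/crm-integration:${VERSION}
    # Deployment manifests (incl. ConfigMaps) live in version control as well
    kubectl apply -f deploy/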

The good thing, though, is that those pains are not necessary. I hope I could convey that WxConfigNG is a flexible and powerful solution to make your container journey as smooth as possible. Please get in touch; I am more than happy to discuss specifics.

If you want me to write about other aspects of this topic, please leave a comment or send an email to info@jahntech.com. The same applies if you want to talk about how we at JahnTech can help you with your project.

© 2024 by Christoph Jahn. No unauthorized use or distribution permitted.
