Deployment options for webMethods Integration

There are several ways to deploy code to webMethods Integration. Let's look at how things work under the hood, some common pitfalls, and how you can easily automate things. This knowledge will allow you to deliver more value to projects.

Deployment is the activity of bringing code onto a system so that it can perform its designated duties. In other words: without deployment, all you have produced is cost; the organization does not yet get any value from your wonderful solution. Yet I have often seen deployment treated as something relatively trivial (or not thought about at all until it is almost too late). Sometimes developers also consider it not really their concern, but the job of a release engineer.

I disagree with both views. Deployment is not only a critical aspect of operations; it affects the entire life cycle. Design choices at the very beginning of a project often have a fundamental impact on how easy or difficult it is to deploy an application. This starts with new developers joining the team. If they need to work through a multi-page list of manual activities before they can even start development, that is already a very bad sign. Things only get worse from here. CI/CD is by definition impossible, because it requires that things be automated. And then there are the various situations (incl. emergencies) when you need to deploy onto your production system.

Last but not least I would like to mention that not too many people have truly mastered this topic. So it might be a good opportunity to stand out and give your webMethods career a boost.

Scope and goal

The main goal of this article is to present the main approaches to deployment on webMethods Integration. There are some differences between the classic Integration Server and the Microservices Runtime. Where necessary I will mention those explicitly; otherwise the content is relevant for both product variants.

In addition there will be some side notes, mainly about my own practical experiences and things I learned from others over the years. This blog post does not aim to thoroughly discuss the conceptual side of deployment and how it is linked to other parts of the SDLC (Software Development Life Cycle). While it is not meant to cover CI/CD explicitly, there will be connections every now and then. Also, towards the end there will be an outlook on deployment automation.

This article assumes you have reasonable experience with webMethods Integration. 

How things work internally

Before we can look at different approaches to doing deployments in practice, we need to understand how webMethods Integration handles things internally. The core concept here is packages. They are the way that code (and additional things that are needed, more on that later) is organized. You can easily see this when you look at the main log file (server.log) during server startup. The two main blocks in the log file are package loading, and then package startup. The logic that identifies the to-be-loaded and started packages goes over the contents of the ./packages sub-directory. All directories that contain the necessary files (esp. manifest.v3) and sub-directories (esp. ./ns for namespace) are considered to represent/contain packages and are processed accordingly.
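
To make this more tangible, here is a sketch of what the server scans for, together with a small check you could run yourself. The manifest.v3 file and the ./ns directory are described above; the other directory names (code, pub) are what typical packages contain and are only meant as orientation, not a formal specification:

# Minimal on-disk layout of an Integration Server package
# ("MyPackage" is a placeholder name):
#
# $WEBMETHODS_HOME/IntegrationServer/packages/
#   MyPackage/
#     manifest.v3   <- package descriptor (version, dependencies, startup services)
#     ns/           <- namespace tree with folders, services, document types
#     code/         <- Java classes and jars (if any)
#     pub/          <- web resources (if any)

# Quick check which directories the server will treat as packages:
cd "$WEBMETHODS_HOME/IntegrationServer/packages"
for dir in */; do
  if [ -f "$dir/manifest.v3" ] && [ -d "$dir/ns" ]; then
    echo "Package candidate: $dir"
  fi
done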

What does that mean for package deployment? You basically have to do one of the following things:

  • Create the required sub-directories and files in the ./packages directory and either
    • perform a server restart, or
    • let the server know that a new package exists and should be activated
  • Copy a ZIP file containing the package contents into ./replicate/inbound and tell the server to install it
  • On Microservices Runtime only:
    • You can copy a ZIP file containing the package contents into ./replicate/autodeploy and it will automatically be deployed. There is some scheduling involved, so it might take a few seconds to kick in (see the sketch after this list).
    • This feature is turned off by default. To enable it, set watt.server.autodeploy.enabled=true in the “Extended Settings” menu of the admin web UI or directly in the ./config/server.cnf file (restart required in the latter case).
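
To illustrate the auto-deploy option, here is a minimal sketch. The paths follow the standard installation layout; $WEBMETHODS_HOME, the package name, and the log file location are placeholders to adjust for your environment:

# Sketch: deploy a package ZIP via the MSR auto-deploy feature.
# Assumes watt.server.autodeploy.enabled=true is already set and that
# $WEBMETHODS_HOME points to the installation directory.
cp /tmp/MyPackage.zip "$WEBMETHODS_HOME/IntegrationServer/replicate/autodeploy/"

# A scheduled task picks up the file, so allow a few seconds before
# verifying, e.g. in the admin web UI or the server log
# (log path may differ per product variant/instance).
sleep 30
grep -i "MyPackage" "$WEBMETHODS_HOME/IntegrationServer/logs/server.log" | tail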

Whatever you do for deployment (incl. the use of webMethods Deployer), it comes down to one of those approaches under the covers. In the case of Deployer there is some additional file locking that might interfere with file-based automation, but that is a separate topic.

You might now say “Hang on, what about partial deployments?”. The most important scenario for partial deployments is the installation of product fixes. Updates to the WmPublic and WmRoot packages usually contain code changes. Sometimes they also contain updates of .jar files. However, none of that changes things on the conceptual level; partial deployments for your own code are covered in more detail in the next section. If you inspect the ZIP files that contain product fixes, you will see some additional files. Amongst other things, their job is to ensure traceability by displaying the fixes in the server web UI.
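
You can easily see those extra files for yourself by listing the archive contents (the file name below is a hypothetical placeholder):

# Sketch: peek into a product fix archive to see the metadata files
# that accompany the actual code changes.
unzip -l WmPublic_fix.zip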

On development systems, there is also the dynamic reload of discrete services due to VCS operations. This is triggered by webMethods Designer.

Full vs. partial deployment

You can either deploy an entire package (full deployment) or parts of it (partial deployment). The default approach in the official tooling is “full”. With tooling I mainly refer to the server web UI and the Deployer GUI. Users are guided towards selecting packages in their entirety, and consequently have them deployed as such. On the other hand, you can also select individual services, document types, triggers, etc. for deployment.

In the past, many organizations have approached this with full deployments for the initial release, and often also for the following bigger releases. Everything in between was done using partial deployments. I must confess that this feels natural. It even creates a feeling of reduced risk: after all, I am not changing everything but a carefully selected subset, which seems to limit the blast radius should something really go wrong.

Sorry to disappoint you, but your feeling betrays you here. It is actually the other way around: a full deployment is usually the safer option. Let me explain this counter-intuitive statement.

(Because some people had asked: Yes, it is conceivable that under very specific circumstances a partial deployment is safer. But it is one of those topics that fall into the category of “If you have to ask, you really shouldn’t do it”. Plus in all my time with Integration Server since 2002, I have not come across one such scenario. That doesn’t mean it’s not possible, but certainly pretty rare.)

Deployment failure

All those considerations are relevant because of the risk of a deployment failure. So we first need to define what that is. For the purpose of this article, it means all problems that can be traced back to a given deployment. This includes:

  • Any run-time issue caused by the deployment. Here we need to distinguish three types:
    • During the deployment
    • Directly after the deployment
    • Days or weeks after the deployment
  • Running transactions that do not complete or get corrupted because of a deployment
  • Non-completion of the deployment itself
  • Introduction of circular dependencies
  • Missing artifacts (services, adapter connections, configuration files, etc.)
  • External dependencies like entries in mapping tables in databases
  • Server startup issues because of missing package dependencies
  • Disabled adapter connections due to wrong parameters (e.g. wrong hostname or user name)

The list above is by no means complete. But it covers the most common issues.

Some of those can be attributed to user error, e.g. the wrong connection parameters for external systems were somehow used. Yet we have to ask ourselves: how was it possible to make this error in the first place, and why was it not detected before it reached the production system? In that light I would argue that it is a combination of problems with configuration management and test automation. In other words: it is not primarily human error.

Other issues originate very clearly at the business process level, like not handling running transactions properly. If I want to deploy, I need to be able to either suspend those transactions (the details depend heavily on the implementation) or move them to another Integration Server instance. All that while keeping in mind that data must stay consistent with the updated packages that are about to be deployed. These aspects need to be addressed already during the design phase. It is a very advanced topic and deserves its own blog post (or more).

Overall, it is hard to find issues that are truly caused by human error, as long as proper processes and tooling are in place.

Failure types

Knowing what can go wrong is one thing (and certainly better than not knowing). But what we are really after is, of course, dealing with the issues and making them disappear. For a better understanding of how to address the various points, it is helpful to group them. What has helped me greatly in the past is to use the type of the root cause for that grouping. I know this sounds abstract, so let’s look at an example.

If we take the point of missing artifacts from the list above, it is pretty likely a user error. Either someone forgot to put the artifact in question onto the list of to-be-deployed parts, or someone didn’t follow the list properly. In both cases an individual made an error, so the “root cause type” would be something like “human error”. This is also the type of error we want to avoid by eliminating the activity altogether: something that doesn’t need to be done cannot cause problems.

Another kind of failure can be found in the adapter connection that gets disabled due to wrong parameters. Here we have a problem with configuration management.

Why partial deployments are risky

So why is a partial deployment such a risky thing? The core reason is that we move away from one of the core aspects of reliable IT operations: automation. A partial deployment, by definition, means the creation of a new deployment set. This new deployment set may be incomplete or contain other errors. It may also have hidden dependencies on artifacts that are not part of the deployment set. A typical example for the latter would be an SQL script that adds a column to a table, while the script that creates the table in the first place never made it to the target environment.

Of course there are ways to mitigate this example. The same goes for probably every other scenario I can come up with. But the question is: what do we gain then? Is it not much simpler to solve the full deployment once and for all? Not only does that mean that all future situations are addressed. It also frees people’s minds of complexity. And it gets tested very frequently, incl. on lower-level environments. And that is where you want the problems to show up as much as possible.

If you don’t believe me, look at the direction the rest of the world is moving in. Yes, it feels uncomfortable in the beginning. But how relaxed are you today with your partial deployments? I have yet to meet someone who truthfully says “I don’t care if you perform a partial deployment on Friday afternoon”.

The “path towards deployment automation” will be covered in a separate section below.

Different approaches

Given how well established and widely deployed webMethods Integration is, a number of ways exist to perform deployments. Here is a list of approaches that I am aware of:

  • Manual deployment via admin web UI
  • webMethods Deployer
    • Manual mode
    • Scripted
    • Asset Build Environment
  • REST API (see the example after this list)
  • Custom solution using internal APIs
    • Shell script
    • Custom Ant tasks
  • webMethods Package Manager
  • Auto-deploy on Microservices Runtime
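
To give at least a taste of the REST API option, here is a hedged sketch. An /admin/package endpoint exists on recent versions (esp. the Microservices Runtime), but the exact endpoints and response formats depend on your version, so please verify against the product documentation. Host, port, and credentials are placeholders:

# Sketch: read package information via the administration REST API.
curl -u 'Administrator:manage' \
  --url "http://localhost:5555/admin/package/MyPackage"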

I will not go into details here; that would be too much. But let’s briefly look at the two approaches that I have seen most in practice. There is the “old” school of thought that looks at the solution as a monolithic application. This is not so much about an anti-microservices view. But it moves the discrete packages a bit into the background and looks at the “bigger picture”. This is how things were seen in the late 2000s and you can recognize it in the design of webMethods Deployer. The latter has the great advantage that it can handle additional webMethods products. So if you have more than just packages for webMethods Integration, please do consider using Deployer.

Then there is the perspective that an application is explicitly comprised of a number of packages that each perform their duties. Many packages are used across multiple applications, simply because in a given organization there tends to be a lot of overlap. Example: if two applications deal with the same custom database, I should really have a single package as the interface to that database. Developing duplicate code, each in the context of its main application, is a recipe for disaster (plus highly inefficient). Please check out this blog post for more details on having a single package per task and how it moves you towards a true integration platform.

Given that I have developed applications purely via webMethods Integration for the last 10+ years, my preferred approach has been the second one from above. In terms of deployment I have used some custom scripts. They are easy to write, and you can see an example further below. I will concede that this seems to be a bit of extra work relative to using Deployer. But the latter is conceptually not suitable for a per-package approach, and that would have caused so much (also manual) work that writing my own scripts was much easier overall.

Going forward, the new webMethods Package Manager is probably the way to go. I truly like the elegance and simplicity of its approach. The one thing to check is whether or not it matches your exact needs. To my understanding it is under active development (as of this writing), so re-checking every now and then is a good idea.

The auto-deploy feature is positioned in the context of Docker images. And I agree that it sounds tempting not to have to re-create the image on every change of a single package. But there is a good chance that your auditors will not allow it. After all, it would not be clear exactly what code was running when. It is still a great feature in the context of manual testing. But for CI/CD in general and production in particular I would suggest being careful.

Deployment made simple

As with so many things in life, the key to success with deployment is keeping things as simple as possible. The core aspect here is: no exceptions. Solve a given problem in a clean/elegant/simple way once and for all. Translated into a practical outcome, this means that I only have to find a way to safely handle full deployments and address all related topics. This primarily includes:

  • Configuration management: Ensure that settings and values are correct for all environments. Also, there are usually various requirements on the OS, the DBMS, local file system, etc. Those must be formally specified and enforced.
  • Dependency management: Ensure that dependencies of all sorts are completely specified in a way that makes them machine-enforceable. The obvious candidates are package dependencies, which in some organizations are deliberately ignored.
  • Data migration: A classic example is the addition of new fields to entities (in relational terms: additional columns to tables). This must be handled not only in the associated adapter services etc., but also in the underlying databases. For RDBMS, programs like Liquibase have proven to be a very solid solution (see the sketch after this list).
  • Pre-/post-processing: Sometimes there are very special additional operations that must be applied. There must be some kind of hook (often called “user exit”), so that you can plug in such things.
  • Downtimes: A deployment means that the instance in question must not process requests while the deployment is running. Ever. I know that you can sometimes work around this and “design” deployments such that no downtime is necessary. Yet there are two reasons why that is a really bad idea. Firstly, there are cases like an OS reboot that require some kind of approach for handling downtimes anyway. So it would be foolish not to re-use/leverage what is already there. Secondly, introducing custom workarounds not only requires considerable effort and probably screws up the architecture, but it also introduces additional deployment risks.
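
For the data migration point, here is a minimal sketch of how Liquibase could be invoked as a step of the deployment. The changelog path, JDBC URL, and credentials are placeholders, and the exact CLI syntax depends on your Liquibase version:

# Sketch: apply pending database changes before activating the new
# package version. All parameters are placeholders.
liquibase update \
  --changelog-file=db/changelog-master.xml \
  --url="jdbc:postgresql://db.example.com:5432/appdb" \
  --username="$DB_USER" \
  --password="$DB_PASSWORD"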

All of the topics mentioned above are universal and not specific to webMethods Integration. That means they are well documented on a conceptual level and have been solved for many scenarios. Since webMethods Integration is Java based and works off of a regular file system, a ton of stuff can be re-used very easily. Liquibase was already mentioned for relational databases. For testing there is JUnit and the webMethods Test Suite is based on that. Configuration values can be updated on the file system with some scripting or more elaborate approaches like WxConfig NG. The list goes on.

Your path to deployment automation

The main goal of this article was to bring structure into the topic of deployment. This allows us to see the discrete areas that play a role. And just like with good system architecture, we can address each of them in isolation. That makes the initial work a lot easier. Much more important, though, is what it means for the users. If I can explain my entire deployment pipeline with e.g. one or two slides, that has a tremendous (positive) impact on all other parties involved.

In particular we are talking about operations. If something fails on Sunday morning at 1:37 am, it is a relief to know that I, as the operator on duty, can handle 99% of the issues myself. And for the tiny remainder where I need additional help, I am able to provide a concise report about the issue.

What does that mean in practical steps? 

Configuration management

One of the main aspects of a successful deployment approach is configuration management. Each environment is different, and we need a good way to deal with those differences. If you check out my other blog posts, you will see that I have written a lot about this topic. For advanced scenarios in the application code, a dedicated package like WxConfig NG may be helpful. But you can always start with what is available out of the box.

For values inside your application code, that means using the built-in “Global Variables”. You can use some shell scripting to populate values during deployment and have a solid start. For other values, esp. adapter connections, you will need the Microservices Runtime if you want out-of-the-box functionality. For the classic Integration Server, some custom tooling is probably necessary. As an example, adapter connections (incl. passwords) can be updated using an open-source tool from JahnTech. Please have a look at this blog post for details and some YouTube videos.
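
As a sketch of the shell-scripting approach on a Microservices Runtime: its configuration variables template (application.properties) can set Global Variables via keys of the form globalvariable.<name>.value. Treat the key format and file location as assumptions to verify against your product version:

# Sketch: populate Global Variables per environment on an MSR via its
# configuration variables template. Key format and file location are
# assumptions; verify against your product documentation.
cat >> "$WEBMETHODS_HOME/IntegrationServer/application.properties" <<EOF
globalvariable.DB_HOST.value=db.prod.example.com
globalvariable.STAGE.value=PROD
EOF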

If your organization hasn’t worked a lot in this area, but wants to have fast results, please consider getting in touch. I have spent more than 15 years with this topic and my code handles such things for a number of large organizations. They include government bodies, one of the largest global retailers (in more than 13k stores), logistics, and banks.

In my experience this is the single biggest challenge for most organizations. Once you have it solved, the rest is relatively easy.

Shell scripts

The great thing about deployment is that it boils down to file system operations and, optionally, a few HTTP calls (see “How things work internally” above). This makes it surprisingly easy to automate things. To build a package from the VCS (usually Git) and deploy it to a target environment, you only need a few steps. Below is some pseudo-code to illustrate that:

  • The first snippet creates the installable package from the VCS and uploads it to a binary repository (which also solves a lot of auditing issues)
  • The second part downloads and installs the package
				
# ====================================================
# Build webMethods Integration package from Git and
# upload to binary repo for deployment from there
# ====================================================

# Get source from Git
cd $WEBMETHODS_HOME/IntegrationServer/packages
git clone https://my.server.com/path/my_package.git MyPackage

# The following lines are only needed for Java code
cd $WEBMETHODS_HOME/IntegrationServer/bin
./jcode.sh makeall MyPackage
./jcode.sh fragall MyPackage

# Create package ZIP archive
cd $WEBMETHODS_HOME/IntegrationServer/packages/MyPackage
zip -r /tmp/MyPackage.zip .

# Upload ZIP to binary repo
curl -v -u ci:manage --upload-file /tmp/MyPackage.zip \
  https://binary.repo.com/webm/MyPackage.zip

# ====================================================
# Download webMethods Integration package from
# binary repo and deploy on local Integration Server
# ====================================================

# Download ZIP archive and unzip into correct directory
wget --user=ci --password=manage -O "/tmp/MyPackage.zip" \
  https://binary.repo.com/webm/MyPackage.zip
unzip "/tmp/MyPackage.zip" -d "$WEBMETHODS_HOME/IntegrationServer/packages/MyPackage"

# Perform whatever changes are needed for adjusting to the environment.
# This can include the use of MSR's Dynamic Variables or the admin REST API
# of webMethods Integration.

# Enable package
curl -u 'Administrator:manage' --request POST \
  --url "http://localhost:5555/admin/package/MyPackage?action=enable"

While I would certainly recommend some level of input validation and error checking for the final versions, these scripts demonstrate how easy it is to automate the deployment of packages. I copied the contents from some real-world scripts with minor adjustments, but didn’t check them further. So please excuse possible errors.
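
If you turn the snippets into real scripts, a few defensive lines at the top already go a long way. A minimal example:

# Sketch: minimal hardening for the scripts above.
set -euo pipefail                                    # abort on errors, unset variables, pipe failures
: "${WEBMETHODS_HOME:?WEBMETHODS_HOME must be set}"  # fail early if the variable is missing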

In closing

Hopefully I could convey that deployment is an important aspect of the SDLC. When I was first pushed into the topic back in 2008, I truly underestimated its importance and fascination. Ever since then it has been an important part of my professional journey. It also played a role in boosting my career.

Of course there are more aspects, and esp. the exact situation at a customer does have an influence. Do you have any particular questions here? If so, I will be happy to answer them in the comments.

If you want me to write about other aspects of this topic, please leave a comment or send an email to info@jahntech.com. The same applies if you want to talk how we at JahnTech can help you with your project.

© 2024, 2025 by JahnTech GmbH and/or Christoph Jahn. No unauthorized use or distribution permitted.
