Deployment options for webMethods Integration

There are several ways to deploy code to webMethods Integration. Let's look at how things work under the hood, some common pitfalls, and how you can easily automate things. This knowledge will allow you to deliver more value to projects.

Deployment is the activity of bringing code onto a system so that it can perform its designated duties. In other words: Without deployment all you have produced is cost, but so far the organization does not get any value from your wonderful solution. Yet, I have often seen that deployment is thought of as something relatively trivial (or not thought of at all until it is almost too late). Sometimes developers also consider it something that is not really their concern, but the job of a release engineer.

I disagree with both aspects. Deployment is a critical aspect not only of operations; it affects the entire life cycle. Design choices at the very beginning of a project often have a fundamental impact on how easy or difficult it is to deploy an application. This starts with new developers joining the team. If they need to go through a multi-page list of manual activities before they can even start development, that is already a very bad sign. Things only get worse from there. CI/CD is by definition impossible, because it requires that things be automated. And then there are the various situations (incl. emergencies) when you need to deploy onto your production system.

Last but not least I would like to mention that not too many people have truly mastered this topic. So it might be a good opportunity to stand out and give your webMethods career a boost.

Scope and goal

The main goal of this article is to present the main approaches to deployment on webMethods Integration. There are some differences between the classic Integration Server and the Microservices Runtime. Where necessary I will mention those explicitly. Otherwise the content is relevant for both product variants.

In addition there will also be some side notes, mainly about my own practical experiences and things I learned from others over the years. Yet this blog post does not aim to thoroughly discuss the conceptual side of deployment and how it is linked to other parts of the SDLC (Software Development Life Cycle). While it is not meant to talk about CI/CD explicitly, there will be connections every now and then. Also, towards the end there will be an outlook on deployment automation.

This article assumes you have reasonable experience with webMethods Integration. 

How things work internally

Before we can look at different approaches of doing deployments in practice, we need to understand how webMethods Integration handles things internally. The core concept here is packages. They are the way that code (and additional things that are needed, more on that later) is organized. You can easily see this when you look at the main log file (server.log) during server startup. The two main blocks in the log file are package loading and then package startup. The logic that identifies the packages to be loaded and started goes over the contents of the ./packages sub-directory. All directories that contain the necessary files (esp. manifest.v3) and sub-directories (esp. ./ns for the namespace) are considered to represent/contain packages and are processed accordingly.
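To make this more tangible, here is a sketch of what a package could look like on disk. The package name, folder structure, and service are made-up examples, and the exact set of files and sub-directories varies with the product version and with what the package contains:

# Typical on-disk layout of a package under ./packages (illustrative)
MyPackage/manifest.v3            # package descriptor: version, dependencies, startup services
MyPackage/ns/                    # namespace tree: folders, services, document types, triggers
MyPackage/ns/my/folder/myService/node.ndf
MyPackage/code/classes/          # compiled Java service classes (if any)
MyPackage/code/jars/             # additional .jar files used by the package (if any)
MyPackage/pub/                   # optional static web content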

What does that mean for package deployment? You basically have to do one of the following things:

  • Create the required sub-directories and files in the ./packages directory and either
    • perform a server restart, or
    • let the server know that the new package exists and should be activated
  • Copy a ZIP file containing the package contents into ./replicate/inbound and tell the server to install it
  • On Microservices Runtime only:
    • You can copy a ZIP file containing the package contents into ./replicate/autodeploy
    • This feature is turned off by default. To enable it set watt.server.autodeploy.enabled=true in the “Extended Settings” menu of the admin web UI or server.cnf (restart required in the second case).

Whatever you are doing for deployment (incl. the use of webMethods Deployer), it comes down to one of these approaches.
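For illustration, here is a minimal sketch of the last variant, assuming a Microservices Runtime with the flat directory layout also used in the scripts further below; the package name and paths are placeholders, and in practice the setting can also be made in the Extended Settings UI as mentioned above:

# One-time setup: enable auto-deploy (restart required when editing server.cnf directly)
echo "watt.server.autodeploy.enabled=true" >> "$WEBMETHODS_HOME/IntegrationServer/config/server.cnf"

# The deployment itself: drop the package archive into the auto-deploy directory
cp /tmp/MyPackage.zip "$WEBMETHODS_HOME/IntegrationServer/replicate/autodeploy/"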

You might now say “Hang on, what about partial deployments?”. Ok, let’s look at this in a bit more detail. The most important scenario for those is the installation of product fixes. Updates to the WmPublic and WmRoot packages usually contain code changes. Sometimes they also contain updated .jar files, but that doesn’t change things on the conceptual level. If you inspect the ZIP files that contain product fixes, you will see some additional files. Amongst other things their job is to ensure traceability by displaying the fixes in the server web UI.

On development systems, there is also the dynamic reload of discrete services due to VCS operations. This is triggered by webMethods Designer.

Full vs. partial deployment

You can either deploy an entire package (full deployment) or parts of it (partial deployment). The default deployment approach in the official tooling is “full”. By tooling I mainly mean the server web UI and the Deployer GUI. So users are guided towards selecting packages in their entirety and consequently have them deployed as such. On the other hand, you can also select individual services, document types, triggers, etc. for deployment.

In the past many organizations have approached this with full deployments for the initial release, and often also for the following bigger releases. Everything in between was done using partial deployments. I must confess that it feels natural to do that. It even creates the feeling of reduced risk. After all, I am not changing everything but a carefully selected sub-set, which seems to limit the blast radius should something really go wrong.

Sorry to disappoint you, but your feeling betrays you here. Actually, it is the other way around: it is the partial deployment that carries the greater risk. Let me explain this counter-intuitive statement.

Deployment failure

First we need to define what is meant by “deployment failure”. For the purposes of this article it means all problems that can be traced back to a given deployment. This includes:

  • Any run-time issue caused by the deployment
  • Running transactions that do not complete or get corrupted because of a deployment
  • Non-completion of the deployment itself
  • Introduction of circular dependencies
  • Missing artifacts (services, adapter connections, configuration files, etc.)
  • External dependencies like entries in mapping tables in databases
  • Server startup issues because of missing package dependencies
  • Disabled adapter connections due to wrong parameters (e.g. wrong hostname or user name)

The list above is by no means complete. But it covers the most common issues.

It seems that some of those can be attributed to user error, e.g. wrong connection parameters for external systems. Yet we have to ask ourselves: 1) How was it possible to make this error in the first place, and 2) why was it not detected before it reached the production system? In that light I would argue that it is a combination of problems with configuration management and test automation. In other words: It is not primarily human error.

Other issues originate very clearly at the process level, like not handling running transactions properly.

Overall, it is hard to find issues that are truly caused by human error, as long as proper processes and tooling are in place.

Failure types

Knowing what can go wrong is one thing (and certainly better than not knowing). But what we are really after is of course to deal with the issues and make them disappear. For a better understanding of how to address the various points, it is helpful to group them. What has helped me greatly in the past is to use the type of the root cause for this. I know this sounds abstract, so let’s look at an example.

If we take the point of missing artifacts from the list above, it is pretty likely a user error. Either someone forgot to put the artifact in question onto the list of parts to be deployed, or someone didn’t follow the list properly. In both cases an individual made an error. This is also the type of error we want to avoid by eliminating the activity altogether: Something that doesn’t need to be done cannot cause problems.

Another kind of failure can be found in the adapter connection that gets disabled due to wrong parameters. Here we have a problem with configuration management. 

Why partial deployments are risky

So why is a partial deployment such a risky thing? The core reason is that we move away from one of the core aspects of reliable IT operations: automation. A partial deployment means the creation of a new deployment set. This new deployment set may be incomplete or contain other errors. It may also expose hidden dependencies. A typical example would be an SQL script that adds a column to a table, while the other script, which creates the table in the first place, never made it to the target environment.

Of course there are ways to mitigate this example. The same probably goes for all other scenarios I can come up with. But the question is: What do we gain? Is it not much simpler to solve the full deployment once and for all? Not only does that mean that all future situations are addressed, it also frees people’s minds of the complexity. And it gets tested very frequently, incl. on lower-level environments. And this is where you want problems to show up.

If you don’t believe me, look at the direction in which the rest of the world is moving. Yes, it feels uncomfortable in the beginning. But how relaxed are you today with your partial deployments? I have yet to meet someone who can truthfully say “I don’t care if you perform a partial deployment on Friday afternoon”.

The “path towards deployment automation” will be covered in a separate section below.

Different approaches

Given how well established and widely deployed webMethods Integration is, a number of ways exist to perform deployments. Here is a list of the approaches I am aware of:

  • Manual deployment via admin web UI
  • webMethods Deployer
    • Manual mode
    • Scripted
    • Asset Build Environment
  • REST API
  • Custom using internal APIs
    • Shell script
    • Custom Ant tasks
  • webMethods Package Manager
  • Auto-deploy on Microservices Runtime

I will not go into details here; that would be too much. But let’s briefly look at the two approaches that I have seen in practice. There is the “old” school of thought that looks at the solution as a monolithic application. This is not so much about an anti-Microservices view, but it moves the discrete packages a bit into the background. This is how things were seen in the late 2000s, and you can recognize it in webMethods Deployer. The latter has the great advantage that it can handle additional webMethods products. So if you have more than just packages for webMethods Integration, please do consider using Deployer.

Then there is the perspective that an application is explicitly comprised of a number of packages that each perform their duties. Many packages are used across applications, simply because in a given organization there tends to be a lot of overlap. Example: If two applications deal with a custom database, I should really have a single package as the interface to that database. Developing duplicate code, each in the context of its main application, is a recipe for disaster (plus highly inefficient). Please check out this blog post for more details on that highly successful approach and how it moves you towards a true integration platform.

Given that I have implemented applications purely via webMethods Integration for the last 10+ years, my preferred approach has been the second one from above. In terms of implementation I have used some custom scripts. They are easy to write, and you can find an example further below. I will concede that it seems to be a bit of extra work relative to using Deployer. But the latter is conceptually not suitable for a per-package approach, and that would have caused so much (manual) work that writing my own scripts was much easier overall.

Going forward, the new webMethods Package Manager is probably the way to go. I truly like the elegance and simplicity of its approach. The one thing to look at is whether or not it matches your exact needs. To my understanding it is under active development (as of this writing), so re-checking every now and then is a good idea.

The auto-deploy feature is positioned in the context of Docker images. And I agree that it sounds tempting not to have to re-create the image on every change of a single package. But there is a good chance that your auditors will not allow it. After all, it would not be clear what code exactly was running when. It is still a great feature in the context of manual testing. But for CI/CD in general and production in particular I would suggest being careful.

Deployment made simple

As with so many things in life, the key to success with deployment is keeping it as simple as possible. The core aspect here is: no exceptions. Solve a given problem in a clean/elegant/simple way once and for all. Translated into a practical outcome, this means that I only have to find a way to safely handle full deployments and address all possible topics. This primarily includes:

  • Configuration management: Ensure that settings and values are correct for all environments. Also, there are usually various requirements on the OS, the DBMS, local file system, etc. Those must be formally specified and enforced.
  • Dependency management: Ensure that dependencies of all sorts are completely specified in a way that makes them machine-enforceable. The obvious candidates are package dependencies, which in some organizations are deliberately ignored.
  • Data migration: A classic example is the addition of new fields for entities (in relational terms: additional columns to tables). This must not only be handled in the associated adapter services etc., but also in the underlying databases. For an RDBMS, programs like Liquibase have proven to be a very solid solution (see the sketch after this list).
  • Pre-/post-processing: Sometimes there are very special additional operations that must be applied. There must be some kind of hook (often called “user exit”), so that you can plug in such things.
  • Downtimes: A deployment means that the instance in question must not process requests while the deployment is running. Ever. I know that you can sometimes work around this and “design” deployments such that no downtime is necessary. Yet there are two reasons why that is a really bad idea. Firstly, there are cases like an OS reboot that require some kind of approach for handling downtimes anyway. So it would be foolish not to re-use/leverage what is already there. Secondly, introducing custom workarounds not only requires considerable effort and probably screws up the architecture, but it also introduces additional risks for the deployment.
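To give the data migration point above a bit more shape, here is a minimal sketch assuming Liquibase with a SQL-formatted changelog; the file name, table, column, and connection details are all placeholders:

# Changelog kept under version control together with the package (illustrative)
cat > add-customer-email.sql <<'EOF'
--liquibase formatted sql
--changeset deployer:add-customer-email
ALTER TABLE customer ADD COLUMN email VARCHAR(255);
EOF

# Apply any pending changes as one step of the deployment
liquibase update \
  --changelog-file=add-customer-email.sql \
  --url="jdbc:postgresql://db.example.com:5432/appdb" \
  --username=app --password="$DB_PASSWORD"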

All of the topics mentioned above are universal and not specific to webMethods Integration. That means they are well documented on a conceptual level and have been solved for many scenarios. Since webMethods Integration is Java-based and works off of a regular file system, a ton of stuff can be re-used very easily. Liquibase was already mentioned for relational databases. For testing there is JUnit, and the webMethods Test Suite is based on it. Configuration values can be updated on the file system with some scripting or more elaborate approaches. The list goes on.

Your path to deployment automation

The main goal of this article was to bring structure into the topic of deployment. This allows us to see the discrete areas that play a role. And just like with good system architecture, we can address each of them in isolation. That makes the initial work a lot easier. Much more important, though, is what it means for the users. If I can explain my entire deployment pipeline with, say, one or two slides, that has a tremendous positive impact on all other parties involved.

In particular we are talking about operations. If something fails on Sunday morning at 1:37 am, it is a relief to know that I, as the operator on duty, can handle 99% of the issues myself. And for the tiny remainder where I need additional help, I am able to provide a concise report about the issue.

What does that mean in practical steps? 

Configuration management

One of the main aspects of a successful deployment approach is configuration management. Each environment is different and we need a good way to deal with those differences. If you check out my other blog posts, you will see that I have written a lot about this topic. For advanced scenarios in the application code a dedicated package like WxConfigNG may be helpful. But you can always start with what is available out of the box.

For values inside your application code, that means using the built-in “Global Variables”. You can use some shell scripting to populate values during deployment and have a solid start. For other values, esp. adapter connections, you will need the Microservices Runtime if you want out-of-the-box functionality. For the classic Integration Server some custom tooling is probably necessary. As an example, adapter connections (incl. passwords) can be updated using an open-source tool from JahnTech. Please have a look at this blog post for details and some YouTube videos.
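As a small sketch of the shell-scripting route on a Microservices Runtime, assuming its externalized application.properties mechanism is used to feed Global Variables (the variable names, value sources, and file location are assumptions for illustration; the classic Integration Server needs a different mechanism):

# Write environment-specific Global Variables before the server starts
# (the Microservices Runtime reads application.properties at startup)
CONFIG_FILE="$WEBMETHODS_HOME/IntegrationServer/application.properties"

cat >> "$CONFIG_FILE" <<EOF
globalvariable.backendUrl.value=${BACKEND_URL}
globalvariable.stage.value=${STAGE}
EOF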

If your organization hasn’t worked a lot in this area, but wants to have fast results, please consider getting in touch. I have spent more than 15 years with this topic and my code handles such things for a number of large organizations. They include government bodies, one of the largest global retailers (in more than 13k stores), logistics, and banks.

In my experience this is the single biggest challenge for most organizations. Once you have it solved, the rest is relatively easy.

Shell scripts

The great thing about deployment is that it boils down to file system operations and optionally a few HTTP calls (see “How things work internally” above). This makes it surprisingly easy to automate things. To build a package from the VCS (usually Git) and deploy it to a target environment, you only need to do a few things. Below is some pseudo-code to illustrate that:

  • The first snippet creates the installable package from the VCS
  • The second part downloads and installs the package
				
# ====================================================
# Build webMethods Integration package from Git and
# upload to binary repo for deployment from there
# ====================================================

# Get the source from Git
cd $WEBMETHODS_HOME/IntegrationServer/packages
git clone https://my.server.com/path/my_package.git MyPackage

# The following lines are only needed for Java code
cd $WEBMETHODS_HOME/IntegrationServer/bin
./jcode.sh makeall MyPackage
./jcode.sh fragall MyPackage

# Create package ZIP archive
cd $WEBMETHODS_HOME/IntegrationServer/packages/MyPackage
zip -r /tmp/MyPackage.zip .

# Upload ZIP to binary repo
curl -v -u ci:manage --upload-file /tmp/MyPackage.zip \
  https://binary.repo.com/webm/MyPackage.zip

# ====================================================
# Download webMethods Integration package from 
# binary repo and deploy on local Integration Server
# ====================================================

# Download ZIP archive and unzip into correct directory
wget --user=ci --password=manage -O "/tmp/MyPackage.zip" \
  https://binary.repo.com/webm/MyPackage.zip
unzip "/tmp/MyPackage.zip" -d "$WEBMETHODS_HOME/IntegrationServer/packages/MyPackage"

# Perform whatever changes are needed for adjusting to the environment
# This can include the use of MSR's Dynamic Variables or the admin REST API
# of webMethods Integration

# Enable package
curl -u 'Administrator:manage' --request POST \
  --url "http://localhost:5555/admin/package/MyPackage?action=enable"

While I would certainly recommend some level of input validation and error checking for the final versions, those scripts demonstrate how easy it is to automate deployment of packages. I copied the contents from some real-world scripts with minor adjustments, but didn’t check further. So please excuse possible errors.

In closing

I hope I could convey that deployment is an important aspect of the SDLC. When I was first pushed into the topic back in 2008, I truly underestimated its importance and fascination. Ever since then it has been an important part of my professional journey. It also played a role in boosting my career.

Of course there are more aspects, and especially the exact situation at a customer has an influence. Do you have any particular questions here? If so, I will be happy to answer them in the comments.

If you want me to write about other aspects of this topic, please leave a comment or send an email to info@jahntech.com. The same applies if you want to talk how we at JahnTech can help you with your project.

© 2024 by JahnTech GmbH and/or Christoph Jahn. No unauthorized use or distribution permitted.
