Microservices and code reuse for integration platforms

Today I want to write about a software development approach I was able to put into practice a few years ago. I was genuinely surprised by how well things worked out, and especially by how quickly tangible benefits became visible on the business side.

Before going into those points in detail, let me give a quick description of the environment in which I was working at the time. We are talking about a corporate integration platform with a fairly standard set of requirements. In the game were Salesforce.com (SFDC) as the CRM system, on-premise SAP as the ERP, and a number of custom applications also running in the company’s data center. The majority of the work revolved around SFDC. In some cases, changes to opportunities had to be reflected in home-grown applications running on-premise. Another scenario was adding selected contract data (managed in SAP) to SFDC accounts to get a more holistic view of customers.

To make things a bit more challenging, there were constant changes to the surrounding systems. Some were caused by technical requirements, but the vast majority were business-driven. With the integration platform sitting at the center of all this, there are by definition many dependencies of several kinds (watch this video on the different kinds of coupling for details). Handling those is one of the core competencies of integration software. Conceptually this was done using bounded contexts, and I recommend this video with Eric Evans for more details on the subject.

This post was triggered by reading Sam Newman’s book “Monolith to Microservices” (here is a video with Sam and Martin Fowler on the subject), where on page 42 he discusses the aspect of reuse. The conclusion there is that people should be very careful about treating reuse as a primary objective when optimizing their software development approach. When time-to-market matters, duplicating code is suggested as an alternative that should be examined in detail rather than dismissed out of hand as inefficient.

While this view is probably sufficient for its context (microservices vs. monoliths), it misses two aspects that I want to describe here. (Let me point out that I don’t want to imply that Sam Newman is not aware of them.) Those aspects are:

  1. Code maturity: How many iterations of refinement has the code gone through? Is it brand-new, or has it been worked on for a couple of years?
  2. Time horizon: Are we looking only at the shipping date of a discrete piece of software, or does it also matter how our organization as a whole fares over time? Put differently: Is this about local or global optimization?

The core argument Sam makes against reuse is that of organizational coupling. It is a strong one that I will not dispute. But what if you slightly change the underlying conditions? Let’s assume the code in question is not (relatively) new but has gone through several iterations and is already used by five other applications or services. The latter means two things. Firstly, the code should be clean, modular, and easy to extend, and it should be underpinned by sufficient automated testing. Secondly, and this is the more important point, the “domain” of the code is well understood. Whenever an additional service started to use it, people learned about another use case and how it differed from the others. Overall this meant that the code evolved into something more universal with every iteration. Eventually it would be flexible enough that a new consumer would not require any change at all.

In practice this meant that all services accessing SFDC (and likewise those for all the other systems) were equipped with an anti-corruption layer (see this article for more details). Also, versioning of services was introduced right from the start. Interestingly, new versions turned out to be necessary in only very few cases; almost always the anti-corruption layer was sufficient to handle differences. And it was evolving this layer that allowed me to reuse code, configuration, data structures, test scripts, etc.
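
To make this a bit more concrete, here is a minimal sketch of what such an anti-corruption layer can look like. All names (SfdcAccountRecord, Customer, SfdcAccountTranslator) are hypothetical and the SFDC payload is heavily simplified; this is not how my actual services were built, just an illustration of the principle that consumers depend only on the internal model, never on the raw SFDC representation.

```java
// Hypothetical, heavily simplified anti-corruption layer for SFDC
// account data. SFDC-specific field names and quirks stay inside the
// translator; consuming services only ever see the internal model.

// Raw representation as it arrives from SFDC (simplified).
record SfdcAccountRecord(String id, String name, String billingCountry) {}

// Internal domain model shared by all consuming services.
record Customer(String externalId, String displayName, String countryCode) {}

class SfdcAccountTranslator {

    // Translate the external SFDC record into the internal model.
    // All mapping rules live here, in one place.
    Customer toCustomer(SfdcAccountRecord source) {
        return new Customer(
                source.id(),
                source.name(),
                normalizeCountry(source.billingCountry()));
    }

    // Example of a quirk absorbed by the layer: SFDC might deliver
    // full country names, while internal systems expect ISO codes.
    private String normalizeCountry(String billingCountry) {
        return switch (billingCountry == null ? "" : billingCountry.trim()) {
            case "Germany", "Deutschland" -> "DE";
            case "United States", "USA" -> "US";
            default -> "UNKNOWN";
        };
    }
}
```

The design benefit is exactly the one described above: when SFDC changes, or a new consumer shows up with slightly different expectations, the translator evolves while the internal model (and therefore every consumer) usually stays untouched.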

But there were also cases when reuse was not immediately(!) possible, and it was indeed always due to time constraints around the required delivery date. My approach here was basically the following: I looked at how different the new code would be from the existing code that handled a somewhat similar requirement. If, on closer inspection, it was sufficiently different, the decision was to write a new service and live with some level of duplication. This was rarely the case, however. In the majority of cases I came to the conclusion that merging things later would be the preferable approach.

So after the initial delivery, I would set aside some time for refactoring and merge the “new” service into the existing one. Interestingly, the biggest benefit here was not getting rid of duplicated code. Instead, making the existing shared service support a new scenario very often clarified what its core was really about (again, understanding the domain). I would then move some logic out to the calling services and end up with a cleaner and simpler shared service. If you go through this exercise often enough, you end up with a clean and easy-to-maintain framework that saves you a great deal of time.
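
As an illustration of that kind of refactoring, again with purely hypothetical names: instead of letting the shared service branch on the calling scenario, the scenario-specific part moves out to the caller, and the shared service keeps only the genuinely common core.

```java
// Minimal internal model, just enough for this sketch.
record Customer(String externalId, String displayName) {}

// Before: the shared service knows about every calling scenario and
// branches on it, so every new scenario means touching shared code.
class ContractSyncBefore {
    void sync(Customer customer, String scenario) {
        if ("opportunity-update".equals(scenario)) {
            // scenario-specific enrichment buried in shared code
        }
        // ... common synchronization logic ...
    }
}

// After: the shared service contains only the common core.
class ContractSyncService {
    void sync(Customer customer) {
        // ... common synchronization logic only ...
    }
}

// The scenario-specific preparation now lives with the scenario,
// i.e. in the calling service.
class OpportunityUpdateHandler {
    private final ContractSyncService contractSync = new ContractSyncService();

    void onOpportunityChanged(Customer customer) {
        Customer enriched = enrichForOpportunity(customer);
        contractSync.sync(enriched);
    }

    private Customer enrichForOpportunity(Customer customer) {
        // placeholder for the scenario-specific enrichment
        return customer;
    }
}
```

The effect is the one described above: each merge forces you to decide what is truly common, and everything else migrates to the callers, leaving the shared core smaller and clearer.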

What I have hopefully brought across is that with a flexible software platform and some creative thinking, an efficient and adaptable approach to integrating many different systems is possible. The latter is a requirement that is only going to grow. And even after 20 years in this space, it still surprises me how much demand keeps increasing. Please leave a comment with your thoughts; I am very interested to learn your view. Thanks!

If you want me to write about other aspects of this topic, please leave a comment or send an email to info@jahntech.com. The same applies if you want to talk about how we at JahnTech can help you with your project.

© 2024 by JahnTech GmbH and/or Christoph Jahn. No unauthorized use or distribution permitted.
