Tools & Consulting for the webMethods® platform

Legacy software: Better than its reputation

Moving away from "legacy software" is easier said than done. It involves a complex combination of business and technical aspects that all need to be looked at. Having worked on a number of migration projects over the last 20 years, I want to share my thoughts here. This is only a "short" overview, but it will give you a starting point for what to look at.

All the time I hear people say “this is legacy software”, and they say it as if it were a bad thing. I respectfully disagree, to the point of believing that there is often more good than bad about so-called legacy software. Let me explain why, in my view, all software running in production is by definition legacy, and what that means for strategic decisions about software in the enterprise.

The starting point for any discussion about replacing an existing technical system (in this case a piece of enterprise software) should be how the new system is better than the old one. Let’s look at various aspects that have an impact on this but are often not well understood.

Understanding the problem

Many people have a tendency to look only at certain aspects when deciding between two options. This is a general behavior and not specific to the topic of replacing software. I guess it is a protection mechanism of the brain to cope with complexity. The problem is that there is a high probability that this backfires, simply because at the start of the process we have too little understanding of the problem space. So we exclude certain options rather (actually too) early in the process.

To illustrate why this “We need to focus” approach is often done the wrong way, we don’t have to look far. Think about agile vs. waterfall and why the latter does not work. Waterfall, with the design phase in the beginning, requires an in-depth understanding of the problem domain. Unless the project team members have already done something sufficiently similar, how are they supposed to know enough to get it right the first time within the typical budget and time boundaries?

Handling the combined complexity of business and technical requirements is a huge topic. I remember a quote from Donald Knuth where he basically says “You only truly understand a problem after you have implemented a first iteration to solve it”. It’s like building a house: nobody expects anyone to be able to get this right the first time.

For additional thoughts on this subject, please have a look at my article “Understand the problem. Truly.”

Multiple dimensions

To make matters worse, we have multiple dimensions or aspects when deciding about old vs. new software. The obvious ones are functionality vs. stability. If the new software offers additional features that help the business, it looks like an obvious choice. But what if things do not work as advertised, and instead produce wrong numbers? What if strategic decisions are made based on those numbers? What if such a wrong decision not only means losing a few big deals, but leads straight into bankruptcy? Other obvious candidates are cost of operations, compliance with existing business continuity measures, etc.

While people will usually not flatly deny that all those dimensions exist, they often still act as if they did. Basically, they weigh the various dimensions according to personal impact and knowledge. We all have a tendency to underestimate the complexity of things we do not understand in detail. “How hard can it be to do this?” is the typical thought here. In many cases the answer is “Much harder than you thought while you were still blessed with ignorance.”

As to personal impact, this is a bit like federal vs. state government. In many countries the federal government (top level) mandates certain things from the state government (lower level) without considering practical problems, let alone proper funding. The same happens in commercial organizations when IT denies the necessity of business features or vice versa. This is basically silo thinking (see my article on creating customer value).

Thinking long-term

There is a tendency in many organizations to more or less ignore the mid- to long-term ramifications of important decisions. As in the “Understanding the problem” section above, this usually happens as a way to deal with complexity. In addition, the important is often not clearly separated from the urgent. Putting all this together, we end up with approaches (to avoid the word “solutions” here) that over-emphasize immediate benefits. It is a “let’s worry about this tomorrow” attitude that has the potential to kill the organization.

In the technical domain we even have the term “technical debt” to describe that problem. Not keeping your house clean on the technical level will very soon mean that you lose business agility. Death by a thousand cuts is probably the figure of speech that describes this best. Some folks still think of reducing technical debt as a kind of perfectionism that software developers love too much. And sometimes that is the case. But there is a good reason why senior developers spend about three quarters of their time reading and understanding existing code rather than writing new code. If it is your job to maintain a mission-critical system, you have to think about tomorrow.

How is all this relevant for the question of replacing an existing piece of software? With your existing software you have hopefully found a way to handle its long-term evolvability. People know it inside out, have developed a feeling for which areas are a bit sensitive, etc. With new software all of this has to be re-learned, and not only that, but also the more basic aspects. Think about standard operations topics, end-user training, and developers who need to internalize how things work. All those efforts tend to be underestimated.

Then there are those situations where people did not get their house in order with the old software. Simply changing the software and blaming the old one may look like a very tempting way to buy some time. Whether this happens with honest intentions or out of cynical calculation is not that important. But what makes you think you will be able to get things in order now, when the only thing that changed is the software? There are exceptions, but they require many things to coincide. In general, chances are that after the “honeymoon” period things will be at least as bad as before.

What to do?

If you have neglected your software so long and hard that it is basically beyond repair, you are probably best off switching to something new. All the issues mentioned above still apply. But apart from the technical issues, it will be much easier to get political support and funding for something new.

Should your situation not be completely desperate, I recommend very seriously checking whether keeping the existing solution might be the better option. A very important factor here is the effort it takes to actually turn off the old software. How often has your organization introduced a new system with the goal of replacing something, only to end up with both in production?

If you have opted for something new, you should prepare for a scenario without a hard switch-over. Migrating the data alone will be difficult because of its sheer volume, even if you have an entire weekend of downtime, which in itself is unrealistic for most organizations today. You also need a way to continue running the business when (not if) something goes wrong. You can think of this as the agile way of moving to a new system. Doing a hard cut-over to the new software would be the equivalent of waterfall, and we all know how well that works.

Challenges

When the decision has been made to replace the existing software, a number of problems need to be taken into account. The following list is far from complete but should give you a feeling for what we are talking about overall.

Keeping the old software

It was already mentioned above, but I bring it up here again. This is the number one risk when the initial estimation indicates that it will not be possible to completely get rid of the old system within the desired duration of the project. (It will be much worse if you wrongly assumed you could decommission the old software.) Sooner or later some genius will come up with the idea of keeping those parts of the old software that cannot be migrated in time. Congratulations, you just got yourself an awful load of additional problems plus the cost of running both applications in parallel.

Specifically, the complexity of splitting operations between two systems is huge. What about data? It will diverge faster than you can look at it, as the sketch below illustrates. That means a data cleanup project down the road and wrong business decisions because of faulty data in the meantime. Also, you will spend considerable time deciding which functionality stays with the old system and which moves to the new one. Your users will love switching between programs, by the way.
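
To make the divergence tangible, here is a minimal sketch (in Java, the natural choice on this platform) of the kind of reconciliation check you suddenly need just to know how far the two systems have already drifted apart. The table name, column names and connection URLs are purely hypothetical.

import java.sql.*;
import java.util.HashMap;
import java.util.Map;

// Minimal reconciliation sketch: compare the last-modified timestamp of every
// customer record in the old and the new system and report divergence.
// Table name, column names and connection URLs are hypothetical placeholders.
public class ReconciliationCheck {

    public static void main(String[] args) throws SQLException {
        try (Connection oldDb = DriverManager.getConnection("jdbc:postgresql://old-host/crm", "migration", "secret");
             Connection newDb = DriverManager.getConnection("jdbc:postgresql://new-host/crm", "migration", "secret")) {

            Map<String, Timestamp> oldState = loadState(oldDb);
            Map<String, Timestamp> newState = loadState(newDb);

            int missing = 0;
            int diverged = 0;
            for (Map.Entry<String, Timestamp> entry : oldState.entrySet()) {
                Timestamp other = newState.get(entry.getKey());
                if (other == null) {
                    missing++;      // record never made it to the new system
                } else if (!other.equals(entry.getValue())) {
                    diverged++;     // record was changed in only one of the systems
                }
            }
            System.out.printf("Only in old system: %d, diverged: %d%n", missing, diverged);
        }
    }

    private static Map<String, Timestamp> loadState(Connection con) throws SQLException {
        Map<String, Timestamp> state = new HashMap<>();
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT cust_id, updated_at FROM customer")) {
            while (rs.next()) {
                state.put(rs.getString("cust_id"), rs.getTimestamp("updated_at"));
            }
        }
        return state;
    }
}

Even this trivial check glosses over time zones, soft-deleted records and fields that legitimately differ between the two systems. It is yet another piece of infrastructure you now have to build, run and trust.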

In all likelihood, the net result will be two systems running in parallel for a long time. The additional operational costs are nasty, but they are not your true problem. The real issues are wrong data (leading to wrong decisions) and reduced business agility because of badly defined coupling between the systems. And all that is just the surface.

Data migration

An aspect that is often overlooked initially is the migration of historical data. Taking a CRM system as an example, you absolutely need to preserve the history you have with your customers. Transferring that data to a new system comes with at least two major challenges. First, the data models will not be identical, so things are more difficult than a simple export-import run. Instead you need some data conversion. Different field names in the database are the easy part. But what about different data types? Or relationships (foreign keys) between tables that dictate the order in which records must be inserted? Nothing unsolvable, but all these things are far from trivial and take up considerable time that is often not in the budget.
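
To illustrate how quickly “just copy the data over” turns into real conversion logic, here is a minimal, hypothetical sketch. The legacy field names (KUNDE_NR, SEIT, STATUS), the date format and the status mapping are all invented; a real project has hundreds of such rules, and each one has to be agreed upon with the business side.

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;

// Hypothetical conversion sketch: the legacy system stores everything as strings,
// the new system expects typed fields and a different status vocabulary.
// All field names, formats and mappings are invented for illustration.
public class CustomerConverter {

    // Legacy status codes mapped to the new system's values (assumption)
    private static final Map<String, String> STATUS_MAP = Map.of(
            "A", "ACTIVE",
            "I", "INACTIVE",
            "D", "PENDING_DELETION");

    // Legacy dates are stored as text in day.month.year order (assumption)
    private static final DateTimeFormatter LEGACY_DATE = DateTimeFormatter.ofPattern("dd.MM.yyyy");

    public static Map<String, Object> convert(Map<String, String> legacy) {
        Map<String, Object> converted = new HashMap<>();
        converted.put("customerId", legacy.get("KUNDE_NR"));                              // plain rename
        converted.put("customerSince", LocalDate.parse(legacy.get("SEIT"), LEGACY_DATE)); // type conversion

        String status = STATUS_MAP.get(legacy.get("STATUS"));
        if (status == null) {
            // Undocumented legacy value: better to stop and ask than to guess
            throw new IllegalArgumentException("Unknown legacy status: " + legacy.get("STATUS"));
        }
        converted.put("status", status);
        return converted;
    }

    public static void main(String[] args) {
        Map<String, String> legacy = Map.of("KUNDE_NR", "4711", "SEIT", "03.05.1998", "STATUS", "A");
        // Insert order matters in the target database: the customer (parent) has to
        // exist before any of its orders (children) can reference it via foreign key.
        System.out.println(convert(legacy));
    }
}

The foreign-key aspect barely shows up here; it mainly determines the sequence in which the converted records can be written to the new database.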

Second is the amount of data and what it means in terms of downtime (or the lack thereof). With all the complexity that was merely touched upon above, a data migration is likely to take much longer than the biggest maintenance window you have in the near future. And even if it did fit, the initial run will never go through without problems. So you must have an approach that does not depend on downtime of your old system. This alone increases the complexity by an order of magnitude. So instead of one weekend, brace yourself for the data migration taking a couple of weeks or rather months.
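
One common way to become independent of a downtime window is an incremental approach: one initial bulk copy, followed by repeated delta rounds that only transfer what changed since the previous run, until the remaining gap is small enough for a short, final cut-over. The sketch below only outlines that loop; the watermark handling and the helper methods (fetchChangedSince, writeToNewSystem) are assumptions, not a ready-made tool.

import java.sql.Timestamp;
import java.time.Instant;
import java.util.List;

// Sketch of an incremental migration loop: instead of one big-bang copy inside a
// maintenance window, changed records are transferred in rounds until the remaining
// delta is small enough for a short, final cut-over.
// fetchChangedSince and writeToNewSystem are hypothetical helpers.
public class IncrementalMigration {

    record CustomerRecord(String id, String payload) {}

    public static void main(String[] args) {
        Timestamp watermark = Timestamp.from(Instant.EPOCH);   // first round copies everything

        while (true) {
            Timestamp roundStart = Timestamp.from(Instant.now());
            List<CustomerRecord> delta = fetchChangedSince(watermark);  // read from old system
            for (CustomerRecord record : delta) {
                writeToNewSystem(record);                               // must be an idempotent upsert
            }
            watermark = roundStart;

            if (delta.size() < 1000) {
                // The delta is small: freeze writes on the old system, run one last
                // round, then switch the users over to the new system.
                break;
            }
        }
    }

    // --- hypothetical helpers, bodies omitted for brevity ---
    static List<CustomerRecord> fetchChangedSince(Timestamp since) { return List.of(); }
    static void writeToNewSystem(CustomerRecord record) { }
}

The hard parts hide in the helpers: detecting changes reliably in the old system (not every table has a usable timestamp) and making the writes idempotent so that a failed round can simply be repeated.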

Data quality is next. Your old system will hold data that is not compliant with what the business process documentation mandates. What exactly the differences are, you will only find out along the way. If you are lucky, it just means that your transfer logic needs to be adjusted, re-tested, etc., and you lose a few days. But I wouldn’t bet on that. Instead, be prepared to go back to the drawing board with the business folks and discuss what needs to be done. Hopefully this happens only tens and not hundreds of times. Either way, it changes the migration approach considerably.
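
A pattern that helps here, sketched below with made-up rules, is to validate every record against the documented business rules during the transfer and to park violations for review with the business side, instead of either migrating them blindly or letting a single bad record abort an hours-long run.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Sketch of a validation step inside the transfer logic: records violating the
// documented business rules are neither migrated blindly nor allowed to abort the
// run; they are parked for review with the business side.
// The two rules below are invented examples.
public class MigrationValidator {

    public static Optional<String> validate(Map<String, String> record) {
        String email = record.get("EMAIL");
        if (email == null || !email.contains("@")) {
            return Optional.of("missing or malformed e-mail address");
        }
        if ("ACTIVE".equals(record.get("STATUS")) && record.get("CONTRACT_END") != null) {
            return Optional.of("active customer with a contract end date");
        }
        return Optional.empty();   // record complies with the documented rules
    }

    public static void main(String[] args) {
        List<Map<String, String>> reviewQueue = new ArrayList<>();
        Map<String, String> record = Map.of("EMAIL", "nobody", "STATUS", "ACTIVE");

        validate(record).ifPresent(reason -> {
            // Park the record instead of failing the whole migration run
            System.out.println("Needs review: " + reason);
            reviewQueue.add(record);
        });
    }
}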

The market does not wait for you

Replacing a non-trivial system with loads of important data takes time. It ties up a lot of resources that could be used to grab market share from the competition or make your existing customers happier than they are now. Spending millions (do you calculate the opportunity costs?) on something that is pretty much comparable to what you already have is a risky idea. What does that mean for topics like customer churn or annual recurring revenue? Those are the aspects such a project must be weighed against.

Know-how

The points above already hint at it: you need special knowledge to successfully execute a migration project. This experience can roughly be organized into four areas:

1) General knowledge about migration projects; this can easily be sourced from the outside.

2) In-depth technical knowledge about the old system and its customization; do you have people with this, or will you need to bring in product experts from the outside who first have to learn about your organization and custom changes?

3) Experts for the new software who do the customization; it is highly unlikely that you have these in-house.

4) Business analysts who translate the desired future way of running your business into technical requirements for the new software; hint: the people who are working with the old software today are usually not qualified to do this.

All these different resources need to be made available and coordinated. The latter is the lesser problem, which does not mean it will be easy. But finding qualified people for all areas is going to be tough. It is also a likely example of the old saying that a chain is only as strong as its weakest link. You need to think about this from a risk management perspective.

In closing

If you take only one thing away from this text, it should be to look at things more closely. Check with yourself whether you have asked the right questions. Selecting the questions with care will alone have a profound impact on the result of your analysis. And if you then balance the various aspects, your chances of success will be much higher.

And lastly, it is important to realize that the term “legacy software” basically means something like “not cutting edge”. If you look at it that way, all of a sudden it is not so bad anymore. And where would you draw the line otherwise? Overall, it is always good to think critically about one’s situation. But a piece of software is not unsuitable for its job simply because of its age.

If you want me to write about other aspects of this topic, please leave a comment or send an email to info@jahntech.com. The same applies if you want to talk about how we at JahnTech can help you with your project.

© 2024 by Christoph Jahn. No unauthorized use or distribution permitted.
