So we have a legacy application that is too expensive to maintain. What do we do about that? Alas, there is no easy answer, because it costs Big Bucks and Big Resources to replace code. One is also faced with the issue of short-term cost vs. long-term cost. By that I mean that one could get feature X into the software in two weeks with a new system, but it will take six months to get it into the existing system. Those six months still cost a whole lot less than the umpteen engineering years required to replace the whole system. So there is a strong temptation to spend a small amount of money today to get feature X to the customer, even though each such feature costs far more than it would in a replacement system. In other words, in B-School jargon we have a classic present value problem.
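To make the present value framing concrete, here is a minimal sketch. All the dollar figures, the 10% discount rate, and the ten-year horizon are hypothetical numbers of my own invention, not from any real project; the point is only the mechanics of discounting each option's annual cash outflows back to today so they can be compared on equal footing.

```python
def present_value(cashflows, rate):
    """Discount a list of annual cash outflows back to today.

    cashflows[0] is spent now, cashflows[1] one year from now, etc.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.10  # assumed annual discount rate

# Option A: keep the legacy system -- no upfront cost, but heavy
# maintenance spending every year for ten years.
keep = present_value([400_000] * 10, rate)

# Option B: replace it -- a large upfront cost, then much cheaper
# maintenance for the remaining nine years.
replace = present_value([1_500_000] + [50_000] * 9, rate)

print(f"PV of keeping the legacy system: ${keep:,.0f}")
print(f"PV of replacing it:              ${replace:,.0f}")
```

With these made-up numbers the replacement comes out cheaper in present value terms, even though each individual year of "keep" spending looks small; that asymmetry is exactly the temptation described above.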
[That problem is complicated by the fact that to make the present value case, the developers will have to make public the outrageously long time it takes them to incorporate "simple" changes. Worse, it collides directly with developer ego. In nearly half a century in this business I never met a software developer who didn't think they were at least better than average. (The duds are always the guys who just left to work someplace else while leaving their software behind.) So developers never, ever want to admit how long it actually takes to do something, because it might make them look bad. Thus the psychology of software development provides a Catch-22 that works against making the present value case for the very replacement the developers would desperately like to have.]
There are four basic overall strategies, none of which are attractive:
Bite the bullet. The in-house development staff takes on the task of building the entire replacement system. Because the legacy application is probably large, getting it done in any reasonable time will require a large fraction of the development staff to essentially disappear for several years until it is done. That leaves only a skeleton crew to churn out the features the customers want now, which will lengthen time-to-market a great deal compared to even the current abysmal productivity. As a practical matter, this is rarely a viable strategy for anything other than small applications that divert only a small number of resources.
Parallel development. In this case one hires a bunch of new developers to build the new system in-house while the existing developers continue to churn out features X, Y, and Z for the customer. Basically one is trading off bottom-line "R&D" money for in-house developer resources. There are three problems with this approach. First, the original in-house developers are not going to be real happy that new hires are brought in to do the new system while they are stuck with maintaining the old one; count on a mass exodus of in-house developers if the economy is even moderately robust. The second problem is: what does one do with the extra staff once the new system is completed? The third is the short-term cost of double the developer staff until the new system is completed.
Outsource the replacement. The basic idea is that one lets someone else develop the new system, so that all the in-house developers have to worry about is the transition from the old system to the new one. This leaves the in-house developers free to continue churning out features X, Y, and Z to keep customers happy in the short term. This is just a trade-off of money for in-house developer resources, so that one gets both the short-term features and the long-term replacement without having to juggle the staffing "bulge" associated with parallel development. The first problem with this is the large cash outflow to buy the new system, which is not going to make the bean counters happy no matter what the present value looks like. (Corporations have notoriously short vision for expenses.) The second problem is morale, because the in-house developers are again stuck with doing maintenance while someone else is doing the interesting stuff. Another big problem is synchronizing the changes (features X, Y, and Z) the in-house staff is making to the application, which must be incorporated in the new application as well. The biggest problem, though, is defining the requirements. One of the things that makes the old application unmaintainable is that no one on the present staff knows what the entire suite of requirements is. Providing a decent requirements specification for the contractor may end up sucking up a large amount of in-house resources anyway.
Piecemeal replacement. In this scenario a smallish portion of the in-house staff replaces the worst offending modules of the legacy system one at a time, developing a new module that replaces the corresponding module in the legacy system. This allows most of the developers to continue churning out features X, Y, and Z with only a modest reduction in time-to-market. Replacing the worst offending modules first ensures the maximum benefit to maintenance effort. In theory one eventually gets an entirely replaced system. While this is the most promising approach, it has traditionally not worked out well in practice. One problem is that such systems never seem to get done; by the time the last module is replaced, the first module that was replaced is already back on the maintainability Hit List. That follows from the second problem: if the original system had been properly modularized, it wouldn't be a major maintainability problem in the first place. It is also because the traditional way to identify what to replace is the minimum cut-set, which is an awful way to subdivide already unmaintainable applications for reasons I will address in the next post, where I evaluate legacy replacement techniques. The result is that it takes too long to integrate each new module. Worse, the new module is likely to inherit poor maintainability from the original application. It doesn't take long for the project to become unmanageable.
The first three strategies have serious drawbacks that are unrelated to the software itself. They primarily involve business issues, like time-to-market, present value cost, and developer morale, that are independent of the actual software construction. As a result, each will be a very difficult sale to Management. For what it is worth, I have always felt that parallel development is the best choice if it is ongoing. That is, a shop should always have two teams leap-frogging one another for individual systems whenever the requirements and technologies are volatile. While one team is maintaining the old system, the other is building the next generation based upon problem space changes and lessons learned. When the new system is completed, its team continues to maintain it while the other team switches into development mode for the next generation of the application. That way one avoids the morale problem, because each team alternates between original development and maintenance. More important, one is always getting a state-of-the-art application in a timely fashion as requirements and technologies change. Finally, the maintenance phase is always time-boxed, so one always has as high a maintenance productivity as the current state of software development allows. Good luck, though, on selling management on perpetually having twice as many developers as needed for a single system. (As a practical matter, project ramp-up and ramp-down will usually mitigate this by providing some migration between teams.)
If you can sell one of the first three, that's fine. In each case an entire new system is developed from scratch, so the notion of 'replacement' becomes purely a transition issue from one system to the other. As a practical matter, though, I don't see any of those being viable in most business contexts. That leaves one with piecemeal replacement, with all its technical problems for the actual construction of the replacement modules. The approach I advocate assumes such piecemeal replacement. However, it addresses the technical issues in a unique fashion, so that risk is drastically reduced and a well-managed completion is realistic. But that's a matter for a later post.