The risk of change

17 07 2011

Many of the clients I work with find themselves in a situation where much of the technology they use is outdated and the cost of changing the codebase to support the latest version is prohibitively high. The common justification is that the risk of taking on a new version of their chosen platform is simply too high, and they would prefer to stick with what they know works – “Better the beast you know than the one you don’t”.

On the surface this seems like a reasonable, conservative assessment that minimises corporate risk – but is it really? What do upgrades offer? New features? Improved performance? What if the system already performs just fine, and do you really need those extra features? Probably not – but those are merely enticements to upgrade, and the real underlying reasons are more subtle. As a long-term strategy, upgrading regularly carries lower risk than not upgrading at all.

This seems counterintuitive – taking on more work and an extra set of unknowns should increase risk rather than decrease it, and in the short term it probably does. But what happens in the longer term? Skipping one version means that when the next one comes out, the leap needed to catch up is far greater and therefore far riskier, so it seems best to stick with what you have – the security updates would be nice, but not at the cost of upgrading all the code.

The problem with this strategy is that at some point you will face the choice of either upgrading or losing support from the platform vendor, and by then the differences between the versions are so great that upgrading will often mean putting other projects on hold. So the strategy that looked the least risky actually increases the overall corporate exposure in the long term. I’ve worked at companies that are crippled by their choice not to upgrade: large portions of support costs go into maintaining legacy code bases; developers become indispensable because of their knowledge of the legacy code – “the only person who can fix this”; and development of new features slows down because the best and most senior staff are maintaining the older code rather than cutting new code.

A better strategy would be to plan for change. Start writing software knowing that it’s going to need to change. But how can you possibly know what changes will be introduced in the next version? The point is that you don’t need to know. Code that is well designed, decoupled and covered by comprehensive automated tests is code that is ready for change. There will inevitably be some additional work, but the tests should ensure that the changes are safe and the design should ensure that the changes are contained.
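
As a minimal sketch of what that kind of decoupling might look like in practice – all the names here (MessagePublisher, VendorQueueAdapter, OrderNotifier) are hypothetical examples, not a prescription:

```python
# A sketch of isolating business logic from a vendor library so that a
# platform upgrade is contained in a single adapter class.
from abc import ABC, abstractmethod


class MessagePublisher(ABC):
    """The seam: business code depends on this, never on the vendor API."""

    @abstractmethod
    def publish(self, topic: str, body: str) -> None: ...


class VendorQueueAdapter(MessagePublisher):
    """Wraps the vendor client; a version upgrade only changes this class."""

    def __init__(self, vendor_client):
        self._client = vendor_client

    def publish(self, topic: str, body: str) -> None:
        # If a new platform version renames or reshapes this call,
        # the change stops here.
        self._client.send(topic, body)


class OrderNotifier:
    """Business logic; unaware of which platform version sits underneath."""

    def __init__(self, publisher: MessagePublisher):
        self._publisher = publisher

    def order_shipped(self, order_id: str) -> None:
        self._publisher.publish("orders.shipped", order_id)


class FakePublisher(MessagePublisher):
    """Test double: lets the automated tests run without the vendor library,
    so they keep passing across upgrades."""

    def __init__(self):
        self.messages = []

    def publish(self, topic: str, body: str) -> None:
        self.messages.append((topic, body))


if __name__ == "__main__":
    fake = FakePublisher()
    OrderNotifier(fake).order_shipped("42")
    assert fake.messages == [("orders.shipped", "42")]
    print("order notification test passed")
```

The design choice is the interface in the middle: the tests exercise the behaviour through it, so when the vendor library changes, only the adapter and its own narrow tests need attention.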

Does this mean organisations should always take the latest version? Not at all – it is prudent to wait until the latest version has proven stable, but not so long that another version is already out before you upgrade.

What about all that legacy code that is neither tested nor decoupled? Untested legacy code should be treated as a risk to the organisation: it limits the organisation’s ability to respond quickly to changes in the marketplace, increases the cost of change and deepens the dependence on key individuals. The best long-term mitigation is to write automated tests to cover the untested code and to gradually refactor the legacy code out of the codebase as new features are written and basic maintenance is performed.
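
One common way to start is with characterisation tests: pin down what the legacy code currently does, warts and all, before touching it. The sketch below assumes Python and an invented legacy_shipping_cost function purely for illustration:

```python
# A sketch of characterisation tests: record the current behaviour of
# untested legacy code so that later refactoring has a safety net.
import unittest


def legacy_shipping_cost(weight_kg, express):
    # Stand-in for tangled legacy logic nobody fully understands.
    cost = 5.0 + weight_kg * 1.2
    if express:
        cost = cost * 1.5 + 2.0
    return round(cost, 2)


class LegacyShippingCostCharacterisation(unittest.TestCase):
    """Captures today's behaviour, correct or not, as the baseline."""

    def test_standard_delivery(self):
        self.assertEqual(legacy_shipping_cost(10, express=False), 17.0)

    def test_express_delivery(self):
        self.assertEqual(legacy_shipping_cost(10, express=True), 27.5)

    def test_zero_weight_still_charges_base_rate(self):
        self.assertEqual(legacy_shipping_cost(0, express=False), 5.0)


if __name__ == "__main__":
    unittest.main()
```

With that safety net in place, the function can be cleaned up or replaced a piece at a time, and the tests will flag any accidental change in behaviour along the way.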