Good software, in Humphrey’s view, “is usable, reliable, defect free, cost effective and maintainable. And software now is none of those things.”
A further thought on this. I’ve seen lists like this many times now. Often they even leave “maintainable” out – “fast, cheap and right” is the usual mantra.
But the one that almost everyone always ignores is extensibility.
Most software is built on pre-existing software. Throwing the existing code away and starting from scratch rarely happens. New versions are built on old versions. And newer versions built on those. Eventually the code becomes a hulking, tangled mess of special cases, exceptions, add-ons, hack-ins and brain-numbing control flow.
And so programmers get tempted by the “throw it away and start again” syndrome. And other programmers write articles on how that’s the worst thing you can possibly do.
Yet very few people seem to write articles on how to build your software in a way that deliberately facilitates building on top of it later.
Of course people think that Analysis and Design and Specifications and all those sorts of things are important. But this is different. Those things can only help you build what you already know you’re going to need. Once that’s worked out most software is then built in the “it works, we’re done” way I talked about earlier. Occasionally, it even gets refactored into something readable.
Here we’re talking about what Paul Graham refers to as Bottom Up Programming. Every time you need to write something you stop and think “I have to write X … hmmm … that’s quite tricky … but … it would really be quite easy if someone had already written Y and I could just build on that”. Then you go build Y (or if you’re writing in Perl, you go off to CPAN and discover that someone else has written it for you!). Of course, building Y should make you think “that would be quite easy if someone had already written Z”. And so on.
By the time you’ve worked your way back up the stack you’ll have the original “X” built, but you’ll also have a suite of helper code that will enable you to build many other components trivially later.
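To make that concrete, here’s a rough sketch of the shape this takes. It’s in Python rather than Perl, and every name in it is invented for illustration (there’s no real project behind it): X is a weekly sales report, Y is a generic text-table renderer, Z is a row formatter.

```python
# X would be tricky written as one monolithic function. But it's easy
# if someone has already written Y (render rows as a text table), and
# Y is easy if someone has already written Z (format one row).

def format_row(values, widths):
    """Z: pad each value to its column width and join with separators."""
    return " | ".join(str(v).ljust(w) for v, w in zip(values, widths))

def render_table(headers, rows):
    """Y: render headers and rows as a plain-text table, built on Z."""
    widths = [max(len(str(x)) for x in col) for col in zip(headers, *rows)]
    lines = [format_row(headers, widths),
             "-+-".join("-" * w for w in widths)]
    lines += [format_row(row, widths) for row in rows]
    return "\n".join(lines)

def weekly_sales_report(sales):
    """X: the thing we actually set out to write, now nearly trivial."""
    return "Weekly sales:\n" + render_table(("product", "units"), sales)

print(weekly_sales_report([("widgets", 42), ("gadgets", 7)]))
```

The point isn’t the table code itself: it’s that next month’s stock report, or customer report, is just another X that falls straight out of Y and Z.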
It’s often been noted in computer programming that the best programmers are an order of magnitude (or more) quicker than average programmers. But no-one ever explains why. I never quite grokked this until recently. The crucial point is that the difference isn’t obvious straight away. Give the super programmer and the normal programmer the same initial task, and there’ll probably not be that much difference. Occasionally the super programmer will even be slightly slower.
But ask them both to now build Phase 2 on top of their code, and watch the super programmer leave the average programmer in the dust. They’ll have built so many useful tools doing the first phase that if this new requirement is even vaguely similar to the first they’ll be done in no time — probably whilst the average programmer is still untangling the logic of the existing code to work out where it’s best to add the IF statements.
In the last year I’ve worked with numerous clients who wanted web sites built. They’ve known that what they were asking for now was just Phase 1, and that in 3 months’ time or so they’d want Phase 2. And then Phase 3. Not one of them seemed to realise that how Phase 1 was built would affect how much Phases 2, 3, 4, 5, etc. would cost. No-one asked about this when we were pitching for the job. None of them seemed to believe that a later phase of what they saw as similar complexity could be expected to be cheaper or quicker than any prior phase.
They assumed that if Phase 1 cost £20,000 and took a month, then 5 phases would cost £100,000 and take 5 months. Most of the companies pitching for the work probably quoted for it as if that were indeed true.
But if done right, the economics could really be closer to:
| Phase | Cost    | Duration |
|-------|---------|----------|
| 1     | £20,000 | 20 days  |
| 2     | £10,000 | 10 days  |
| 3     | £5,000  | 5 days   |
| 4     | £2,000  | 2 days   |
| 5     | £1,000  | 1 day    |
In total, they’d end up paying £38,000 instead of £100,000, which would presumably be very nice for them. But more significantly, in my view, they’d be finished in 2 months instead of 5. And each new phase they wanted to introduce could be done in a day instead of a month, at a fraction of the cost they’d originally expected. If they had the ability to keep coming up with improvements and new ideas for things to build, they could accelerate away from their competition at a phenomenal rate.
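If you want to check my sums, here they are as a tiny Python sketch. It uses nothing but the figures from the table above; the convergence observation at the end assumes the idealised “each phase costs half the last” pattern, which real projects will only ever approximate:

```python
# The figures from the table above: (cost in £, duration in days).
# A toy model of the idealised project, not data from a real one.
phases = [(20_000, 20), (10_000, 10), (5_000, 5), (2_000, 2), (1_000, 1)]

total_cost = sum(cost for cost, _ in phases)
total_days = sum(days for _, days in phases)
print(f"Total: £{total_cost:,} over {total_days} working days")
# -> Total: £38,000 over 38 working days

# Each phase costs roughly half the one before, so even an endless
# stream of such phases converges: 20,000 * (1 + 1/2 + 1/4 + ...)
# = 40,000. The whole future costs less than twice Phase 1.
```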
But virtually no-one really believes this is possible. The common wisdom is that systems get kludgier and dirtier over time, and that adding equivalent functionality always takes more time, never less.
At BlackStar, the first large system I built, this was certainly the case. We didn’t build bottom up, and we paid the price. Over time it took longer and longer to add new functionality. The code base became so brittle that every time we fixed a bug in one place, new bugs appeared somewhere else. So we introduced more stringent procedures. This, of course, made the quality of the code produced rise dramatically, but it slowed everything down even more.
We’re about to launch a major new project, potentially of comparable complexity to what we did at BlackStar. The question now is whether we can manage to reverse this trend. In four years’ time I want it to take significantly less time to add major new functionality than it does today.
I’ll try to document how we get on.