It doesn’t get any easier, you just get faster

Greg LeMond

As applied to software development: Software doesn’t get any better (seemingly faster, less buggy, less annoying, more reliable); it just does more. But:

You never have the wind with you – either it is against you or you’re having a good day

Daniel Behrman, The Man Who Loved Bicycles

As applied to software development: You don’t remember the times when everything “just worked” – except that those times unrealistically set your expectations. Bugs, therefore, are always unexpected and unwelcome.

Work expands so as to fill the time available for its completion.

Parkinson’s Law

As applied to software development (see also Wirth’s Law): Application designs expand to use all the hardware resources available in their day; software application size is constrained only by available hardware. Therefore, software appears to perform no better than it ever did.

Why is Everything (Software) so Terrible?

I love that Greg LeMond quote. It succinctly points out an unstated assumption: “It will get easier, and I will go faster.” No – you only get one of those.

Experienced users, especially those of us who are software developers ourselves (and who presumably would know better), carry around a similar assumption: Software should be better. This is not how things ought to be. Everything is terrible.

I feel this way too. We know it’s possible for software to be better, so why, on average, is it so terrible? Just when a great version comes out, it’s followed by a dud. Remember Snow Leopard? Yeah, that was good. It’s been a while.

Now let’s be clear: I’m talking about perceptions and experience. I’m not saying that we’re not doing more with our many more lines of code and vastly more powerful hardware than we did back in 1985. As users, there’s no way we could have had access to videos and music online, not to mention smartphones, without everything getting better every year. As developers, containerization and cloud platforms let us scale online services with an ease we couldn’t have dreamed of ten or twenty years ago. (A nice problem to have, as they say.)

With that out of the way, here’s my hypothesis: Software quality isn’t determined by the resources available to throw at development, nor by the language or framework used, but by the maximum defect rate users will tolerate.

Now you may say that this makes little sense because a large team can debug and test faster than a small team. That’s true – though less true than many would like to believe – but what’s also true is that no matter the size of the team, what slows them down is defective software releases. The team can’t move forward until their latest release functions acceptably well.

The famous essay Worse is Better puts forward two opposing views of software development: the “Right Way” (or the “MIT school”) and “Worse is Better” (AKA the “New Jersey school”). The “Right Way” is the view that software ought to be built with as much complexity as necessary to solve the problem correctly; “Worse is Better” emphasizes simplicity of implementation over correctness and simplicity of use.

In the essay and in responses to it, the question seems to be “What’s the correct approach to building software?” Evidence strongly suggests that, right or wrong, the “New Jersey school” has ended up as the approach that was actually taken to build most of what we use today.

Given that “Worse is Better” is what has succeeded in practice, a better question would be to ask why this is.

Few teams will move slowly enough to produce near-perfect software. Making software perfect is boring. It’s expensive. And in the end, customers aren’t paying for perfection. You can wait for the perfect piece of software or buy “good enough” today. Those companies trying to make the perfect software? Everyone bought their competitors’ products instead.

New and Old Software is Equally Terrible

Software quality for actively developed products – especially in long-running projects – naturally gravitates to the least acceptable level. Furthermore, the market’s mechanism for incorporating user feedback and discovering this limit is coarse-grained, with the result that only the most serious issues reliably get prompt attention.

The other side of this coin is that for a “finished” application to stay in use long-term (through new operating system releases, new generations of hardware, etc.), a minimum of resources is needed to maintain it; that minimum is determined by the defect rate and user acceptance / complaints, unless the company wants to go out of business.

Apple’s macOS and iOS illustrate this principle: with essentially infinite resources to throw at the problem, Apple still puts out buggy releases – so buggy, in fact, that a few years ago it seemed they had to slow their pace and ship twice as many iOS bug-fix releases.

Microsoft and Windows Vista are another example, and really so is the whole history of Excel. No matter how many years they have had to perfect it, Excel still has known bugs.

In both the “Develop at maximum speed” and the “Life-support only” scenarios, the amount of resources devoted to software development is constrained by the user experience as it directly relates to the business.

Go too fast, and the resources expended become a net negative – or at least exponentially more expensive – as every extra dollar spent on developer time actually contributes more bugs and more annoyed users.

And on the flip side, spend less than the bare minimum and users will abandon your platform out of frustration.

All of this is pretty obvious, right? But here’s the result: In either case the product pushes the boundaries of what’s acceptable to users. So everything seems terrible. If things were any worse they’d cease to be viable products; they’d disappear from the market, to be replaced by something else roughly as terrible as the last thing.

One could argue these two situations simplify to one principle: Resources expended on development or maintenance of a product approach only the quantity needed to acquire or retain customers, rather than some uncompensated aesthetic measure of quality. Put this way, it’s a pretty self-evident observation. You get what you pay for.

I think the minimum / maximum resource expenditure boundary model is a useful framing, however. It explains why, whether a product is new and exciting or old and dull, it often seems terrible, or needlessly defective at best.

Competitive Markets both Help and Hurt Quality

There’s one nuance here to consider: the presence of competition will tend to push development forward on new products, up to the limit of acceptable quality, whereas for the old, dull product it’s the presence of competition that will keep its quality above the absolute minimum acceptable level. I’m sure you can come up with examples of monopolies on life support.

Competitive Market    Bleeding Edge    Maintenance Mode
Yes                   Terrible         Seems okay
No                    Seems good       Terrible

Adjust Your Expectations

And just as competition, or the lack thereof, materially affects quality, it affects perception as well. Without anything to compare a product to directly, a customer may just be happy the product even exists. Once they can compare and contrast, shortcomings are far more obvious.

The operating systems for the iPhone and the original Macintosh computers are perfect examples. At the beginning these products had no direct competition. Sure, there were other computers besides the Mac, but nothing near the price that had a graphical user interface and was so easy to use. Apple could afford to make it exactly how they wanted, so it stood out from other computers. Customers were amazed they could drag and drop at all, not expecting too much more.

Similarly, the iPhone was so different from the Nokia and HTC smartphones of 2007 that it wasn’t in an arms race; it could focus on being itself and keeping software quality high. While the Apple products undoubtedly had bugs, customers had nothing to directly compare them with. They didn’t demand so much from them.

The market for software prioritizes new capabilities over quality, and many customers aren’t aware of the fact, so they are constantly unhappy – but not so unhappy that they’ll pay for better quality.

What throws off our expectations is that from time to time a product actually exists with more than the bare minimum of quality; but the natural stable state of quality is at the “Life-support” / “Maximum growth” boundaries, where quality is minimized. Given enough time, all software will end up there.

An exception that proves the rule might be software for very mature hardware platforms: The very best games for consoles are those that are built after the hardware has been in the field long enough for programmers to learn lots of tricks to squeeze out every possible drop of performance. See for example games for the original Atari 2600 like Space Shuttle. Just when programmers have learned to make really good software for a platform, the hardware becomes obsolete and those skills become irrelevant.

We fondly remember those few perfect versions of software, somewhat like a gambler remembers those times he won at the slots even though most times he lost. Intermittent positive reinforcement can really mess us up.

Uncompetitive Markets Reduce Software Quality

To be fair, maybe customers can’t find better quality even if they want to pay. Other types of products have all sorts of price points for levels of quality: clothes, food, etc. But outside of a few niche markets you don’t find this in software.

Some of the most expensive software is the worst, probably because it’s only as good as it absolutely has to be without getting replaced – and replacement is a tough nut for competitors to crack when purchasing (think PeopleSoft, SAP, Oracle) happens infrequently. Clearly these are cases where the daily users aren’t the customers, which partly explains the situation.

In general the market can deliver new features or high quality, but not both, except at a very high premium. And hardly anyone is really in the market for high-quality software: even those you’d expect to behave differently (car manufacturers, for instance) don’t put a premium on quality software. NASA and some aerospace, medical, and energy companies are the exceptions.

Imperfect Information

At the bleeding edge of capabilities it’s competition that drives quality down. This is rather perverse, since if you asked most software users they’d say they prioritize reliability over new features; but they do not behave this way, partly because they can only see promises of new features, while future problems remain unseen.

Features (the iPhone, for instance) are the attraction to paying for an upgrade; bugs, since they can’t be discovered until you own the product, are consistently discounted before purchase. You know you’re probably not stuck with the bugs forever, since Apple pushes out iOS updates regularly, so instead of demanding your money back you wait for updates.

Most software these days has online updating, which should mean users suffer bugs for less time than in the past. And yet the updates introduce about as many new bugs as they fix. Why?

Software Development is Terrible

The mental model so many of us as developers and project managers carry around (if unconsciously) goes like this:

Now that we have more developers on the team we can add those features on our wish-list so that we stand out from our competition.

The way we’re running this service is terrible; the new way is what everyone else is doing and it will relieve our pain.

Once we re-write this feature properly it won’t be such a headache and everything will be better after that.

Once we update the web framework we’re using, the app will be properly designed and we can breathe a sigh of relief.

The re-write should be done in six weeks or less. Then we’ll have time to address our backlog of features.

Hey, it’s beliefs like these which keep us going, no matter how false. Only the first one – “… so that we stand out from our competition” – was right. We got more users. Because we added more features. Which had bugs that annoy the users. And we added maintenance costs. And we had to slow down and fix the bugs. And we had to update the web framework to reduce the maintenance burdens introduced by the new features. And then we had to add new features from the backlog that built up. At this point we badly need to update our now out-of-date design on the front end.

And that new deployment method that was going to make everything better? We’re using it now, leaving us plenty of time to notice other problems, like turning over a rock to see the bugs underneath.

The treadmill is familiar to us, of course. It’s fine. That’s life. It should be clear, however, that it’s not going to stop. We’re never getting off.

So how does the development treadmill explain why everything is terrible? Terrible is a perception. The reality of the treadmill doesn’t match the model in our head of “Once we finish X everything will be better / easier and we can relax.”

Sure, everything may get easier for a short time after achieving one of these milestones, like updating our web framework. Then the team faces a question: “What do we do with this time on our hands?” (Not that this has happened often.) Answer: “Add new features from our wish-list” or “Begin that new app we never had time for.” (Let’s hope it isn’t “Lay off everyone we no longer need.”)

It may well be that the original application really does become easy to keep running. The subjective experience of the team never changes, however, because they’ve jumped right back onto the treadmill with the new products they’re developing. They don’t notice the smooth running application, only the problems filling their time.

What we shouldn’t do is read the situation as a crisis or an indication of someone screwing up. Screw-ups happen, bad code happens, poor design decisions happen (and should be avoided, of course). They should get fixed, definitely. Fixing everything that’s obviously broken won’t change our experience in the long term, however.

The development treadmill doesn’t result from out-of-the-ordinary bad practices, and no amount of fixing bad practices and poor design choices will get us off the treadmill. At best we’ll make more progress in feature development – which is what the business wants – but subjectively we’ll still feel like we’re running in place.

It’s a nearly trivial observation that the less time you spend on maintenance of existing software, the more time you have to make new software. The more new software you build, the more time you’ll spend on maintenance. I don’t know what the ratio of support time to creation time is – it’s so different for every type of software and user base – but it’s a limiting factor to growth.
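To make that limiting factor concrete, here’s a toy sketch – all the numbers (a 200-hour weekly budget, 80 hours to build a feature, 2 hours a week of maintenance per shipped feature) are invented for illustration, not measured from any real project. It shows how new-feature output stalls as accumulated maintenance eats the available time.

```python
# Toy model: recurring maintenance from shipped features eventually consumes
# the whole development budget. All numbers are invented for illustration.

HOURS_PER_WEEK = 200      # total developer-hours available each week (assumed)
BUILD_COST = 80           # hours to build one new feature (assumed)
MAINT_PER_FEATURE = 2     # recurring hours/week each shipped feature costs (assumed)

features_shipped = 0.0
for week in range(1, 201):
    maintenance = features_shipped * MAINT_PER_FEATURE
    free_hours = max(HOURS_PER_WEEK - maintenance, 0)
    features_shipped += free_hours / BUILD_COST  # fractional progress on new work
    if week % 50 == 0:
        print(f"week {week:3d}: {features_shipped:6.1f} features shipped, "
              f"{maintenance:5.1f} h/week going to maintenance")

# New-feature capacity stalls near HOURS_PER_WEEK / MAINT_PER_FEATURE shipped
# features (here 100), once maintenance absorbs the entire weekly budget.
```

Tweak the invented numbers and the ceiling moves, but it never goes away: new-feature output always tails off once recurring maintenance absorbs the weekly budget, no matter how big the team is.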

We should measure past success by the number of working features delivered and the customers paying for (or, if you’re a non-profit, successfully using) your software products.

When considering the future, decisions should hinge on what will secure and increase your income (more customers, higher usage). This probably means doing what’s needed to continue delivering features and keeping currently valuable features working. You do this by minimizing the risk your current code base / infrastructure poses to future operations and growth, and that’s where refactoring, rearchitecting, and testing come in: not to make your life as a development team better, but to keep it going.