Editorial note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. While you’re there, have a look at CodeIt.Right, which can help you with automated code reviews.
How many development shops do you know that complain about having too much time on their hands? Man, if only we had more to do. Then we wouldn’t feel bored between completing the perfect design and shipping to production … said no software shop, ever. Software proliferates far too quickly for that attitude ever to take root.
This happens in all sorts of ways. Commonly, the business or the market exerts pressure to ship. When you fall behind, your competitors step in. Other times, you have the careers and reputations of managers, directors, and executives on the line. They’ve promised something to someone and they rely on the team to deliver. Or perhaps the software developers apply this drive and pressure themselves. They get into a rhythm and want to deliver new features and capabilities at a frantic pace.
Whatever the exact mechanism, software tends to balloon outward at a breakneck pace. And then quality scrambles to keep up.
Software Grows via Predictable Mechanisms
While the motivation for growth may remain nebulous, the mechanisms for that growth do not. Let's take a look at how a codebase accumulates change. I'll order these by pace, if you will, from slowest to fastest.
- Pure maintenance mode, in SDLC parlance.
- Feature addition to existing products.
- Major development initiatives going as planned.
- Crunches (death marches).
- Copy/paste programming.
- Code generation.
Of course, you could offer variants on these themes, and they aren't mutually exclusive. Nevertheless, the idea stands. Loosely speaking, you add code sparingly to legacy codebases in support mode. And then the pace increases until you get so fast that you literally write programs to write your programs.
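To make that last item concrete, here's a minimal sketch of code generation in Python. The class name, field list, and overall shape are hypothetical, chosen purely for illustration; real generators (ORMs, scaffolding tools, compilers of DSLs) are far more elaborate, but the principle is the same: one program emits the source of another.

```python
# A hypothetical generator: given a list of field names, emit the
# source code for a class with those fields.
fields = ["name", "email", "phone"]

lines = ["class Customer:", "    def __init__(self):"]
for field in fields:
    lines.append(f"        self.{field} = None")

generated = "\n".join(lines)
print(generated)

# The generated source can be executed directly. This is exactly
# where verification tends to fall away: nobody reviews what the
# generator emitted before it runs.
namespace = {}
exec(generated, namespace)
customer = namespace["Customer"]()
```

Notice that the emitted code never passes in front of human eyes unless someone deliberately inspects `generated`, which is precisely the quality gap described below.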
The Quality Conundrum
Now, think of this in another way. As you go through the list above, consider what quality control measures tend to look like. Specifically, they tend to vary inversely with the speed.
Even in a legacy codebase, fixes tend to involve a good bit of testing for fear of breaking production customers. We treat things in production carefully. But during major or greenfield projects, we might let that slip a little, in the throes of productivity. Don’t worry — we’ll totally do it later.
But during a death march? Pff. Forget it. When you slog along like that, tons of defects in production qualify as a good problem to have. Hey, you're in production!
And it gets even worse with the last two items on my bulleted list. I’ve observed that the sorts of shops and devs that value copy/paste programming don’t tend to worry a lot about verification and quality. Does it compile? Ship it. And by the time you get to code generation, the problem becomes simply daunting. You’ll assume that the tool knows what it’s doing and move on to other things.
As we go faster, we tend to spare fewer thoughts for quality. Usually this happens because of time pressure. So ironically, when software grows the fastest, we tend to check it the least.