Walking the Line between Fanboy and Luddite

Are Unit Tests Overused?

I was reading this blog post by Andrew Hunter recently, in which he poses the question of whether or not unit tests are overused. The core of his argument is a rather nuanced one that seems to hinge on two main points of wariness:

  1. Developers may view unit tests as a golden hammer and ignore other important forms of verification, such as integration tests.
  2. Unit tests that are too finely grained cause maintenance problems by breaking encapsulation.

I think the former is a great point, but I’m not really sold on the latter, since I’d argue that the thing to do is to create a design where you don’t have this problem in the first place (i.e. the problem isn’t the number of tests written, but the design of the system). But I don’t have any serious qualms with the point of the article, and I do like reading things like this from time to time as an ardent unit test proponent, because the second you stop questioning the merit of your own practices is the same second you become the “because I’ve always done it that way, so get off my lawn” guy. Once I finished reading the article and scrolled down to the comments, however, I noticed an interesting trend.
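
To make the design point concrete, here’s a minimal sketch using Python’s built-in unittest module and a hypothetical ShoppingCart class invented purely for illustration (nothing from the original post). The first test pins down observable behavior through the public interface; the second reaches into private state and is exactly the sort of finely grained test that breaks encapsulation and causes maintenance pain:

    import unittest

    class ShoppingCart:
        # Hypothetical class, made up for this example.
        def __init__(self):
            self._items = []  # internal detail; free to change during refactoring

        def add(self, name, price):
            self._items.append((name, price))

        def total(self):
            return sum(price for _, price in self._items)

    class CartBehaviorTest(unittest.TestCase):
        # Exercises the public contract; survives a rewrite of the internals.
        def test_total_reflects_added_items(self):
            cart = ShoppingCart()
            cart.add("widget", 10.0)
            cart.add("gadget", 5.0)
            self.assertEqual(15.0, cart.total())

    class CartInternalsTest(unittest.TestCase):
        # Couples to the private list; breaks if storage changes to, say, a dict.
        def test_items_list_has_two_entries(self):
            cart = ShoppingCart()
            cart.add("widget", 10.0)
            cart.add("gadget", 5.0)
            self.assertEqual(2, len(cart._items))

    if __name__ == "__main__":
        unittest.main()

If a suite is full of the second kind of test, my fix would be to test against the public contract (and perhaps rethink the design), not to write fewer tests overall.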

From the Peanut Gallery

There were a total of 11 comments on the article. One of them was a complete non sequitur (probably spam), so let’s shave the ranks down to 10 comments, which conveniently makes the percentages easy. Four readers agreed with the author’s nuanced “we’re overdoing it” take, four seemed to take issue in favor of unit testing, and two were essentially rants against unit testing that seemed to mistake the author’s questioning of the extent to which unit tests should be taken for sympathy with the position that they aren’t necessary. And thus, among these blog-reading developers (who, many would argue, are the most informed developers), 40% defend TDD/unit testing wholesale, 40% are generally amenable to it but think that some take it too far, and 20% are (angrily) opposed to the practice.

Taking them at face value, zero percent of that 20% demographic has experience writing unit tests. One says, “In over 10 years of development I can count the number of unit tests I’ve written on one hand,” and another says, “I event[sic] tried to learn how TDD could help me.” Of the others, it appears that 100% have experience writing unit tests, though with some it is clearer than with others; none of them cops to having zero experience the way the detractors do. So among people with experience writing unit tests, there is a 50/50 split as to whether TDD practitioners have it right or whether they should write fewer unit tests and more integration tests. Among people with no experience writing unit tests, there is a 100/0 split as to whether or not writing unit tests is stupid.

That’s all just by the numbers, but if you actually delve into the ‘logic’ fueling the anti-testing posts, it’s garden-variety fallacy. One argument takes the form of a simple false syllogism: “Guy X did TDD and guy X sucked, therefore TDD sucks.” The other takes the form of an argument from ignorance: “I tried to learn TDD and didn’t like it/failed at it, ergo there is no benefit to it.” (I say “failed” because, in spite of its brevity, the post contains a fundamental misunderstanding of the ‘rules’ of TDD, so there was either a failure to understand or the argument is also a straw man.)
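
For context, the ‘rules’ in question are essentially red/green/refactor: write a small failing test first, write just enough production code to make it pass, then clean up while keeping the test passing. Here’s a minimal illustration in Python (the slugify function is made up for the example and has nothing to do with the post or its comments):

    import unittest

    # Red: the test is written first and states the desired behavior.
    class SlugifyTest(unittest.TestCase):
        def test_replaces_spaces_with_hyphens(self):
            self.assertEqual("hello-world", slugify("hello world"))

    # Green: just enough production code to make the test pass.
    def slugify(text):
        return text.replace(" ", "-")

    # Refactor: with the test green, clean up the code while keeping it green.

    if __name__ == "__main__":
        unittest.main()

The essential rule is simply that the failing test exists before the production code does.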

To play dime-store psychologist for a moment, it seems pretty likely to me that statements and opinions like these are attempts to rationalize one’s choices, and that the posters protest too much, methinks. Angry, exasperated disparagement of automated testing by those who have never tried it likely hides a bit of internal insecurity that the overwhelming consensus in the developer world is right and they are wrong. After all, this blog post and the subsequent comments are sort of like a group of chefs debating whether something would taste better with two teaspoons of sugar or three when along comes an admitted non-chef who says, “I hate sugar because this guy I don’t like puts it in his coffee and I’ve never eaten it anyway, so it can’t be any good.” The chefs probably look at him with a suffering, bemused expression and give out with an “Aaannnnywho…”

Skeptic or Luddite, Informed or Fanboy?

Another way that one might describe this situation, passing judgment on both the development industry as a whole and on these commenters, is to label the commenters the developer equivalent of Luddites. After all, the debate as to the merits of unit testing is widely considered over, and most of those who don’t do it say things like “I’d like to start” or else make up weird excuses. Unit testing is The Way Forward (caps intentional), and commenters like these are standing around, wishing for a simpler time when their approach was valued and less was asked of them.

But is that really fair? I mean, if you read my posts, my opinion on the merits of TDD and automated verification in general is no secret, but there’s an interesting issue at play here. Are people with attitudes like this really Luddites, or are they just conservative/skeptical types providing a counterpoint to the early-adopter types only ever interested in the new hotness of the week? You know the types that I mean – the ones who think TDD was so last year and BDD was so last month, and now it’s really all about YDD, which is so new and hot that nobody, including those doing it, knows what the Y stands for yet.

So in drawing a contrast between these two roughly described archetypes, how are we to distinguish between “this might be a blip or passing fad” and “this seems to be a new way of doing things that has real value”? Walk too far on one side of the line and you risk being left behind or not taken seriously (and subsequently leaving angry justifications in blog comments); walk too far on the other side and you’ll engage in counterproductive thrashing and tilting at windmills, becoming a jack of all new trades while mastering none. Here are some things that I do personally in my attempt to walk this fine line, and that I’d offer as advice to you if you’re interested:

  1. Pick out some successful developers/bloggers/authors/experts that you admire/respect and scan their opinions on things, when offered. It may seem a little “follow the herd,” but I view it as being like asking friends for movie recommendations rather than going to see every movie that comes out.
  2. Don’t adopt any new practice/approach/etc. unless you can articulate a problem that it solves for you.
  3. Limit your adoption bandwidth: if you’ve decided to adopt a new persistence model, such as a NoSQL alternative to an RDBMS, you might not also want to do it in a new language (assuming that you’re doing this on someone else’s dime, anyway).
  4. Let others kick the tires a bit and then try it out. This hedges your bet a little, letting you see whether something fizzles and dies before you commit to it.
  5. If you decide not to adopt something that seems new and hot, keep your finger on the pulse of it and read about its progress and the pains and joys of its adopters. It is possible to pass on adopting something without avoiding it or refusing to learn about it.
  6. If you don’t see the value in something or you simply don’t have time/interest in it, don’t presume to think it has no value.
  7. Go to shows/conferences/talks/etc. to see what all the fuss is about. This is a low-impact, low-commitment way to evaluate something firsthand.

If you have suggestions for how to approach this balancing act, I’d be interested to hear them as well, because while I think there’s definitely some margin for error, it’s important, if not easy, to avoid being either the person whose life is a never-ending string of partially completed projects or the person who has had the same year of COBOL programming experience for each of the past 30 years.