DaedTech

Stories about Software

By Erik Dietrich

Get Good at Testing Your Own Software

Editorial Note: This is a post that I originally wrote for the Infragistics blog.  I’ve decided that I’ll only cross-post here when I can link to the canonical version and give you the entire article text.  That way, you can read it in its entirety from your feed reader or on my site, without having to click through to finish.  If you do like the post, though, please consider clicking on the original to show the post some love on its site and to give a like or a share over there.

There’s a conventional wisdom that says software developers can’t test their own code.  I think it’s really more intended to say that you can’t meaningfully test the behavior of software that you’ve written to behave a certain way.  The reasoning is simple enough.  If you write code with the happy path in mind, you’ll always navigate the happy path when testing it, being hoodwinked by a form of confirmation bias.


To put it more concretely, imagine that you write a piece of code that reads a spreadsheet, tabulates sums and averages, and reports these to a user.  As you build out this little application, one of the first things you’ll do is get it successfully reading the file so that you can write the other parts of the application that depend on this prerequisite.  Over the course of your development, you’ll be less likely to test all of the things that can go wrong with reading the spreadsheet because you’ll develop a kind of muscle memory for getting the import right as you move on to test the averages and sums on which you’re concentrating.  You won’t think, “What if the columns are switched around?” or “What if I pass in a Word document instead of a spreadsheet?”

Because of this effect, we’re scolded not to test our own software.  That’s why QA departments exist.  They have the proper perspective and separation so as not to be blinded by their knowledge of how things are supposed to go.  But does this really mean that you can’t test your own software?  It may be that others are more naturally suited to do it, but you can certainly work to reduce the size and scope of your own blind spot so that you can be more effective in situations where circumstances press you into testing your own code.  You might be doing a passion project on the side or be the only technical member of a startup – you won’t always have a choice.

Let’s take a look at some techniques that will help you be more effective at testing software, whether written by you or someone else.  These are actual approaches that you can practice and get better at.

Exploratory Testing

Exploratory testing is the idea of finding creative, weird ways to break the software.  One of the things you’ll find is that users have an amazing capacity to use software in some of the most improbable and, frankly, stupid ways that you could ever imagine.  “I hit save and then poured water into the disk drive, and the save didn’t work.”

You want to cultivate the ability to dream up crazy things that users may do and ask yourself what would happen.  A great way to do this is to observe non-savvy users using your software or, really, any software.  They’ll do weird and unexpected things – the kind of things you wouldn’t – and you can make note of them and use these as ideas for things to do to your own software.  Visit a forum for QA folks or user support people to vent about the dumb things they’ve encountered, and use those.  Build an inventory that you can launch at your stuff.
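One way to make that inventory concrete is to keep a reusable list of pathological inputs and hurl all of them at each new feature.  The sketch below assumes the spreadsheet example from earlier in the post; `parse_spreadsheet` and the input list are illustrative stand-ins, not a real library:

```python
# A sketch of an "inventory" of weird inputs to launch at a feature.
# parse_spreadsheet() is a toy stand-in, not a real API.

WEIRD_INPUTS = [
    b"",                                     # empty file
    b"\xd0\xcf\x11\xe0",                     # a Word document renamed .csv
    "h\u00e9ader,se\u00f1or\n1,2\n".encode("utf-8"),  # non-ASCII headers
    b"b,a\n2,1\n",                           # columns swapped around
]

def parse_spreadsheet(data: bytes) -> list[list[str]]:
    """Toy parser: decode and split; real code would validate far more."""
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        raise ValueError("not a text spreadsheet")
    return [line.split(",") for line in text.splitlines() if line]

# Launch the inventory.  The point is that nothing fails *unexpectedly*:
# a clean, deliberate rejection (ValueError) counts as a pass.
for data in WEIRD_INPUTS:
    try:
        parse_spreadsheet(data)
    except ValueError:
        pass
```

The inventory grows over time as you collect new horror stories from QA forums and user support, and it costs nothing to re-run against the next feature.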

Pitfall Testing

In addition to developing a repertoire of bone-headed usage scenarios to throw at your software, you should also understand common mistakes that will be made by regular users.  These are not the kinds of things that will make you do a double take but rather the kinds of things that happen all the time and would surprise no one.

Did a user type text into the phone number field?  Did the user accidentally click “pay” four times in a row?  Any fields left blank?  These are the kinds of common software errors that you should catalog and get in the habit of throwing at your own software.  If you practice regularly, it will become second nature.
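This habit can be mechanized: keep a list of everyday mistakes and assert that the software handles each one gracefully.  The sketch below is a minimal illustration; `validate_order` and its rules are hypothetical stand-ins, not a real API:

```python
# A sketch of "pitfall" checks against a toy order-form validator.
# validate_order() and its rules are illustrative, not a real library.

def validate_order(form: dict) -> list[str]:
    """Return error messages for a hypothetical order form."""
    errors = []
    phone = form.get("phone", "")
    if not phone.strip():
        errors.append("phone is blank")
    elif not phone.replace("-", "").replace(" ", "").isdigit():
        errors.append("phone contains letters")
    if not form.get("name", "").strip():
        errors.append("name is blank")
    return errors

# Everyday mistakes: text in the phone field, blank fields, and so on.
# Each pair is (input, expected complaint).
pitfalls = [
    ({"name": "Ada", "phone": "call me"}, "phone contains letters"),
    ({"name": "Ada", "phone": ""}, "phone is blank"),
    ({"name": "", "phone": "555-0100"}, "name is blank"),
]

for form, expected in pitfalls:
    assert expected in validate_order(form), (form, expected)
```

A scenario like the quadruple “pay” click would get the same treatment: submit the same request several times and assert that only one charge results.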

Reasoning About Edge Cases

Edge cases are subtly different from common pitfalls.  Edge cases concern the way your software behaves around specific values that are meaningful to your code.  For instance, in our spreadsheet example, perhaps you’ve designed the software to handle a maximum number of lines in the spreadsheet input.  If you accept 10,000 lines, get in the habit of testing 9,999, 10,000, and 10,001 lines to see how it behaves.  If it gets those three right, it’s exceedingly likely to get 4,200 and 55,340 right.

Picking edge cases gets you the most bang for your buck.  You’ll get in the habit of locating the greatest number of possible bugs using the least amount of effort.
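Boundary values like these are easy to script.  Here’s a minimal sketch, assuming a hypothetical 10,000-line limit like the one above; `accepts` is an illustrative stand-in for whatever check your code performs:

```python
# Boundary-value sketch, assuming a hypothetical 10,000-line limit.
# accepts() is a stand-in for the real acceptance check.

MAX_LINES = 10_000

def accepts(line_count: int) -> bool:
    """Hypothetical check: reject spreadsheets over the line limit."""
    return line_count <= MAX_LINES

# Just below, at, and just above the boundary -- the values most likely
# to expose an off-by-one mistake (e.g. '<' where '<=' was intended).
for lines, expected in {9_999: True, 10_000: True, 10_001: False}.items():
    assert accepts(lines) == expected, lines
```

Three cheap checks cover the spot where off-by-one bugs cluster, which is exactly the bang-for-your-buck trade the section describes.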

Helpful, Not Infallible

Building up an arsenal of things to throw at your software will make you more effective at testing your own stuff.  This is a valuable skill and one you should develop.  But, at the end of the day, there’s no substitute for a second set of eyes on your work.  Use the techniques from this post as a complement to having others test it – not a substitute.

2 Comments
Kevin O'Shaughnessy
8 years ago

All developers test their code, at least to some extent. The question is whether that testing work should be recognised or duplicated by QA?

Erik Dietrich
8 years ago

Off the top, I’d say my preference would be to have the testing efforts both automated and transparent. So, QA can see what sorts of tests the developers have run (coded) and build on them. I’ve always liked QA to be more of a creative, exploratory role.