Stories about Software


Make Yourself Big And Get More Job Offers

A few posts ago, I answered a reader question about getting around lowest common denominator hiring practices.  It’s a subject I’ve talked about before as well.  I addressed the reader’s question mainly as it pertains to the front end of the hiring process.  Coming out of that post, you should have a strategy for mostly avoiding recruiters and other low-knowledge screener types.

(If you want to submit a question, check out the “Ask Erik” form at the bottom right of the sidebar.)

But even if you secure the inside track to a conversation about a job instead of going through the tuple-focused HR machine, it’s still in your best interest to paint yourself in as advantageous a light as possible.  I’m going to plant my tongue slightly in cheek and refer to this process for the rest of the post as “making yourself big.”

The phrase is intended to evoke imagery from the animal kingdom.  Probably the most dramatic example of “make yourself big” is a puffer fish who, when threatened, balloons to many times its original size.  But it’s pretty common throughout the animal kingdom from birds with ruffled feathers to cats with puffy Halloween tails.  The animals react to adversity by creating the illusion (or reality, in the case of the puffer fish) of substantially more size.

A bird making itself big

When it comes to how you display yourself to prospective employers, you want to make yourself big.

Before offering some specific tips on how to do this, I’ll speak to the general philosophy and the rules of the employer-candidate matchmaking game.  And I mean that I’ll explain them in an honest, realpolitik sense.  But prior to doing that, I’ll digress briefly into a realpolitik explanation of, well, US politics.

Read More


Get Good at Testing Your Own Software

Editorial Note: This is a post that I originally wrote for the Infragistics blog.  I’ve decided that I’ll only cross post here when I can link canonical and give you the entire article text.  That way, you can read it in its entirety from your feed reader or on my site, without having to click through to finish.  If you do like the post, though, please consider clicking on the original to show the post some love on its site and to give a like or a share over there.

There’s a conventional wisdom that says software developers can’t test their own code.  I think it’s really more intended to say that you can’t meaningfully test the behavior of software that you’ve written to behave a certain way.  The reasoning is simple enough.  If you write code with the happy path in mind, you’ll always navigate the happy path when testing it, being hoodwinked by a form of confirmation bias.


To put it more concretely, imagine that you write a piece of code that reads a spreadsheet, tabulates sums and averages, and reports these to a user.  As you build out this little application, one of the first things you’ll do is get it successfully reading the file so that you can write the other parts of the application that depend on this prerequisite.  Over the course of your development, you’ll be less likely to test all of the things that can go wrong with reading the spreadsheet because you’ll develop kind of a muscle memory of getting the import right as you move on to test the averages and sums on which you’re concentrating.  You won’t think, “what if the columns are switched around” or “what if I pass in a Word document instead of a spreadsheet?”
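
To make that blind spot concrete, here is a minimal sketch of what the happy-path import might look like.  The SpreadsheetStats class, its Summarize method, and the treatment of the spreadsheet as a simple CSV are all hypothetical, invented for illustration; the point is how many unexamined assumptions even a tiny bit of happy-path code carries.

using System.IO;
using System.Linq;

// Hypothetical happy-path import, treating the spreadsheet as a simple CSV for brevity.
// Each comment flags an assumption that's easy to stop questioning once the import "just works."
public static class SpreadsheetStats
{
    public static (double Sum, double Average) Summarize(string path)
    {
        var values = File.ReadAllLines(path)       // What if it's a Word document, or missing entirely?
            .Skip(1)                               // What if there's no header row?
            .Select(line => line.Split(',')[1])    // What if the columns are switched around?
            .Select(double.Parse)                  // What if a cell is blank or non-numeric?
            .ToList();

        return (values.Sum(), values.Average());   // What if there are zero data rows?
    }
}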

Because of this effect, we’re scolded not to test our own software.  That’s why QA departments exist.  They have the proper perspective and separation so as not to be blinded by their knowledge of how things are supposed to go.  But does this really mean that you can’t test your own software?  It may be that others are more naturally suited to do it, but you can certainly work to reduce the size and scope of your own blind spot so that you can be more effective in situations where circumstances press you into testing your own code.  You might be doing a passion project on the side or be the only technical member of a startup – you won’t always have a choice.

Let’s take a look at some techniques that will help you be more effective at testing software, whether written by you or someone else.  These are actual approaches that you can practice and get better at.

Exploratory Testing

Exploratory testing is the idea of finding creative, weird ways to break the software.  One of the things you’ll find is that users have an amazing capacity to use software in some of the most improbable and, frankly, stupid ways that you could ever imagine.  “I hit save and then poured water into the disk drive, and the save didn’t work.”

You want to cultivate the ability to dream up crazy things that users may do and ask yourself what would happen.  A great way to do this is to observe non-savvy users using your software or, really, any software.  They’ll do weird and unexpected things – the kind of things you wouldn’t – and you can make note of them and use these as ideas for things to do to your own software.  Visit a forum where QA folks or user support people vent about the dumb things they’ve encountered, and use those, too.  Build an inventory that you can launch at your stuff.

Pitfall Testing

In addition to developing a repertoire of bone-headed usage scenarios to throw at your software, you should also understand common mistakes that will be made by regular users.  These are not the kinds of things that will make you do a double take but rather the kinds of things that happen all the time and would surprise no one.

Did a user type text into the phone number field?  Did the user accidentally click “pay” four times in a row?  Any fields left blank?  These are the kinds of common user errors that you should catalog and get in the habit of throwing at your own software.  If you practice regularly, it will become second nature.
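
If it helps to see the idea in code, here is a hedged sketch of what a few pitfall tests might look like, assuming NUnit and an entirely made-up CheckoutForm class with a Validate method, a SubmitPayment method, and a ChargeCount property.  The specific API is beside the point; the catalog of ordinary mistakes is what you want to accumulate.

using System.Linq;
using NUnit.Framework;

// Hypothetical form under test, stubbed just enough to make the sketch self-contained.
public class CheckoutForm
{
    public string PhoneNumber { get; set; } = "";
    public int ChargeCount { get; private set; }
    private bool _paymentSubmitted;

    public bool Validate() =>
        PhoneNumber.Length > 0 && PhoneNumber.All(c => char.IsDigit(c) || c == '-');

    public void SubmitPayment()
    {
        if (_paymentSubmitted) return;   // guard against double (or quadruple) clicks
        _paymentSubmitted = true;
        ChargeCount++;
    }
}

[TestFixture]
public class CheckoutFormPitfallTests
{
    [Test]
    public void Rejects_text_typed_into_the_phone_number_field()
    {
        var form = new CheckoutForm { PhoneNumber = "call me maybe" };
        Assert.That(form.Validate(), Is.False);
    }

    [Test]
    public void Rejects_a_blank_phone_number_field()
    {
        var form = new CheckoutForm { PhoneNumber = "" };
        Assert.That(form.Validate(), Is.False);
    }

    [Test]
    public void Charges_only_once_when_pay_is_clicked_four_times_in_a_row()
    {
        var form = new CheckoutForm { PhoneNumber = "555-867-5309" };
        for (int i = 0; i < 4; i++)
            form.SubmitPayment();
        Assert.That(form.ChargeCount, Is.EqualTo(1));
    }
}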

Reasoning About Edge Cases

Edge cases are subtly different from common pitfalls.  Edge cases are the way your software behaves around values that have specific meaning to your code.  For instance, in our spreadsheet example, perhaps you’ve designed the software to handle a maximum number of lines in the spreadsheet input.  If you accept 10,000 lines, get in the habit of testing 9,999, 10,000, and 10,001 lines to see how it behaves.  If it gets those three right, it’s exceedingly likely to get 4,200 and 55,340 right.

Picking edge cases gets you the most bang for your buck.  You’ll get in the habit of locating the greatest number of possible bugs using the least amount of effort.
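
As a concrete sketch, here is roughly what those boundary tests might look like, again assuming NUnit, with a made-up SpreadsheetImporter standing in for whatever code enforces the 10,000-line maximum.

using System.Linq;
using NUnit.Framework;

// Made-up importer, stubbed just enough to make the boundary visible.
public static class SpreadsheetImporter
{
    public const int MaxLines = 10000;

    public static bool TryImport(string[] lines, out string[] imported)
    {
        imported = lines;
        return lines.Length <= MaxLines;
    }
}

[TestFixture]
public class SpreadsheetImporterEdgeCaseTests
{
    [TestCase(9999, true)]    // just under the limit
    [TestCase(10000, true)]   // exactly at the limit
    [TestCase(10001, false)]  // just over the limit
    public void Import_honors_the_maximum_line_count(int lineCount, bool expectedToSucceed)
    {
        var lines = Enumerable.Repeat("1,2,3", lineCount).ToArray();

        bool succeeded = SpreadsheetImporter.TryImport(lines, out _);

        Assert.That(succeeded, Is.EqualTo(expectedToSucceed));
    }
}

Parameterizing the test this way makes it cheap to throw the “just under, at, and just over” trio at any other meaningful limit you discover.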

Helpful, Not Infallible

Building up an arsenal of things to throw at your software will make you more effective at testing your own stuff.  This is a valuable skill and one you should develop.  But, at the end of the day, there’s no substitute for a second set of eyes on your work.  Use the techniques from this post as a complement to having others test it – not a substitute.


Chess TDD 49: Castling is Hard

In this episode, I start to discover that castling is hard.  There’s unusual movement, the fact that two pieces move simultaneously, and the fact that you have to keep track of a lot of state.  People have even commented on this being a particularly hard facet of the game to implement.  Oh well, c’est la vie.  I think I came up with an idea for a helpful strategy during the course of this episode, even though the episode itself wasn’t wholly productive.

One thing to note if you’ve been following this series is that I’ve switched to Visual Studio 2015.  I brought CodeRush, NDepend, and NCrunch along for the ride, not to mention some of my preferred VS plugins.  If you want more information on stuff I use, check out my resources page.

What I accomplish in this clip:

  • Fixed castling implementation to the long side of the board.
  • Got a little more organized in Trello.
  • Started on implementation of not allowing castling when pieces have moved.

Here are some lessons to take away:

  • When you need to correct a mistake or bug, make sure you start with a red test that exposes the mistake (there’s a small sketch of this after the list).
  • It’s okay to have a situation where making a test you’ve altered green makes others red.  If those tests are now wrong or out of date, this gives you basically a checklist of the tests you need to fix.
  • If you consider yourself advanced enough to skip a step, it’s still always possible that you’re making a mistake.  TDD is all about micro-hypotheses and verification of your understanding of the code.  If you find yourself being wrong, slow down and get back to basics.
  • No matter how long you’ve been at this, you’ll still make mistakes, especially if you get a little fast and loose.
  • If you write a test to confirm your understanding of behavior in a certain context, I recommend leaving it in.  If you were wondering about that behavior, chances are someone else will, later, as well.
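
To make the first bullet about red tests concrete, here is a small, hypothetical example (not code from the series): when you discover a bug, the fix starts with a test that goes red against the current, broken implementation, and only then do you change the production code to make it green.

using NUnit.Framework;

// Hypothetical production code with a deliberate off-by-one bug in the clamp.
public static class MoveCounter
{
    public static int ClampToBoard(int file) =>
        file > 8 ? 7 : file;   // bug: should clamp to 8, not 7
}

[TestFixture]
public class MoveCounterTests
{
    // Written first, this test fails (goes red) against the buggy code above,
    // which proves it actually exposes the mistake before any fix happens.
    [Test]
    public void Clamps_a_too_large_file_to_the_edge_of_the_board()
    {
        Assert.That(MoveCounter.ClampToBoard(9), Is.EqualTo(8));
    }
}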


Prediction Markets for Software Estimates

The ongoing kerfuffle over the “No Estimates” movement is surreal to me.  So before I get to prediction markets for software estimates, I’ll discuss the surreal a bit. Instead of attempting to describe it, which, I can only imagine, would be meta-surreal, I’ll make use of an allegory of sorts.

No Wedding Estimates

Imagine that wedding planners have long struggled with an important issue.  For years and years, their clients have asked them to help pick a date on which it would not rain so that they could have outdoor weddings.  And, for years and years, their predictions have been spotty at best, resulting in irate clients, wet formal wear, and general heartburn.

In all that time, wedding planners have pored over weather patterns and diligently studied farmers’ almanacs.  And yet, the predictions did not substantially improve.  It got so bad that a group of upstart wedding planners met one year in the mountains and authored a document called “The Contingency Manifesto.”  This resulted in an important advance in wedding planning for outdoor weddings: reserving a backup plan not dependent on the weather.


And yet, not all was fixed.  People still wanted outdoor weddings, and continued to be disappointed when it rained, contingency notwithstanding.  They still demanded wedding planners help them figure out months and months in advance whether or not it would rain on a particular day.  And, this is understandable after all — weddings are important.

Quite recently, a group of wedding planners emerged under the hashtag #NoOutdoorWeddings.  Their message was simple: “we can’t predict the weather, so have your wedding inside and stop whining.”  The message was, of course, music to the ears of frustrated wedding planners everywhere.  But some planners and most clients balked.  “How can you tell the clients not to have weddings outside?  They’re the ones paying, so it’s our obligation to facilitate their wishes!”

This schism, it seemed, was irreparable.  And surreal.

  • Why is it necessary for wedding planners all to agree?  Can’t the ones that don’t want to deal with weather contingency simply opt out, while the ones who do keep offering it?
  • Why can’t people figure out that trying to predict a chaotic system like the weather is a fool’s errand?
  • Why would you ask a wedding planner to make a prediction that could easily be influenced by his own interest?
  • Frankly, if one person is dumb enough to ask another to predict the weather on a day 9 months from now, and the other person is dumb enough to do it, can’t we just agree that the two deserve each other and the inevitable lawsuit?

How Estimation Actually Works Today

Read More


Chess TDD 48: Getting Started with Castling

Back in the saddle and making these regularly once again.  In this episode, I start implementing castling.  This proves to be something of a challenge because I’d gotten into such a routine of adding acceptance tests for the Pawn feature and changing mainly the Board class.  Here I’m in a different set of acceptance tests and changing a different set of production code, so it took a bit to get my bearings.  Castling, like en passant, is also a non-trivial edge case that deviates a fair bit from standard piece movement.

What I accomplish in this clip:

  • Implemented castling to the short side of the board.
  • Implemented castling to the far side of the board (though I think I got the move wrong).

Here are some lessons to take away:

  • It’s a big help if you keep a nice, large surface area of testable code in your code base.  This lets you dig in with the granularity of your choosing for writing tests.
  • You need to carve out time to keep your code clean and do boy scout refactorings.  If anyone is telling you not to do this, that’s a serious organization/group smell.  You need to keep the code reasonably clean to sustain the pace at which you deliver value.
  • As you have a larger and increasingly complex code base, “do the simplest thing that will work” becomes an increasingly tall order.  With more tests that can go red, it gets harder and harder to do trivial things that satisfy all tests.
  • It’s important to audit your tests continually to make sure they continue to add value.