The Value of Failure

Over the years I’ve spent leading people and teams, I’ve learned various lessons. I’ve learned that leading by example is more powerful than any other attempt at motivation. I’ve learned that trust matters and that deferring to the expertise of others goes a lot further than pretending you’re some kind of all-knowing guru. I’ve learned that listening to people and valuing their contributions is vital to keeping morale up, which, in turn, is vital to success. But probably the most important thing I’ve learned is that you have to let people fail.

My reasoning here isn’t the standard “you learn a lot by failing” notion that you probably hear a lot. In fact, I’m not really sure that I buy this. I think you tend to learn better by doing things correctly and having them “click” than by learning what not to do. After all, there is an infinite number of ways to screw something up, whereas precious few paths lead to success. The real benefit of failure is that you often discover that your misguided attempt to solve one problem solves another problem or that your digression into a blind alley exposes you to new things you wouldn’t otherwise have seen.

If you run a team and penalize failure, the team will optimize for caution. They’ll learn to double- and triple-check their work, not because the situation calls for it but because you, as a leader, have made them paranoid. If you’re performing a high-risk deployment of some kind, then double- and triple-checking is certainly necessary, but in most situations that level of paranoia is as counterproductive as indulging an OCD tendency to check three times that you locked your front door. You don’t want your team paralyzed this way.

A paranoid team is a team with low morale and, often, a stifled sense of enjoyment in its work. Programming ceases to be an opportunity to explore ideas and solve brain teasers and becomes a high-pressure gauntlet instead. Productivity predictably decreases because of second-guessing and pointless double-checking of work, but it’s also hurt by the lack of cross-pollination of ideas that comes from those blind alleys and misses. Developers in a high-pressure shop don’t tend to be the ones happily solving problems in the shower, stumbling across interesting new techniques, and having unexpected eureka moments. And those things are invaluable in a profession underpinned by creativity.

So let your team fail. Let them flail at things and miss and figure them out. Let them check in some bad code and live with the consequences during a sprint. Heck, let it go to production for a while, as long as it’s just technical debt and not a detriment to the customer. Set up walled gardens in which they can fail, relatively shielded from adverse consequences, but are forced to live with their decisions and be the ones to correct them. It’s easy to harp on the evils of code duplication, but nothing teaches the value of code reuse like the enormous tedium of tracking down a bug pasted into 45 different places in your code base. Out of the blind alley of writing WET code, developers discover the value of DRY.
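To make that WET-versus-DRY lesson concrete, here’s a minimal sketch (the function names and the discount rule are invented for illustration): when the same rule is pasted into several functions, a bug fix has to be found and repeated in every copy, whereas extracting the rule to one place means the fix happens exactly once.

```python
# Hypothetical example: a 10% member discount rule, first duplicated
# (WET), then extracted to a single helper (DRY).

# WET: the discount logic is copy-pasted into each function. If the
# rule ever changes, every copy must be hunted down and updated.
def invoice_total_wet(price, is_member):
    return price * 0.9 if is_member else price

def cart_display_total_wet(price, is_member):
    return price * 0.9 if is_member else price

# DRY: the rule lives in one place, so a change (or bug fix) is
# made once and takes effect everywhere.
def apply_member_discount(price, is_member):
    return price * 0.9 if is_member else price

def invoice_total(price, is_member):
    return apply_member_discount(price, is_member)

def cart_display_total(price, is_member):
    return apply_member_discount(price, is_member)
```

Two copies are merely annoying; the 45 copies mentioned above are the kind of bug hunt that makes the lesson stick.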

The walled garden aspect is important. If you just let them do anything at all, that’s chaos, and you’re setting them up to fail. You have to provide some ground rules that stave off disaster, and then, within those boundaries, you have to discipline yourself to keep your hands off the petri dish and see what grows. It may involve some short-term ickiness and it might be difficult to do, but the rewards are worth it in the end: a happy, productive, and self-sufficient team.

  • JefClaes

    I have had the opportunity to be a team lead for the last year and a half, and I have come to similar insights. Cultivating a culture where failure is OK was not that easy, since the standard used to be fear-based management. We also discovered _some_ ground rules: no one works on something too long by himself, and code needs to be under test. On a higher level, it is important to keep your application from becoming a big ball of mud; failures need to be small and isolated.

  • http://genehughson.wordpress.com/ Gene Hughson

    Absolutely agree re: not instilling fear of failure. The “why” behind the choice is (in most cases) the most important thing. The craziest thing in the world is punishing someone who did the “wrong” thing for the right reason (particularly if events prove them right).

  • http://www.daedtech.com/blog Erik Dietrich

    Definitely agreed about the ground rules. I referred to this concept as “walled garden,” but ground rules might be a better way of describing it. I’m also a believer in trying to structure things in such a way that the failures mark progress by letting you say, “well, we tried that and it didn’t work, so now we know how to handle this going forward.”

  • http://www.daedtech.com/blog Erik Dietrich

    It’s a sad thing, but I’ve seen it a lot over the years. Being right becomes a game of one-upmanship or proving something, so you get these cultural cold wars between peers, or between leads and team members, where being wrong about anything is taken as a sign of weakness. Environments like that are pretty much always unproductive.
