The Developer Incentive Snakepit

Getting Incentives Wrong

A while ago I was talking to a friend of mine, and he told me that developers in his shop get monetary bonuses for absorbing additional scope in a “waterfall” project. (I use quotes because I don’t think “waterfall” is actually a thing; I think it’s just collective procrastination followed by iterative development.) That sounds reasonable and harmless at first. Developers get paid more money if they can adapt successfully to late-breaking changes without pushing back the delivery date. Fair enough… or is it?

Image credit to Renaud d’Avout d’Auerstaedt via Wikimedia Commons.

Let’s digress for a moment.  In colonial India, the ruling British got tired of all of the cobras hanging around and being angry and poisonous and whatnot, so they concocted a scheme to address this problem. Reasoning that more dead cobras meant fewer living cobras, they began to offer a reward/bounty for any cobra corpses brought in to them by locals. As some dead cobras started coming in, circumstances initially improved, but after a while the number of dead cobras being brought in exploded even though the number of living cobras actually seemed to increase a little as well.  How is this possible?  The British discovered that Indian locals were (cleverly) breeding cobras that they could kill and thus cash in on the reward. Incensed, the British immediately discontinued the program. The cobra breeders, who had no more use for the things, promptly discarded them, resulting in an enormous increase in the local, wild cobra population. This is an iconic example of what has come to be called “The Law of Unintended Consequences,” which is a catch-all for describing situations where an incentive-based plan has an effect other than the desired outcome, be it oblique or directly counter. The Cobra Effect is an example of the latter.

Going back to the “money for changes absorbed” incentive plan, let’s consider whether there might be potential for “cobra breeders” to start gaming the system, or, less cynically, for subconscious games to creep in. Here are some that I can think of:

  1. Change requests arise when the actual requirements differ from those defined initially, so developers have a vested interest in initial requirements being wrong or incomplete.
  2. Change requests will happen if the marketing/UX/PM/etc stakeholders don’t communicate requirements clearly to developers, so developers have a vested interest in avoiding these stakeholders.
  3. Things reported as bugs/defects/issues require changes to the software, which developers have to fix for free, but if those same items were, for some reason, re-classified as change requests, developers would make more money.
  4. The concept of a “change request” doesn’t really exist in agile development methodologies, since every requirement is treated as subject to change, and thus developers would lose money if a different development methodology were adopted.

So now, putting our more cynical hats on, let’s consider what kind of outcomes these rational interests will probably lead to:

  1. Developers are actively obstructionist during the “requirements phase”.
  2. Developers avoid or are outright hostile toward their stakeholders.
  3. Developers battle endlessly with QA and project management, refusing to accept any responsibility for problems and saying things like “there was nothing in the functional spec that said it shouldn’t crash when you click that button!”
  4. Developers actively resist changes that would benefit the business, favoring rigidity over flexibility.

Think of the irony of item (4) in this scenario.  The ostensible goal of this whole process is to reward the development group for being more flexible when it comes to meeting the business demands.  And yet when incentives are created with the intention of promoting that flexibility, they actually have the effect of reducing it.  The managers of this department are the colonial British and the cash for absorbed change requests is the cash for dead cobras.  By rewarding the devs for dead cobras instead of fewer cobras, you wind up with more cobras; the developers’ goal isn’t more flexibility but more satisfied change requests, and the best way to achieve this goal is to generate software that needs a lot of changes as far as the business is concerned.  It’s an example of the kind of gerrymandering and sandbagging that I alluded to in an earlier post and it makes me wonder how many apparently absurd developer behaviors might be caused by nonsensical or misguided incentive structures triggering developers to game the system.

So what would be a better approach?  With these Cobra Effect situations, hindsight is going to be 20/20.  It’s easy for us to say that it’s stupid to pay people for dead cobras, and it was only after ruminating on the idea for a while that the relationship between certain difficulties of the group in question and its incentive structure occurred to me.  These ideas seem generally reasonable on their face, and people who second-guess them might be accused of armchair quarterbacking.

Get Rid of Cobras with Better Incentives

I would say that the most important thing is to align incentives as closely as possible with goals and to avoid rewarding process adherence.  For example, with this change-requests-absorbed incentive, what is the actual goal, and what is being rewarded?  The actual reward is easy, particularly if you consider it obtusely (and you should): developers are rewarded when actual requirements are satisfied on time in spite of not matching the original requirements.  There’s nothing in there about flexibility or business needs, just two simple criteria: bad original requirements and on-time shipping.  I think there’s value in stating the incentive simply (obtusely) because it tends to strip out our natural, mile-a-minute deductions.  Try it on the British Empire with its cobra problem: more cobra corpses means more money.  There’s nothing that says the cobra corpses have to be local or that the number of cobras in the area has to decrease.  That’s the incentive provider being cute and clever and reasoning that one is the logical outcome of the other.
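To make the “state it obtusely” exercise concrete, here’s a toy sketch of the payout function as literally defined.  (The names and dollar figure are hypothetical, not anyone’s real bonus formula; the point is only what the function does and doesn’t mention.)

```python
# Toy model of the "cash for absorbed change requests" incentive.
# Hypothetical names and numbers -- the payout function never
# mentions flexibility or business value at all.

BONUS_PER_ABSORBED_CHANGE = 500  # hypothetical dollars

def bonus(change_requests_absorbed, shipped_on_time):
    """What the plan literally pays out."""
    if not shipped_on_time:
        return 0
    return change_requests_absorbed * BONUS_PER_ABSORBED_CHANGE

# A team that nails the requirements up front earns nothing extra...
print(bonus(change_requests_absorbed=0, shipped_on_time=True))   # 0

# ...while a team whose original spec was wrong twelve times gets paid.
print(bonus(change_requests_absorbed=12, shipped_on_time=True))  # 6000
```

Read as a payout function, the bonus is maximized by exactly the two criteria named above, bad original requirements plus on-time shipping, and nothing in it penalizes breeding the cobras.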

Flexibility image credit to “RDECOM” via Wikimedia Commons.

Going back to the bad software incentives, the important thing is to pull back and consider larger goals.  Why would a company want change requests absorbed?  Probably because absorbed change requests (while still meeting a schedule) are a symptom (not a logical outcome, mind you) of flexibility.  Aha, so flexibility is the goal, right?  Well, no, I would argue.  Flexibility is a means rather than an end.  I mean, outside of a circus contortionist exhibit, nobody gets paid simply to be flexible.  Flexibility is a symptom of another trend, and I suspect that’s where we’ll find our answer.

Why would a company want to be flexible when it comes to putting out software?  Maybe the marketing department wants the product to have a hip, latest-and-greatest feel, so the GUI has to be constantly updated with whatever user experience/design trends are currently coming from Apple or whoever else is the Giorgio Armani of application design at the moment.  Maybe they’re starting on a release now and they know their competition is coming out with something in three months that they’re going to want to mimic in the next release, but they don’t yet know what that thing is.  Maybe they just haven’t had the resources available to flesh out more than a few wireframes for a few screens, but they don’t want all progress halted, “waterfall” style, until every last detail of every last screen is hammered out in IRS Tax Code-like detail.

Aha!  Now we’re talking about actual business goals rather than slices of process that someone thinks will help those goals be achieved.  Why not reward the developers (and QA, marketing, UX, etc. as well) for those business goals being met rather than for adherence to some smaller process?  Sadly, I think the answer is a command-and-control style of hierarchical management that often seeks to justify positions with fiefdoms of responsibility and opacity.  In other words, many organizations will have a CEO state these business goals and then hand down a mandate of “software should be flexible” to some VP, who in turn hands down a mandate of “bonuses for change requests absorbed” to the manager of the software group, or some such thing.  It is vital to resist that structure as much as possible, since providing an incentive structure divorced from broader goals practically ensures that its prospective recipients care about nothing other than gaming the system in their own favor.  And in some cases, such as this one, that leads to open and predictable conflicts with other groups that have (possibly directly contradictory) incentive structures of their own.  As an extreme example, imagine a tech group where the QA team gets money for each bug they find and the developers lose money for the same.  A good way to ensure quality… or a good way to ensure fistfights in your halls?

Take-Away

Why keep things so simple instead of fanning out the deductive thinking about incentives, particularly in the software world?  Well, software people are by nature knowledge workers who are paid to use well-developed, creative problem-solving skills.  Incentives that work on assembly-line workers, such as “more money for more holes drilled per hour,” are more likely to backfire when applied to people who make their living inventing ways to automate, enhance, and optimize processes.  Software people will quickly become adept at gaming your system because gaming systems is what you pay them to do.  If you reward them for process, they’ll automate to maximize the reward, whether or not it’s good for the business.  But if you reward them for goals met, they will apply their acute problem-solving skills to devising the best process for solving the problem, quite possibly a better one than you would have advised them to follow.

If you’re in a position to introduce incentives, think carefully about the incentives that you introduce and how closely they mirror your actual goals.  Resist the impulse to reward people for following some process that you assume is the best one.  Resist the impulse to get clever and think “If A, then B, then C, then D, so I’ll reward people for D in order to get A.”  That’s the sort of thinking that leads to people on your team dropping cobras in your lunch bag.  Metaphorical cobras.  And sometimes real cobras.  If you work with murderers.

  • http://genehughson.wordpress.com/ Gene Hughson

    Designing and maintaining a process differs little from system development (aside from the less reliable hardware). Coming up with the “happy day scenario” is easy; identifying and dealing with what can go wrong is where you earn your money.

  • http://www.daedtech.com/blog Erik Dietrich

    That’s a good way to think of it that hadn’t occurred to me. Rewarding “downstream” incentives really is just like bopping along, generating code, only thinking of the scenario where everything behaves the way you want it to.

  • http://genehughson.wordpress.com/ Gene Hughson

    Another analogy is fixing the visible bug rather than the problem behind it; lots of management/process follies are just instances of squeezing the balloon.
