DaedTech

Stories about Software


Salary Negotiations: Win by Losing

I’ve been reading a book called “The 4-Hour Workweek” lately, and the timing is pretty interesting. In the book, Ferriss outlines a positively cold-blooded plan to seize control of your life and career using an approach that he calls “Lifestyle Design.” For me, the timing is interesting because “lifestyle design” is a good way to describe the way that I’ve been re-shaping my life over the last several years, thinking in terms of things that I want to be true about my life (e.g. “I should be able to go where I want when I feel like it and work from wherever”) rather than my career (e.g. “I wanna be a SENIOR Architect”). It also reinforces, and then some, my desire to focus increasingly on passive income. So, basically, reading this book for me is sort of like a gigantic pat on the back: “you’re on the right track, Erik, but you should double down!”

You should buy and read this book. Seriously. It’s, at times, audacious to the point of discomfort, and it can feel a little Amway-this-is-too-good-to-be-true-ish (though it probably isn’t), but he makes some incisive observations that will rattle you and alter the way you think of the corporate world… like I’m about to do (I hope). Some of the inspiration for this post is derived from my experience (particularly the focus on programmers, which he doesn’t do), and some of it from the book. So, without further ado…

The Hard Truth

Salaried, exempt employment is an atrocious economic deal, especially for programmers. Weird as it sounds to say now, I’m not saying that your employer is screwing you, nor that you shouldn’t be a salaried, exempt employee. I actually rewrote that first sentence several times, trying to be a little less blunt, but it is what it is. It’s not a value judgment and it’s not intended to be click bait or offensive — it’s just the stark truth. Let’s go through some numbers to put an exclamation point on it.
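The full post runs the actual numbers; purely to give a flavor of the arithmetic involved, here is a minimal back-of-the-envelope sketch, with entirely hypothetical figures (none of them are from the post itself):

// Back-of-the-envelope sketch of the salaried-vs-hourly math.
// Every figure here is a hypothetical assumption for illustration,
// not a number from the post.
using System;

class SalariedMath
{
    static void Main()
    {
        const double salary = 100_000.0;        // assumed annual salary
        const double nominalHours = 40.0 * 50;  // the 40-hour week you signed up for
        const double actualHours = 50.0 * 50;   // the "exempt" weeks you actually work

        Console.WriteLine($"Nominal rate:   {salary / nominalHours:C2}/hour");  // $50.00
        Console.WriteLine($"Effective rate: {salary / actualHours:C2}/hour");   // $40.00

        // Meanwhile, a freelancer with the same skills might bill $100+/hour
        // and get paid for every hour worked. That asymmetry is the point.
    }
}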


Lifting the Curse of Knowledge

As most of you know, one of the biggest anti-patterns when you’re instantiating program slots is to forget to set CanRemoveOverride to true. But what you probably didn’t know was that the SlotConfig is — Just kidding. I lifted this from a post I wrote almost 3 years ago about legacy code I was working with then. I have little more idea than you do what any of that means.



Performance Reviews Simplified

If you were to ask people in the corporate world about the most significant moments of their careers, a lot of them would probably talk about annual performance reviews. That’s a curious thing. Anyone who talks about performance reviews when asked this question is not talking about an idea they had that saved the company hundreds of thousands of dollars or about rescuing a project that had gone sideways. Instead, their careers were defined by sitting in a room with their managers, going through a worksheet that purports to address how well they’ve matched up against the company’s ‘values.’




10x Developer, Reconsidered

Unless my memory and a quick search of my blog are both mistaken, I’ve never tackled the controversial concept of a “10x developer.” For those who aren’t aware, this is the idea that some programmers are order(s) of magnitude more productive than others, with the exact figure varying. I believe that studies of varying statistical rigor have been suggestive of this possibility for decades and that Steve McConnell wrote about it in the iconic book, “Code Complete.” But wherever, exactly, it came from, the sound bite that dominates our industry is the “10x Developer” — the outlier developer archetype so productive as to be called a “rock star.” This seems to drive an endless debate cycle in the community as to whether or not it’s a myth that some of us are 10 or more times as awesome as others. You can read some salient points and a whole ton of interesting links from this Stack Exchange question, which makes a good jumping-off point.



The Retreat of the Introvert

If you’ve ever been nervous at a cocktail party or similar event and spilled a drink, you have an acute understanding of how time can elongate. There’s the second your hand brushes the glass and the second the glass offers its liquid to the expensive carpet, and while the seconds are right next to one another, it seems as though minutes or even hours pass in between them. In periods of duress, time spreads out, like (to borrow the words of T.S. Eliot) a patient etherized upon a table.

When you’re an introvert, this happens when social situations become volatile. And, by “social situations,” I don’t mean cocktail parties exclusively, but really any sort of interaction with others. It could be that you’re put on the spot at a meeting in which you weren’t expected to talk. It could happen on a night out when some friend volunteers you for karaoke at the bar. It could happen from something as simple and apparently routine as your girlfriend saying “we need to talk” when the two of you are eating Chinese at home. There’s a trigger and then the elongation occurs and it seizes you, demanding action that you know to be socially impossible. But if you gave in, it would be oh so sweet…



Aggregation of Indignities

I’m about to head off for a small vacation, so I’ll leave you with a relatively short post until I pick things up, probably a week from now.

If one were to track the small indignities of life that add up to serious grievances, it’s tempting to think that these would be expressed in terms of “things I have to do but don’t want to.” If you listen to someone talk about quitting a job, you’ll hear about the rolled-up grievances and imagine the small indignities. “I can’t stand my boss” would really be a series of things like “have to go to his status meeting” and “have to respond to emails at all hours of the night.” You can imagine the same exercise with the other grievances that fill up the occasional honest exit interview.

But at its core, I don’t think it’s “things we don’t want to do but have to do anyway” that create the important grievances, but rather “things we think are stupid but have to do anyway.” There are plenty of things that I don’t really want to do, but know make sense, and so I do them anyway. Going to the dentist, working out (for the most part), and getting up in time for a meeting after sleeping only 4 hours all come to mind. I make the best of the situation because I know these things make sense, and any resultant unhappiness is fleeting.

It’s the things that I think are stupid that result in unhappiness that is more than fleeting. If I have to get up to go to the dentist to have my teeth cleaned, I suffer through and then, after, think, “well, my teeth feel great, and I don’t have to do that again for six months, so, alright, onto the rest of my day.” If I have to go to the DMV because Illinois randomly decided that I owe them $200 for some new tax and the ‘convenience fee’ for online payment is $45, I’ll be in a seriously bad mood while I do it and then still residually salty for the rest of the day.

Back in the exit interview, “I can’t stand my boss” doesn’t actually expand to “I didn’t like being in the status meeting” but rather, “I hate attending those pointless status meetings.” It doesn’t expand to “have to respond to emails at all hours of the night” but rather “have to respond to emails at all hours of the night when there’s no reason it couldn’t wait until morning.” Most people are basically diligent and will do the unpleasant stuff… if they feel it carries forward some purpose.

When it comes to sources of perceived stupidity or pointlessness, I can think of three loose categories:

  1. Institutional/Bureaucratic
  2. Inadvertent/Earnest Disagreement
  3. Posturing/Malicious


The first category exists at most institutions to varying degrees and is straightforward enough: it’s your TPS reports and other infantilizing things that you have to do because someone, somewhere once got sued for something. The second category boils down to “team lead pulls rank on you about a course of action and you have to go along with what she says, even though you think it’s stupid.” The third category is “mid-level manager shows up 10 minutes late to a meeting because he can’t be bothered to be on time and then dispatches someone to get coffee for him, even though that person is a professional engineer.” It’s a uniquely aggressive action in that both parties are completely aware that the aggressor is manufacturing stupidity, but the message is essentially, “I have the power to make you do stupid things, and I enjoy using it.”

These things aggregate and fester over the course of time, probably roughly in the numbered order above. People will usually tolerate a maze of bureaucracy more easily than semi-regular demeaning interactions, all other things being equal. But everyone has some kind of aggregated stupidity score that, when hit, triggers them to quit. Actual ‘scores’ will vary widely depending on situation and individual personality.

The moral of the story is this. By and large, people aren’t going to burn out and be driven away by hard or unpleasant work alone. They’ll burn out and be driven away by work that they think is stupid. So if you’re focused on retention and morale, pay specific attention not so much to whether people are doing grunt work but rather to the value (or lack thereof) they perceive in the work.


Programmer IS A Career Path, Thank You

If you’re a programmer, think back for a moment to the first time you heard the career question. You know the one I mean, even if you don’t recognize it as the question: “do you see yourself on the architect track or the management track?” Caught off guard, you panic momentarily as you feel that you have about 5 seconds to decide whether your long-term future involves lots of UML diagrams and flow charts or whether it involves lots of PowerPoint presentations and demanding TPS reports from underlings. If you’re like most, and you were to answer honestly, you’d probably say, “neither, really, because I kind of like writing code.” But you don’t give that answer (I never did) because you’d effectively be responding to a career development question with, “I have no interest in career development.” But let’s put a pin in that for a moment.

Imagine a kid going to law school and graduating to go work at a law firm somewhere as an associate, doing whatever it is that associates do. Now imagine a conversation where a partner at the firm pulls this associate aside and says, “so, have you thought about your future? Do you see yourself as more of a partner in the firm, continuing to practice law, or do you see yourself as more of a lawyer-manager?” I imagine the response would be, “what on Earth are you talking about? I’m a lawyer. I want to practice law and be a partner. What else is there?”

Why is it okay (or would it be okay, since this conversation would never actually take place) for an ambitious lawyer to say, “I just want to be a lawyer” and not for an ambitious programmer to say, “I just want to be a programmer?” For the purposes of this post, I’m going to leave that question as a rhetorical one. I’m actually going to answer it at length in the book that I’m starting to write, but until the publication date, I’ll leave the why as an exercise for the reader and just posit that it should also be okay for a programmer to say this.

I’d like to see a culture change, and I think it starts with our current generation of programmers. We need to make it okay to say, “I just want to be a programmer.” Right now, the only option is to ‘graduate’ from programming because failure to do so is widely construed as failure to advance in your career. If you become a line manager (or the diet version, project manager), you stop writing code and become the boss. If you become an architect, you kinda-sort-usually-mostly stop writing code and kinda-sort-sometimes-maybe become sorta like a boss. But however you slice it, organizational power and writing code have historically been mutually exclusive. You can play around with teh codez early in your career, but sooner or later, you have to grow up, take your hands off the keyboard, and become a boss. You have to graduate or risk being the metaphorical ‘drop-out’ with the title “Super Principal Fellow Engineer,” who looks great on paper but is generally ignored or smiled at indulgently.

ProgrammerGraduation

That’s going to change sooner or later. As someone who has looked for work and looked to hire pretty steadily for a number of years, I’ve witnessed an increase in developer salaries that is both sharp and sustained. As the average software developer’s wage starts to creep into 6-figure territory, it’s simply not possible to keep the pecking order intact by paying overhead personnel more and more ungodly sums of money. Just as it makes no sense for a law firm billing out at $500/hour to hire a “lawyer manager” as a $1 million/year cost center, it eventually won’t make sense to pay a quarter million a year to a pointy-haired boss, when Scrum and basic market forces both offer the allure of a self-managed team. In both cases, overhead work still happens, but it reports to the talent rather than ordering it around and demanding status reports.

How quickly it changes is up to us, though. We can change this culture, and we can change it pretty quickly, I think. The first thing you can do is fix it in your mind that being an “architect” or “project manager” or “manager” isn’t a graduation and it isn’t a rite of passage. It’s an agreement to do something different than what you’re doing now. Nothing more, nothing less. The second thing you can do is vote with your feet.

I’m not advising that you do anything drastic, but rather that you take stock of your circumstances. Are you at an organization where programming is clearly viewed as how you bide time until you get promoted to a boss’s chair? If so, consider adding a new criterion to your next job search. Look for organizations that feature prominent industry figures, such as conference speakers, authors, or people with some “tech celebrity.” These are the organizations that are the “early adopters” of the lawyer/partner dynamic of “best at the trade calls the shots.” If you hook up with these organizations, nobody is going to ask you what “track” you see taking you out of programming. They’ll assume that you’re there because you’re deadly serious about programming as a profession, interested in learning from the best, and interested in subsequently becoming the best.

Sooner or later, we’ll hit some kind of critical mass with this approach. My hope, for all our sakes, is that it’s sooner. Because the sooner we hit critical mass, the sooner you’ll stop having to explain that doing what you love wasn’t a backup plan for failing to rise in the ranks.


Avoiding the Perfect Design

One of the peculiar ironies that I’ve discovered by watching the way a lot of different software shops work is that the most intense moments of exuberance about software seem to occur in places where software development happens at glacial speeds. If you walk into an agile shop or a startup or some kind of dink-and-dunk place that bangs out little CRUD apps, you’ll hear things like, “hey, a user said she thought it’d be cool if she could search her order history by purchase type, so let’s throw that in and see how it goes.” If it goes insanely well, there may be celebrations and congratulations and even bonuses changing hands which, to be sure, makes people happy. But their happiness is Mercury next to the blazing Sun of an ivory tower architect describing what a system SHALL do.

“There will be an enterprise service bus. That almost goes without saying. The presentation tier and the business tier will be entirely independent of one another, and literally any sort of pluggable module that you dream up as a client can communicate with any sort of rules engine embedded within the business tier. Neither one will EVER know about the other’s existence. The presentation layer collaborators are like Schrödinger and the decision engines are like the cat!

“And the clients. Oh yes, there will be clients. The main presentation tier client is a mobile staging environment that will be consumed by Android, iOS, Windows Phone, Blackberry, and even some modified Motorola walkie-talkies. On top of the mobile staging environment will be a service adapter that makes it so that clients don’t need to worry about whether they’re using SOAP or REST or whatever comes next. All of those implementations will hide behind the interface. And that’s just the mobile space. There are more layers and subtleties in the browser and desktop spaces, since both of those types of clients may be SPAs, other thick clients, thin clients, or just leaf nodes.

Wait, wait, wait, I’m not finished. I haven’t even told you about the persistence factories yet and my method for getting around the CAP theorem. The performance will be sublime. We’re talking picoseconds. We’re going to be using dynamically generated linear programming algorithms to load balance the vertical requests among the tiers, and we’re going to take a page out of the quantum computing book to introduce a new kind of three state boolean… oh, sorry, you had a question?”

“Uh, yeah. Why? I mean, who is going to use this thing and what do they want with it?”

“Everyone. For everything. Forever.”

You back out slowly as the gleam in his eye turns slightly worrisome and he starts talking about the five year plan that involves this thing, let’s call it HAL, achieving sentience, bringing humankind to the cusp of the singularity, and uploading the consciousnesses of all network and enterprise architects.


Like I said, the Sun to your Mercury. Has your puny startup ever passed the Turing Test? Well, his system has… as spec’ed out in a document management system with 8,430 pages of design documents and a Visio diagram that’s rumored to have similar effects to the Ark of the Covenant. And that, my friends, is why I think that a failing ATDD scenario should be the absolute first thing anyone who says, “I want to get into programming” learns to do.

Now to justify that whiplash-inducing segue. I wrote a book about unit testing in which I counseled complete initiates to automated testing to forgo TDD and settle for understanding the mechanics of automated tests and test runners before making the leap. I stand by that advice, but I do so because I think that there is a subtle flaw to the way that most people currently get started down the programming path.

I was watching a Pluralsight course about NUnit to brush up on its latest and greatest assertion semantics, and the examples were really well done. In particular, there was a series of assertions oriented around a rudimentary concept of a role playing game with enumerations of weapons, randomization of events, and hit points. This theme exercised concepts like ranges, variance, collection cardinality, etc., and it did so in a way that lent itself to an easy mental model. The approach was very much what mine would have been as well (I wouldn’t have come at this with TDD because there’d have been a lot of ‘downtime’ writing the production code as opposed to just showing the assert methods).
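To give a flavor of the assertion semantics in question, here is a minimal sketch along those lines. This is my own reconstruction of the theme, not the course’s actual code, and the Hero type and its values are hypothetical:

// Minimal sketch of NUnit constraint-style assertions on an RPG theme.
// The Hero type and all values are hypothetical stand-ins.
using System;
using NUnit.Framework;

public enum Weapon { Sword, Axe, Bow }

public class Hero
{
    public int HitPoints { get; set; } = 100;
    public Weapon[] Inventory { get; } = new[] { Weapon.Sword, Weapon.Bow };
}

[TestFixture]
public class HeroTests
{
    [Test]
    public void DamageRoll_FallsWithinExpectedRange()
    {
        var damage = new Random().Next(1, 21);   // randomized d20-style roll
        Assert.That(damage, Is.InRange(1, 20));  // range assertion
    }

    [Test]
    public void Healing_IsApproximatelyCorrect()
    {
        double healed = 49.7;
        Assert.That(healed, Is.EqualTo(50.0).Within(0.5)); // variance/tolerance
    }

    [Test]
    public void Inventory_HasExpectedCardinality()
    {
        var hero = new Hero();
        Assert.That(hero.Inventory, Has.Exactly(2).Items);    // collection cardinality
        Assert.That(hero.Inventory, Does.Contain(Weapon.Bow));
    }
}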

Nevertheless, it’s been a while since I’ve watched someone write tests against a pre-baked system when they weren’t characterization tests in a legacy rescue, and the experience was sort of jarring. I couldn’t help but think, “I wouldn’t want to write these tests if I were in his position — why bother when the code is already done?” Weird as it sounds from a big advocate of testing, writing tests after you’ve completed your implementation feels like a serious case of “going through the motions” in the same way that developers fill out random “SDLC” artifacts for no other purpose than to get PMPs to leave them alone.

And that’s where the connection to the singularity architect comes in. One of the really nice, but subtle perks of the TDD (especially ATDD) approach is that it forces you to define exit criteria before you start doing things. For instance, “I know I’ll be done with this development effort when my user can search her order history by purchase type.” Awesome — you’re well on your way because you’ve (presumably) agreed with stakeholders ahead of time when you can stop coding and declare victory. The next thing is to prove it, and you can approach this in the same way that you might approach fixing a leaking pipe under your sink. Turn the water on, observe the leak, turn the water off, fix the leak, turn the water back on, observe that there is no leak. Done.

In the case of the search, you write a client call to your web service that supplies a “purchase type” parameter and you say that you’re done when you get a known result set back, instead of the current error message: “I do not understand this ‘purchase type’ nonsense you’ve sent — 400 for you!” Then you scurry off to code, and you just keep going until that test that you’ve written turns green and all of the other ones stay green. There. Done, and you can prove it. Ship it.
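Concretely, that first failing acceptance test might look something like the sketch below. The URL, endpoint shape, and “purchaseType” parameter are hypothetical names for illustration; the point is that this test turning green is the pre-agreed definition of done.

// Sketch of a failing-first acceptance test for the order history search.
// Today the service answers with a 400; we're done when it returns the
// known, seeded result set. Endpoint and parameter names are hypothetical.
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class OrderHistorySearchAcceptanceTests
{
    [Test]
    public async Task SearchingByPurchaseType_ReturnsKnownResults()
    {
        using var client = new HttpClient();

        // Exit criterion: this call succeeds instead of the current
        // "I do not understand this 'purchase type' nonsense -- 400 for you!"
        var response = await client.GetAsync(
            "https://localhost:5001/api/orders?purchaseType=electronics");

        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));

        // And the body should contain the known result set we seeded.
        var body = await response.Content.ReadAsStringAsync();
        Assert.That(body, Does.Contain("electronics"));
    }
}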

Our poor architect never knows when he’s done (and we know he’ll never be done). This Sisyphean struggle has its origins in hobby programming or a CS degree or something. It started with unbounded goals and the kinds of open-ended tasks that allow hobbyists to grow and students to excel. Alright, you’ve got the A, but try to play with it. See if you can make it faster. Try adding features to it. Extra credit! Sky’s the limit! At some point, a cultural norm emerges that says it’s more about the journey than the destination. And then you don’t rise through the ranks by automating for the sake of solving people’s problems but rather by building ever-more impressive juggernauts, leveraging the latest frameworks, instrumented with the most analytics, and optimized to run in O(Planck Time).

I really would like to see initiates to the industry learn to set achievable (but slightly uncomfortable) goals with a notion of value and then reach them. Set a beneficial goal, reach it, rinse, repeat. The goal could be “I want to learn Ruby, and I’ll consider a utility that sorts picture files to be a good first step.” You’re adding to your skill set as a developer, and you have an exit criterion. It could be something for a personal project, a pro-bono client, or for pay. But tie it back to an outcome and assess whether that outcome is worthwhile. This approach will prevent you from shaving microseconds off of an app that runs overnight on a headless server, and it will prevent you from introducing random complexity and dependency to an app because you wanted to learn SnazzyButPointless.js. True, the approach will stop you from ever delighting in design documents that promise the birth of true artificial intelligence, but it will also prevent you from experiencing the dejection when you realize it ain’t ever gonna happen.


You Want an Estimate? Give Me Odds.

I was asked recently in a comment what I thought about the “No Estimates Movement.” In the interests of full disclosure, what I thought was, “I kinda remember that hashtag.” Well, almost. Coincidentally, when the comment was written to my blog, I had just recently seen a friend reading this post and had read part of it myself. That was actually the point at which I thought, “I remember that hashtag.”

That sounds kind of flippant, but that’s really all that I remember about it. I think it was something like early 2014 or late 2013 when I saw that term bandied about in my feed, and I went and read a bit about it, but it didn’t really stick with me. I’ve now gone back and read a bit more, and I think the reason it didn’t stick is that it’s not really a marketing teaser of a term, but quite literally what it advertises: “hey, let’s not make estimates.” My thought at the time was probably just, “yeah, that sounds good, and I already try to minimize the amount of BS I spew forth, so this isn’t really a big deal for me.” Reading back some time later, I’m looking for deeper meaning and not really finding it.

Oh, there are certainly some different and interesting opinions floating around, but it really seems to be more bike-sheddy squabbling than anything else. It’s arguments like, “I’m not saying you shouldn’t size cards in your backlog — just that you shouldn’t estimate your sprint velocity” or “You don’t need to estimate if all of your story cards are broken into chunks small enough to be ones.” Those seem sufficiently tactical that their wisdom probably doesn’t extend too far beyond a single team before flirting with unknowable speculation that’d be better verified with experiments than taken as wisdom.

The broader question, “should I provide speculative estimates about software completion timelines,” seems like one that should be answered most honestly with “not unless you’re giving me pretty good odds.” That probably seems like an odd answer, so let me elaborate. I’m a pretty knowledgeable football fan and each year I watch preseason games and form opinions about what will unfold in the regular season. I play fantasy football, and tend to do pretty well at that, actually placing in the money more often than not. That, sort of by definition, makes me better than average (at least for the leagues that I’m in). And yet, I make really terrible predictions about what’s going to happen during the season.


At the beginning of this season, for instance, I predicted that the Bears were going to win their division (may have been something of a homer pick, but there it is). The Bears. The 5-11 Bears, who were outscored by the Packers something like 84-3 in the first half of a game and who have proceeded to fire everyone in their organization. I’m a knowledgeable football fan, and I predicted that the Bears would be playing in January. I predicted this, but I didn’t bet on it. And, I wouldn’t have bet even money on it. If you’d have said to me, “predict this year’s NFC North Division winner,” I would have asked what odds you were giving on the Bears, and might have thrown down a $25 bet if you were giving 4:1 odds. I would have said, when asked to place that bet, “not unless you’re giving me pretty good odds.”

Like football, software is a field in which I also consider myself pretty knowledgeable. And, like football, if you ask me to bet on some specific outcome six months from now, you’d better offer me pretty good odds to get me to play a sucker’s game like that. It’d be fun to say that to some PMP asking you to estimate how long it would take you to make “our next gen mobile app.” “So, ballpark, what are we talking? Three months? Five?” Just look at him deadpan and say, “I’ll bite on 5 months if you give me 7:2 odds.” When he asks you what on earth you mean, just patiently explain that your estimate is 5 months, but if you actually manage to hit that number, he has to pay you 3.5 times the price you originally agreed on (or 3.5 times your salary if you’re at a product/service company, or maybe bump your equity by 3.5 times if it’s a startup).
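If the odds-speak is unfamiliar, the arithmetic is worth spelling out, because quoting odds is really a way of quoting your confidence. A toy illustration:

// Toy conversion of betting odds into what they actually communicate.
// Illustrative only; the 7:2 figure comes from the scenario above.
using System;

class OddsMath
{
    static void Main()
    {
        // "7:2" pays 7 units of profit for every 2 units staked.
        double toWin = 7.0, staked = 2.0;

        double payoutMultiplier = toWin / staked;               // 3.5x
        double impliedProbability = staked / (toWin + staked);  // ~0.222

        Console.WriteLine($"Payout multiplier: {payoutMultiplier}x");
        Console.WriteLine($"Implied chance of hitting the estimate: {impliedProbability:P1}");
        // Quoting 7:2 is a polite way of saying, "I give this 5-month
        // estimate only about a 22% chance of actually holding up."
    }
}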

See, here’s the thing. That’s how Vegas handles SWAGs, and Vegas does a pretty good job of profiting from the predictions racket. They don’t say, “hey, why don’t you tell us who’s going to win the Super Bowl, Erik, and we’ll just adjust our entire plan accordingly.”

So, “no estimates?” Yeah, ideally. But the thing is, people are going to ask for them anyway, and it’s not always practical to engage in a stoic refusal. You could politely refuse and describe the Cone of Uncertainty. Or you could point out that measuring sprint velocity with Scrum and extrapolating sized stories in the backlog is more of an empirically based approach. But those things and others like them tend not to hit home when you’re confronted with wheedling stakeholders looking to justify budgets or plans for trade shows. So, maybe when they ask you for this kind of estimate, tell them that you’ll give them their estimate when they tell you who is going to win next year’s Super Bowl, so that you can bet your life savings on their guarantee… er, estimate. When they blink at you dubiously, smile, and say, “exactly.”


Is This Problem Worth Solving?

I’ve done a little bit of work lately on a utility that reads from log files that are generated over the course of a month. Probably about 98% of the time, users would only be interested in this month’s file and last month’s file. How should I handle this?

At different points in my career, I’d have answered this question differently. Early on, my answer would have been to ignore the sentence about “98% of the time” and just implement a solution wherein the user had to pick which file or files he wanted. For a lot of my career, I would have written the utility to read this month’s and last month’s files by default and to take an optional command line parameter to specify a different file to read (“sensible defaults”). These days, my inclination is more toward just writing it to read this month’s and last month’s files, shipping it, and seeing if anyone complains — seeing if that 2% is really 2% or if, maybe, it’s actually 0%.
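As a sketch of that middle, “sensible defaults” option, the utility might look something like this (the monthly log-yyyy-MM.txt naming convention is an assumed detail for illustration):

// Sketch of the "sensible defaults" variant: read this month's and last
// month's log files unless the user explicitly names others. The
// log-yyyy-MM.txt naming convention is an assumed detail.
using System;
using System.Collections.Generic;
using System.IO;

class LogUtility
{
    static void Main(string[] args)
    {
        var files = new List<string>();

        if (args.Length > 0)
        {
            files.AddRange(args); // optional override: the user picked the files
        }
        else
        {
            // The default, 98%-of-the-time case.
            var now = DateTime.Now;
            files.Add($"log-{now:yyyy-MM}.txt");
            files.Add($"log-{now.AddMonths(-1):yyyy-MM}.txt");
        }

        foreach (var file in files)
        {
            if (File.Exists(file))
                Console.WriteLine(File.ReadAllText(file));
        }
    }
}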

Part of this evolution in me was the evolution of the software industry itself. When I started out doing a lot of C and C++ in the Linux/Unix worlds, gigantic documents and users’ manuals were the norm. If you were a programmer, it was your responsibility to write massive, sprawling APIs and if you used someone’s code, it was your responsibility to use those massive APIs and read those gigantic manuals. So, who cares what percentage of time a user might do what? You just write everything that could ever possibly happen into your code and then tell everyone to go read the manual.

Then things started to get a little more “agile” and YAGNI started to prevail. On top of that, the user became more of a first-class citizen, so “let’s default to the most common path” became the attractive thing to do and “RTFM” went the way of the dodo. The iconic example would be a mid-2000s Windows or web application that would give you a sensible experience and then offer a dizzying array of features in some “Advanced Settings” window.

The next step was born with the release of the iPhone when it started to become acceptable and then normal to write dead simple things that didn’t purport to do everything. Apple’s lead here was, “this is the app, this is what it does, take it or leave it.” The “advanced settings” window was replaced by “we’ll tell you what settings you want,” which requires no window.

This shifting environment over the last 15 years informed my perspective but wasn’t entirely responsible for it. I’d say that two realizations were responsible for the shift. First, I realized, “a ‘business spec’ isn’t nearly as important as understanding your users, their goals, and how they will use what you give them.” It became important to understand that one particular use case was so dominant that making it a pleasant experience at the cost of making another experience less pleasant was clearly worthwhile. The second realization came years later, when I learned that your users do, frequently, want you to tell them how to use your stuff.

Some of this opinion arose from spending good bits of time in a consulting capacity, where pointing out problems and offering a handful of solutions typically results in, “well, you’re the expert — which one should I do?” You hear that enough and you start saying instead, “here’s a problem I’ve noticed and here’s how I’ve had success in the past fixing this problem.” It makes sense when you think about it. Imagine having someone out to fix your HVAC system and he offers you 4 possible solutions. You might ask a bit about cost/benefit and pros/cons, but what you’ll probably wind up saying is, “so…. what would you do and/or what should I do?”


There’s an elegance to coding up what a user will do 98% of the time and just shipping that, crude as it sounds. As I mentioned, it will confirm whether your “98%” estimate was actually accurate. But, more importantly, you’ll get a solution for 98% of the problem to market pretty quickly, letting the customer realize the overwhelming majority of ROI right away. On top of that, you’ll also not spend a bunch of money chasing the 2% case up front before knowing for sure what the impact of not having it will be. And finally, you add the possibility for a money-saving work-around. If the utility always reads this month’s and last month’s files, and we need to read one from a year ago… rather than writing a bunch of code for that… why not just rename the file from a year ago and run it? That’ll cost your client 10 seconds per occurrence (and these occurrences are rare, remember) rather than hundreds or thousands of dollars in billable work as you handle all sorts of edge cases around date/time validation, command line args, etc.

I always wince a little when I offer anecdotes of the form, “when I was younger, I had position X but I’ve come to have position Y” because it introduces a subtle fallacy of “when you get wiser like me, you’ll see that you’re wrong and I’m right.” But in this case, the point wasn’t to discredit my old ways of thinking, per se, but rather to explain how past experiences have guided the change in my approach. I’ve actually stumbled a fair bit into this, rather than arrived here with a lot of careful deliberation. You see, there were a lot of times that I completely whiffed on considering the 2% case at all and just shipped, realizing to my horror only after the fact that I’d forgotten something. Bracing for getting reamed as soon as someone figured out that I’d overlooked the 2% case, I battened down the hatches and prepared to face the fire only to face… absolutely nothing. No one noticed or cared, and the time spent on the 2% would have been a total waste.

So next time you find yourself thinking about how to handle some bizarre edge case or unlikely scenario, pull back and ask yourself whether handling it is worth delaying the normal cases. Ask yourself if your user can live with a gap in the edge case or work around it. And really, ask yourself what the ROI for implementation looks like. This last exercise, more so than learning any particular framework, language, or library, is what distinguishes a problem solver from a code slinger.
