Stories about Software


The Secret to Avoiding Paralysis by Analysis

A while ago, Scott Hanselman wrote a post about “paralysis by analysis” that I found to be an interesting read.  His blog is a treasure trove of not only technical information but also posts that humanize the developer experience, such as one of my all-time favorites.  In this particular post, he quoted a Stack Overflow user who said:

Lately, I’ve been noticing that the more experience I gain, the longer it takes me to complete projects, or certain tasks in a project. I’m not going senile yet. It’s just that I’ve seen so many different ways in which things can go wrong. And the potential pitfalls and gotchas that I know about and remember are just getting more and more.

Trivial example: it used to be just “okay, write a file here”. Now I’m worrying about permissions, locking, concurrency, atomic operations, indirection/frameworks, different file systems, number of files in a directory, predictable temp file names, the quality of randomness in my PRNG, power shortages in the middle of any operation, an understandable API for what I’m doing, proper documentation, etc etc etc.

Scott’s take on this is the following:

This really hit me because THIS IS ME. I was wondering recently if it was age-related, but I’m just not that old to be senile. It’s too much experience combined with overthinking. I have more experience than many, but clearly not enough to keep me from suffering from Analysis Paralysis.

(emphasis his)

Paralysis by Lofty Expectations

The thing that stood out to me most about this post was reading Scott say, “THIS IS ME.” When I read the post about being a phony and so many other posts of his, I thought to myself, “THIS IS ME.” In reading this one, however, I thought to myself, “wow, fortunately, that’s really not me, although it easily could be.” I’ll come back to that.

Scott goes on to say that he combats this tendency largely through pairing and essentially relying on others to keep him more grounded in the task at hand. He says that, ironically, he’s able to help others do the same. With multiple minds at work, they’re able to reassure one another that they might be gold plating and worrying about too much at once. It’s a sanity check of sorts. At the end of the post, he invites readers to comment about how they avoid Paralysis by Analysis.

For me to answer this, I’d like to take a dime-store-psychology stab at why people might feel this pressure as they move along in their careers in the first place — pressure to “[worry] about permissions, locking, concurrency, atomic operations, indirection/frameworks, different file systems, number of files in a directory, predictable temp file names, the quality of randomness in my PRNG, power shortages in the middle of any operation, an understandable API for what I’m doing, proper documentation, etc etc etc.” Why was it so simple when you started out, but now it’s so complicated?


I’d say it’s a matter not so much of diligence but of aversion to sharpshooting. What I mean is, I don’t think that people during their careers magically acquire some sort of burning need to make everything perfect if that didn’t exist from the beginning; I don’t think you grow into perfectionism. I think what actually happens is that you grow worried about the expectations of those around you. When you’re a programming neophyte, you’ll proudly announce that you successfully figured out how to write a file to disk and you’d imagine the reaction of your peers to be, “wow, good work figuring that out on your own!” When you’re 10 years in, you’ll announce that you wrote a file to disk and fear that someone will say, “what kind of amateur with 10 years of experience doesn’t guarantee atomicity in a file-write?”

The paralysis by analysis, I think, results from the opinion that every design decision you make should be utterly unimpeachable or else you’ll be exposed as a fraud. You fret that a maintenance programmer will come along and say, “wow, that guy sure sucks,” or that a bug will emerge in some kind of odd edge case and people will think, “how could he let that happen?!” This is what I mean about aversion to sharpshooting. It may even be personal sharpshooting and internal expectations, but I don’t think that the paralysis by analysis occurs as a proactive desire to do a good job but out of a reactive fear of doing a bad job.

(Please note: I have no idea whether this is true of Scott, the original Stack Overflow poster, or anyone else individually; I’m just speculating about a general phenomenon that I have observed.)

Regaining Your Movement

So, why doesn’t this happen to me? And how might you avoid it? Well, my hope is that the answer to the first question is the answer to the second question for you. This doesn’t happen to me for two reasons:

  1. I pride myself not on what value I’ve already added, but what value I can quickly add from here forward.
  2. I make it a point of pride that I only solve problems when they become actual problems (sort of like YAGNI, but not exactly).

Let’s consider the first point as it pertains to the SO poster’s example. Someone tells me that they need an application that, among other things, dumps a file to disk. So I spend a few minutes calling File.Create() and, hey, look at that — a file is written! Now, suppose someone comes to me and says, “Erik, this is awful because whenever there are two running processes, one of them crashes.” My thought at this point isn’t, “what kind of programmer am I that I wrote code with this problem when someone might have been able to foresee it?!?” It’s, “oh, I guess that makes sense — I can definitely fix it pretty quickly.” Expanding to a broader and perhaps less obtuse scope, I don’t worry about the fact that I really don’t think of half of that stuff when dumping something to a file. I feel that I add value as a technologist because even if I don’t know what a random number generator has to do with writing files, I’ll figure it out pretty quickly if I have to. My ability to know what to do next is what sells.
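To make that progression concrete, here’s a minimal sketch (in Python for illustration, since the post’s File.Create() is .NET, and with hypothetical function names): version one just writes the file, and the safer variant only comes into existence once concurrent writers have actually caused a problem.

```python
import os
import tempfile


def save_report_naive(path, text):
    """Version 1: the file gets written. Ship it."""
    with open(path, "w") as f:
        f.write(text)


def save_report_atomic(path, text):
    """Version 2, written only after two processes actually collided:
    write to a temp file in the same directory, then atomically rename
    it over the target so readers never see a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        # os.replace is atomic on both POSIX and Windows
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise
```

The point isn’t that the second version is fancy; it’s that nothing was lost by writing the naive one first and upgrading only when the problem became real.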

For the second point, let’s consider the same situation slightly differently. I write a file to disk, and I don’t think about concurrent access or what on Earth random number generation has to do with what I’m doing. Now, if someone offers me the same, “Erik, this is awful because whenever there are two running processes…” I might also respond by saying, “sure, because that’s never been a problem until this moment, but hey, let’s solve it.” This is something I often try to impress upon less experienced developers, particularly about performance. And I’m not alone. I counsel them that performance isn’t an issue until it is — write code that’s clean, clear, and concise and that gets the job done. If at some point users want or need it to be faster, solve that problem then.

This isn’t YAGNI, per se, which is a general philosophy that counsels against writing abstractions and other forms of gold plating because you think that you’ll be glad you did later when they’re needed. What I’m talking about here is more on par with the philosophy that drives TDD. You can only solve one problem at a time when you get granular enough. So pick a problem and solve it while not causing regressions. Once it’s solved, move on to the next. Keep doing this until the software satisfies all current requirements. If a new one comes up later, address it the same as all previous ones — one at a time, as needed. At any given time, all problems related to the code base are either problems that you’ve already solved or problems on a todo list for prioritization and execution. There’s nothing wrong with you or the code if the software doesn’t address X; it simply has yet to be enough of a priority for you to do it. You’ll get to it later and do it well.
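One way to picture that one-problem-at-a-time rhythm is a function that grows with its tests. This is a hypothetical sketch (the `parse_price` function and its requirements are invented for illustration): each test corresponds to a requirement that was addressed only when it actually came up, with the earlier tests guarding against regressions.

```python
import unittest


def parse_price(text):
    """Grew one requirement at a time: first plain numbers, then a
    leading currency symbol, then thousands separators.  Each behavior
    was added only when a failing test demanded it."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)


class ParsePriceTest(unittest.TestCase):
    # Requirement 1: plain numbers (the only problem on day one).
    def test_plain_number(self):
        self.assertEqual(parse_price("19.99"), 19.99)

    # Requirement 2: a user pasted "$19.99" and it broke. Solved then.
    def test_currency_symbol(self):
        self.assertEqual(parse_price("$19.99"), 19.99)

    # Requirement 3: thousands separators showed up later. Solved then.
    def test_thousands_separator(self):
        self.assertEqual(parse_price("$1,299.00"), 1299.0)


if __name__ == "__main__":
    unittest.main()
```

At no point was the function “wrong” for not yet handling commas; that requirement simply hadn’t made it to the top of the list yet.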

There’s a motivational expression that comes to mind about a journey of a thousand miles beginning with a single step (though I’m really more of a despair.com guy, myself). There’s no real advantage in standing still and thinking about how many millions of steps you’re going to need to take. Pick a comfortable pair of shoes, grab some provisions, and go. As long as you pride yourself in the ability to make sure your next step is a good one, you’ll get to where you need to be sooner or later.

  • Steven Hunt

    Thanks for this post! I’ve been struggling with this sort of problem in my own development recently; I’m working with some people on a .com site that will need to scale massively, so I’ve been driving myself a bit crazy trying to “do it right the first time”. I’m going to implement your advice and see how it goes.

    • Hopefully it works out for you! For me, the key has always been making sure that, at any given point, I was keeping the code flexible and my options open. Everything I’m saying here starts to fall apart a bit when you paint yourself into corners. One big help for me in avoiding this problem has been writing very testable code.

  • Raghave

    Hi Eugene,

    I agree with your post, but only to some extent.

    I feel (especially in the software community) that software development is a balancing act. Most of the time is lost in identifying the correct design to proceed with: a core idea, a central thought that gives an overall direction. But sometimes people get carried away, and instead of identifying a core idea they prefer identifying the entire idea, and thus paralysis by analysis occurs.

    There are also examples that run contrary to the topic. For instance, one of the companies I worked with followed agile from scratch, which follows nearly the same philosophy of quick break-and-fix cycles for improving applications. The various modules, submodules, and parts of the application were so disconnected that, despite the logic not being especially difficult, the application turned out to be a nightmare for new joiners. The design was ridiculous; rather, no time had been spent on overall app design at all. The moment of truth came when we were required to move the application to a Mavenized environment, and real havoc was unleashed. Reusability principles were directly violated across the app. The same kinds of operations were performed in hundreds of different ways spread across the application: meaningless classes, too much code, poor organization, too many files, and no design document. You can imagine what the newbies had to face during this transition.

    Giving adequate time to analysis makes developers confident, and they start to understand the outcomes and extension points of the application.

    • I’m not sure whether this is addressed to me or not, but I certainly understand your point. It seems there’s a natural back and forth swing based on what’s scarred you in the past. Go work for a big CMMI/Waterfall shop, and throwing out the boxes full of design docs and getting right to code sounds perfect. Go work somewhere chaotic, and process seems better.

      On the whole, I’m trying to tailor a message more centered around “you don’t have to work out every detail before getting started; just start with the highest priority thing, get going, and keep your options open.” Subtly, I’d contrast this with “just dive in and do whatever.” The kind of analysis and planning that you mention I feel to be critical, particularly having spent good chunks of time in an “architect” role. It’s basically what makes the sequence of steps that I mentioned take you to your destination instead of some random place. By all means, figure out where you’re going and lay out a plan to get there, but don’t sweat that the first step isn’t exactly the right amount between north and west to minimize the number of steps you’ll take.

      In tech terms, have an architectural framework and a scheme for dividing labor and managing complexity (big picture) but don’t sweat the details up front.

      • Raghave

        Absolutely right. Good article. Sorry for addressing this article to Eugene; I think it was referenced through Eugene’s Google Hangout blog. I did have the feeling of questioning myself, “Is application design over-hyped?”, which is another related topic worth some thought. The reality is that when you fall into the situation, you understand why it is important. The problem is that we have read and studied any number of books and articles on good design and good programming without being adequately exposed to the pitfalls of bad design or bad programming, which can be an outcome of simply starting off. I will share a very recent example, where I had to redesign three times just because I did not analyze enough in the beginning. iText does not provide any REST API, and we were required to convert from JSON to PDF. We found that the task was horrendous. We had to change strategy and convert from XML to PDF instead. iText provides some support for conversion from HTML to PDF, but it’s not foolproof. We thought of creating our own XML parser that would parse custom-designed XML to PDF. Not much design thought was given initially, and we started off on the conversion only to find that delegating all iText features through our own API would be very difficult; for example, creating a delegate method call for every piece of iText functionality would be a hell of a task. We were simply creating 60-70 methods in every class. We understood we were heading for a maintenance nightmare. We had to come back to the drawing board and redesign. This time we introduced some necessary design patterns (the composite pattern combined with the strategy pattern, plus some builder pattern, and Spring to take care of singletons), and all this led to highly reusable code; we reduced the API size by almost 80%. It’s understandable and easily extendable. A simple change in one or two files now allows us to add a new iText feature to our API. The purpose is to share a different insight about analysis. That is why I seriously feel this is a balancing act. (Raghave)

        • Your anecdote reminds me of something I didn’t mention at all here in discussing up front design versus “emergent” design, which is the value of prototyping. If a project/team has time and management buy-in for it, prototyping can be a great way to expose the issues/mistakes up front and guide the team toward a better design. Reminds me of the saying, “write one to throw away because you’re going to wind up doing it anyway.”

          • Raghave

            🙂 Now I agree.

  • Pingback: The Baeldung Weekly Review 27

  • (What should I say here… I have only one opportunity to make the right decision…)

    Another Victim…

    Great Share 🙂

    • Thanks for the kind words! Glad you liked.

      • Haha… You’ve got me refactoring java chess apps now … 😉

  • Steven

    Lots of eye-opening information here. I find myself at this nexus a lot lately.

    When I didn’t know as much, I would take the “grab a tool and start hammering away” approach. It got things done, and it felt satisfying. Now I tend to sit and stare, imagining all the permutations of problems that can arise if I don’t get things right the first time.

    Wherever you go, it seems programmers with any experience are expected to observe any number of high-level design and architecture principles by fiat. Everything is supremely engineered, decoupled, test automated. Except in reality it’s not. In reality, a lot of development seems to be a combination of putting out fires, discovering problems during development you would never foresee, and leaving what you call “gold plating” to the obvious, structural parts where you know to set up protections ahead of time and it’s easy to do (i.e., not mixing up persistence data with the presentation layer).

    I still tend toward outright perfection (even though it’s not possible) and figure I should at least aim for that as my starting point. After all, why did I read all of these books and watch these videos explaining architecture and principles?

    Early on, I just read a bit and then did some programming (you know, actual problem solving and creating things). There was a clear arrow from what I read to what I did. Now I read a lot… and I’m not sure if I’m ever making use of any of it in the code that I write. I try to make use of it, but it tends to solve problems that probably don’t exist.

    There’s so much conflicting information as to how to do things. Programmers, as a group, want there to be a solution for every problem. In the digital world this is usually quantifiable. If there’s a bug, you can be sure there’s a fix. But in the real world you can’t protect against, or even bug-fix, every single contingency. It’s so hard to know whether you’re wasting time or money (or both) or being proactive and prudent.

    • This strikes me as the Dreyfus skill model of going from advanced beginner to competent. Advanced beginner is the last stage where you fail to grasp how hard things really are and where you stand in the big picture. At competent, you get it, and impostor syndrome kicks in heavily. “How naive I was!”

      I understand, and I wish I had some kind of easy answer for how to walk the line between due diligence and gold plating. I really don’t. There are definitely times in conversation or practice when I do something that feels like a rookie mistake and people look at me as if I’m an idiot.

      I think the insurance against suffering too much in situations like this is to build credibility: in the world, in your circle, in your organization. If you become known as a competent problem solver, then the occasional lapse or oversight is greeted with “everyone’s human” instead of “and you call yourself a programmer!”