Stories about Software


Rewrite or Refactor?

Editorial Note: I originally wrote this post for the NDepend blog.  You can find the original here, at their site.  While you’re there, take a look at some of the other posts and announcements.  

I’ve trod this path before in various incarnations, and I’ll do it again today.  After all, I can think of few topics in software development that draw as much debate as this one.  “We’ve got this app, and we want to know if we should refactor it or rewrite it.”

For what it’s worth, I answer this question for a living.  And I don’t mean that in the general sense that anyone in software must ponder the question.  I mean that CIOs, dev managers and boards of directors literally pay me to help them figure out whether to rewrite, retire, refactor, or rework an application.  I go in, gather evidence, mine the data and state my case about the recommended fate for the app.


Because of this vocation and because of my writing, people often ask my opinion on this topic.  Today, I yet again answer such a question.  “How do I know when to rewrite an app instead of just refactoring it?”  I’ll answer.  Sort of.  But, before I do, let’s briefly revisit some of my past opinions.

Getting the Terminology Right

Right now, you’re envisioning a binary decision about the fate of an application.  It’s old, tired, clunky, and perhaps embarrassing.  Should you scrap it, write it off, and start over?  Or should you power through, molding it into something cleaner, more modern, and more adaptable?  Fine.  But let’s first consider that the latter option does not constitute a “refactoring.”

A while back, I wrote a post called “Refactoring is a Development Technique, Not a Project.”  You can read the argument in its entirety, but I’ll summarize briefly here.  To “refactor” code is to restructure it without altering external behavior.  For instance, to take a large method and extract some of its code into another method.  But when you use “refactor” as an alternative to “rewrite,” you mean something else.
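To make the distinction concrete, here is what refactoring in the strict sense looks like.  This is a toy sketch in Python with invented names (not from any particular codebase): an extract-method refactoring that restructures the code without altering its external behavior.

```python
# Before: one function doing validation and formatting inline.
def register_user(name, email):
    if not name or "@" not in email:
        raise ValueError("invalid user data")
    return f"{name.strip().title()} <{email.lower()}>"


# After: the validation logic extracted into its own function.
# External behavior is identical -- same inputs, same outputs, same errors.
def validate(name, email):
    if not name or "@" not in email:
        raise ValueError("invalid user data")


def register_user_refactored(name, email):
    validate(name, email)
    return f"{name.strip().title()} <{email.lower()}>"
```

Both versions behave identically from the caller’s perspective; only the internal structure changed.  That narrow, behavior-preserving operation is what “refactor” actually means.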

Let’s say that you have some kind of creaky old Webforms application with giant hunks of gnarly code and logic binding the GUI right to the database.  Worse yet, you’ve taken a dependency on some defunct payment processing library that prevents you from updating beyond .NET 2.0.  When you look at this and say, “should I refactor or rewrite,” you’re not saying “should I move code around inside this application or rewrite it?”  Rather, you’re saying, “should I give this thing a face lift or rewrite it?”

So let’s chase some precision in terms here.  Refactoring happens on an ongoing and relatively minor basis.  If you undertake something that constitutes a project, you’re changing the software.  You’re altering the way it interacts with the database, swapping out a dependency, updating your code to a new version of the framework, etc.  So from here forward, let’s call that a reworking of the application.

Read More


The Journeyman Idealist: Architect of Programmer Paycuts

A couple of months ago, I mentioned that I’d be featuring more cross posts so that I could concentrate on my book.  I’ve lived up to that, mixing in the occasional answer to a reader question with posts I’ve written for other sites.  I haven’t queued up a good old fashioned rant in a while, but I think it might be time.

I want to start talking about topics from the book, and this particular topic, the “journeyman idealist” has relevance to a number of different, random conversations I’ve heard of late.  Don’t worry if you don’t know what “journeyman idealist” means — you shouldn’t because I made that up while writing my book.  And I’ll get to that and to our self-defeating pay tendencies a bit later.

Hourly Billing

Recently, I have consumed a great deal of content related to freelancing, consulting, and billing models.

As I fall further into this rabbit hole, I become increasingly convinced that billing by the hour for knowledge work is a pile of fail.  Jonathan Stark of “Ditching Hourly” makes the case more eloquently in this episode, but I’ll offer a tl;dr.

Let’s say that a prospective client comes to you and says, “I want you to build me a website.”  Great!  Let’s do some business!


Hourly Billing as a Zero Sum Game

At this point, you begin to think in terms of cost and how high you can go above it.  For the purpose of your business, this means “what is the minimum amount for which I will do this project?”  The client begins to think in terms of value and how far they can go below it.  For them, this means “what is the maximum amount I can pay and still profit?”  Perhaps you won’t build the site in question for less than $10,000 and the client needs the figure to be less than $100,000 for the venture to bring a profit.  Thus if you agree on a price between $10,000 and $100,000, you both benefit, though the amount of the benefit will slide one way or the other, depending on how close to each end point you settle.

If you were selling websites as commodities, you’d haggle, then settle on a price, as with a used car.  But building custom websites by the hour differs substantially.  In that world, you strike a deal without agreeing on a price.  You just both hope that when the dust settles, the price tag falls in the range of mutual profit, and no lawsuits commence.  But within that range, each party hopes for a different end of the spectrum.  And what’s more, neither party knows the other’s figure.  You know only that you need more than $10K, and the client knows only that it needs less than $100K.

As the website provider, you want the project to take as long as possible.  It needs to go sailing past $10K, and hopefully as close to the client’s upper bound as possible.  The less efficiently you work — the more hours it takes to build the site — the better your financial outlook.

Read More


With or Without the US, The Future of Tech is Globalism

I spent most of August, September, and October on the road for work.  I then capped that with a celebratory vacation week in Panama, exploring cities, beaches and jungles.  As luck would have it, this also allowed me to miss the acrimony and chaos of the national US elections.

Earlier this week, I returned to a country in which Donald Trump had pulled off a surprising upset, causing the world to scramble to adjust its mental model of the coming four years.  The night of the election alone, markets plummeted and then subsequently rallied.  In the time since, people all over the world have furiously tried to make sense of what the development means for them.

Quick Disclaimer

I personally find partisan politics (at least in the US — I can’t speak as well for other countries) to resemble rooting for sports teams.  Americans decide, usually based on their parents’ loyalties, to root for The Republicans or The Democrats, and they get pretty upset when their team loses and the other team wins, à la fans of the Boston Red Sox and the New York Yankees.  Think of partisan US politics as baseball, except the winner of the World Series gets to declare wars and approve federal budgets.


So as an entrepreneur and someone with a readership of unknowable team loyalty distribution, it behooves me not to choose sides, notwithstanding my own political beliefs (though, for the record, I don’t view politics as a spectator sport and so I genuinely have no home team loyalty).  I try to remain publicly, politically neutral.  And I will do my best to do so in this post, even as I talk about a theme heavily informed by US politics.

The Beginning of a Tech Dispersion

Specifically, I want to talk today about what this election means for the future of tech.  As a free agent and entrepreneur, I monitor relevant events more closely than most, looking for opportunities and warning signs.  And I think this unexpected outcome of the US election presents both opportunities and warning signs for software developers and technologists.

I believe the US has charted a course away from its status as a global technology leader and that the next decade will reveal opportunities for other countries to fill any resultant void.  The world constantly looks for “the next Silicon Valley.”  It should start looking for this in other countries.

I’m going to lay out in this post why I think this, and I’m going to do it without editorializing or passing value judgments (or I’ll try my best, anyway).  And then I’m going to talk about what I think this means for people who earn a living writing software or making technology.  How do you prepare for and capitalize on a less US-centric techie world?

So, first up, the why.  Why do I say that the US role in global technology will become de-emphasized during a Trump presidency?  Caveat emptor.  I could be totally wrong about all of this, but the plays I suggest are ones I plan to make, so I will put my money where my mouth is.

Read More


Should You Review Requirements and Design Documents?

Editorial Note: I originally wrote this post for the SmartBear blog.  You can check out the original here, at their site.  While you’re there, have a look around at posts and knowledge from other authors.

I remember working in a shop that dealt with medical devices some years back.  I can’t recall whether it was the surrounding regulatory requirements or something about the culture at this place, but there was a rule in place that required peer review of everything.

Naturally, this meant that all code was reviewed, but it went beyond that and extended to any sort of artifact produced by the IT organization.  And, since this was a waterfall shop, that meant review (and audit-trails of approval) of the output of the requirements phase and the design phase, which meant requirements and design documents, respectively.  I can thus recall protracted meetings where we sat and soberly reviewed dusty documents that made proclamations about what “the system” shall and shan’t do.  We also reviewed elaborate sequence, flow, hierarchy, and state design artifacts that were probably obsolete before we even reviewed them.

If I make this activity sound underwhelming in its value, that’s because I routinely felt underwhelmed by its value.  It was a classic case of process over common sense, of ceremony over pragmatism.  Everyone’s attention wandered, knowing that all bets would be off once the development started.  Sign-offs were a formality — half-hearted and cursory.

But is it worth throwing the baby out with the bathwater?  Should the fact that waterfall shops waste time on and around these documents stop you from producing them and subsequently reviewing them?  Is it worth reviewing requirements and design documents?


Read More


How to Deliver Software Projects on Time

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and give it a try.

Someone asked me recently, almost in passing, about the keys to delivering software projects on time.  In this particular instance, it was actually a question of how to deliver .NET projects on time, but I see nothing particularly unique to any one tech stack or ecosystem.  In any case, the question piqued my interest, since I’m more frequently called in as a consultant to address issues of quality and capability than slipped deadlines.

To understand how to deliver projects on time (or, conversely, the mechanics of failing to deliver on time) requires a quick bit of term deconstruction.  The concept of “on time” consists of two concerns in software parlance: scope and delivery date.  Specifically, for something to be “on time,” there has to be an expectation of what will be delivered and when it will be delivered.


How We Get This Wrong

Given that timeliness of delivery is such an apparently simple concept, we sure do find a lot of ways to get it wrong.  I’m sure that no one reading has to think long and hard to recall a software project that failed to deliver on time.  Slipped deadlines abound in our line of work.

The so-called “waterfall” approach to software delivery has increasingly fallen out of favor of late.  This is a methodology that attempts to solve all unknowns simultaneously through extensive up-front planning and estimation.  “The project will be delivered in exactly 19 months, for 9.4 million dollars, with all of the scope outlined in the requirements documents, and with a minimum standard of quality set forth in the contract.”  This approach runs afoul of a concept sometimes called “the iron triangle of software development,” which holds that the more you fix one concern (scope, cost, delivery date), the more the others will wind up varying — kind of a Heisenberg’s Uncertainty Principle of software.  The waterfall approach of just planning harder and harder until you get all of them right thus becomes something of a fool’s errand.

Let’s consider the concept of “on time” then, in a vacuum.  This features only two concerns: scope and delivery date.  Cost (and quality, if we add that to the mix as a possible variant and have an “iron rectangle”) fails to enter into the discussion.  This tends to lead organizations with deep pockets to respond to lateness in a predictable way — by throwing resources at it.  This approach runs afoul of yet another aphorism in software known as Brooks’ Law: adding manpower to a late software project makes it later.
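One commonly cited mechanism behind Brooks’ Law is communication overhead: a team of n people has n(n-1)/2 pairwise communication channels, so each added developer makes coordination disproportionately more expensive even before you account for ramp-up time.  A quick sketch of that growth:

```python
def channels(team_size):
    """Pairwise communication channels on a team of n people: n(n-1)/2."""
    return team_size * (team_size - 1) // 2


# Doubling a late project's team from 5 to 10 people more than
# quadruples the coordination paths while the new hires ramp up.
for n in (5, 10, 20):
    print(n, channels(n))   # 5 -> 10, 10 -> 45, 20 -> 190
```

The quadratic growth is the point: throwing resources at a late project buys linear capacity at a quadratic coordination cost.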

If we accept both Brooks’ Law and the Iron Triangle as established wisdom, our prospects for hitting long range dates with any reliability start to seem fairly bleak.  We must do one of two things, with neither one being particularly attractive.  Either we have to plan to dramatically over-spend from day 1 (instead of when the project is already late) or we must pad our delivery date estimate to such an extent that we can realistically hit it (really, just surreptitiously altering delivery instead of cost, but without seeming to).

Read More