DaedTech

Stories about Software

Rewrite or Refactor?

Editorial Note: I originally wrote this post for the NDepend blog.  You can find the original here, at their site.  While you’re there, take a look at some of the other posts and announcements.  

I’ve trod this path before in various incarnations, and I’ll do it again today.  After all, I can think of few topics in software development that draw as much debate as this one.  “We’ve got this app, and we want to know if we should refactor it or rewrite it.”

For what it’s worth, I answer this question for a living.  And I don’t mean that in the general sense that anyone in software must ponder the question.  I mean that CIOs, dev managers and boards of directors literally pay me to help them figure out whether to rewrite, retire, refactor, or rework an application.  I go in, gather evidence, mine the data and state my case about the recommended fate for the app.

Because of this vocation and because of my writing, people often ask my opinion on this topic.  Today, I yet again answer such a question.  “How do I know when to rewrite an app instead of just refactoring it?”  I’ll answer.  Sort of.  But, before I do, let’s briefly revisit some of my past opinions.

Getting the Terminology Right

Right now, you’re envisioning a binary decision about the fate of an application.  It’s old, tired, clunky and perhaps embarrassing.  Should you scrap it, write it off, and start over?  Or should you power through, molding it into something cleaner, more modern, and more adaptable?  Fine.  But, let’s first consider that the latter option does not constitute a “refactoring.”

A while back, I wrote a post called “Refactoring is a Development Technique, Not a Project.”  You can read the argument in its entirety, but I’ll summarize briefly here.  To “refactor” code is to restructure it without altering external behavior.  For instance, to take a large method and extract some of its code into another method.  But when you use “refactor” as an alternative to “rewrite,” you mean something else.
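To picture that narrow, technical sense of the term before we get to the other one, here’s a minimal sketch of an extract method refactoring in C#.  The Order type and the method names are purely illustrative; the point is that callers see exactly the same behavior before and after.

    // Before: one method that validates the order and computes the total.
    public decimal ProcessOrder(Order order)
    {
        if (order == null || order.Items.Count == 0)
            throw new ArgumentException("Order must contain at least one item.");

        decimal total = 0;
        foreach (var item in order.Items)
            total += item.Price * item.Quantity;
        return total;
    }

    // After: the total calculation extracted into its own method.
    // External behavior is identical; only the internal structure changed.
    public decimal ProcessOrder(Order order)
    {
        if (order == null || order.Items.Count == 0)
            throw new ArgumentException("Order must contain at least one item.");

        return CalculateTotal(order);
    }

    private decimal CalculateTotal(Order order)
    {
        decimal total = 0;
        foreach (var item in order.Items)
            total += item.Price * item.Quantity;
        return total;
    }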

Let’s say that you have some kind of creaky old Webforms application with giant hunks of gnarly code and logic binding the GUI right to the database.  Worse yet, you’ve taken a dependency on some defunct payment processing library that prevents you from updating beyond .NET 2.0.  When you look at this and say, “should I refactor or rewrite,” you’re not saying “should I move code around inside this application or rewrite it?”  Rather, you’re saying, “should I give this thing a face lift or rewrite it?”
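If you’ve never had the misfortune, here’s a hypothetical taste of the kind of code-behind I mean.  The control names, the connection string key, and the schema are all invented for illustration; the smell is the GUI event handler reaching straight into the database, string-concatenated SQL and all.

    // Hypothetical Webforms code-behind: the click handler talks
    // directly to the database and binds the raw results to a grid.
    protected void SearchButton_Click(object sender, EventArgs e)
    {
        var connectionString =
            ConfigurationManager.ConnectionStrings["Main"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        {
            // String concatenation invites SQL injection, but that's
            // the least of this method's problems.
            var command = new SqlCommand(
                "SELECT * FROM Orders WHERE CustomerName = '" +
                CustomerNameTextBox.Text + "'", connection);

            connection.Open();
            OrdersGrid.DataSource = command.ExecuteReader();
            OrdersGrid.DataBind();
        }
    }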

So let’s chase some precision in terms here.  Refactoring happens on an ongoing and relatively minor basis.  If you undertake something that constitutes a project, you’re changing the software.  You’re altering the way it interacts with the database, swapping out a dependency, updating your code to a new version of the framework, etc.  So from here forward, let’s call that a reworking of the application.

Read More

How to Deliver Software Projects on Time

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and give it a try.

Someone asked me recently, almost in passing, about the keys to delivering software projects on time.  In this particular instance, it was actually a question of how to deliver .NET projects on time, but I see nothing particularly unique to any one tech stack or ecosystem.  In any case, the question piqued my interest, since I’m more frequently called in as a consultant to address issues of quality and capability than slipped deadlines.

Understanding how to deliver projects on time (or, conversely, the mechanics of failing to deliver on time) requires a quick bit of term deconstruction.  The concept of “on time” consists of two concerns in software parlance: scope and delivery date.  Specifically, for something to be “on time,” there has to be an expectation of what will be delivered and when it will be delivered.

How We Get This Wrong

Given that timeliness of delivery is such an apparently simple concept, we sure do find a lot of ways to get it wrong.  I’m sure that no one reading has to think long and hard to recall a software project that failed to deliver on time.  Slipped deadlines abound in our line of work.

The so-called “waterfall” approach to software delivery has increasingly fallen out of favor of late.  This is a methodology that attempts simultaneously to solve all unknowns through extensive up-front planning and estimation.  “The project will be delivered in exactly 19 months, for 9.4 million dollars, with all of the scope outlined in the requirements documents, and with a minimum standard of quality set forth in the contract.”  This approach runs afoul of a concept sometimes called “the iron triangle of software development,” which holds that the more you fix one concern (scope, cost, delivery date), the more the others will wind up varying, kind of a Heisenberg Uncertainty Principle of software.  The waterfall approach of just planning harder and harder until you get all of them right thus becomes something of a fool’s errand.

Let’s consider the concept of “on time,” then, in a vacuum.  This features only two concerns: scope and delivery date.  Cost (and quality, if we add that to the mix as a possible variant and have an “iron rectangle”) fails to enter into the discussion.  This tends to lead organizations with deep pockets to respond to lateness in a predictable way: by throwing resources at it.  That approach runs afoul of yet another aphorism in software known as “Brooks’ Law”: adding manpower to a late software project makes it later.

If we accept both Brooks’ Law and the Iron Triangle as established wisdom, our prospects for hitting long range dates with any reliability start to seem fairly bleak.  We must do one of two things, with neither one being particularly attractive.  Either we have to plan to dramatically over-spend from day 1 (instead of when the project is already late) or we must pad our delivery date estimate to such an extent that we can realistically hit it (really, just surreptitiously altering delivery instead of cost, but without seeming to).

Read More

Software Architect as a Developer Pension Plan

I’m pretty sure that I’m going to get myself in trouble with this one.  Before I get started and the gnashing of teeth and stamping of feet commence, let me offer an introductory disclaimer here.  What I am about to say offers no commentary on people with the title, “software architect” (a title that I’ve had myself, by the way).  Rather, I offer commentary on the absurd state of software development in the corporate world.

The title “software architect” is silly (mostly because of the parallel to building construction) and the role shouldn’t exist.  Most of the people that hold this title, on the other hand, are smart, competent folks that know how to produce software and have the battle scars to prove it.  We’ve arrived at this paradoxical state of affairs because of two essential truths about the world: the corporation hasn’t changed much in the last century and we software developers have done an utterly terrible job capitalizing on the death grip we have on the world’s economy.

A Question of Dignity

I’m not going to offer thoughts on how to correct that here.  I’m doing that in my upcoming book.  Today, I’m going to answer a question I heard posed to the Freelancer’s Show Podcast.  Paraphrased from memory, the question was as follows.

I work for a small web development firm.  I was in a meeting where a guy said that he’d worked for major players in Silicon Valley.  He then said that what web and mobile engineers offer is a commodity service and that he wanted us to serve as architects, leaving the less-skilled work to be done by offshore firms.  How does one deal with this attitude?  It’s a frustrating and demeaning debate to have with clients.

This question features a lot that we could unpack.  But I want to zero in on the idea of breaking software work into two categories: skilled work and unskilled work.  This inherently quixotic concept has mesmerized business people into poor software decisions for decades.  And it shows no signs of letting up.

Against this backdrop, the “major player” guy’s attitude makes sense.  Like the overwhelming majority of the business world, he believes the canard about dividing work this way.  His view of the unskilled part as a commodity that can be done offshore smacks of business wisdom.  Save the higher-waged, smart people for the smart people work, and pay cheap dullards to do the brainless aspects of software development.

Of course, the podcast listener objects.  He objects to the notion that part of what he does fits into the “cheap commodity” category.  It “demeans” him and his craft.  He understands the complexities of building sites and apps, but his client views these things as simple and best delegated to unskilled grunts.

Why the Obsession with Splitting Software Work?

It bears asking why this thinking seems so persistent in the business world.  And at the risk of oversimplifying for the sake of a relatively compact blog post, I’ll sum it up with a name: Taylor.  Frederick Taylor advanced something simultaneously groundbreaking and mildly repulsive called Scientific Management.  In short, he applied scientific method principles to the workplace of the early 1900s in order to realize efficiency gains.

At first, this sounds like the Lean Startup.  It sounds even better when you factor in that Taylor favored more humanizing methods to get better work out of people than whacking them and demanding that they work harder.  But then you factor in Taylor’s view of the line level worker and you can see the repulsive side.

The labor should include rest breaks so that the worker has time to recover from fatigue. Now one of the very first requirements for a man who is fit to handle pig iron as a regular occupation is that he shall be so stupid and so phlegmatic that he more nearly resembles in his mental make-up the ox than any other type. The man who is mentally alert and intelligent is for this very reason entirely unsuited to what would, for him, be the grinding monotony of work of this character. Therefore the workman who is best suited to handling pig iron is unable to understand the real science of doing this class of work.

Basically, you can split industry into two camps of people: managers who think and imbeciles who labor.  Against this backdrop, the humanizing angle becomes… actually sorta dehumanizing.  Taylor doesn’t object to whipping the grunts like horses because it’s dehumanizing; he objects because it’s not effective.  Better ways exist to coax performance out of the beasts.  Feed them carrots instead of hitting them with sticks.

Depressingly, the enterprise of today looks a lot like the enterprise of 100 years ago: efficiency-obsessed and convinced that middle management exists to assemble humans into bio-machines that needn’t think for themselves.  Never mind that this made sense for assembling cars and manufacturing textiles, but not so much for knowledge work.  Like proverbial cargo cultists, modern corporations are still out there waving sticks around in the air and hoping food will drop out of the sky.

Read More

Defining Developer Collaboration

Editorial Note: I originally wrote this post for the SmartBear blog.  Check out the original here, at their site.  While you’re there, take a look around at the pieces written by other authors as well.

A certain ideal of rugged individualism has always permeated programmer culture, at least in some circles. It’s easy enough for writing code to be a highly solitary activity. There’s a clear set of rules, output is a function of input, feedback is automated, and the profession tends to attract night owl introverts in many cases.

The result is a history punctuated by tales of late night, solo efforts where someone stares at a computer until overcome by sleep. Obsessed, brilliant hackers expend inconceivable efforts to bring amazing open source tools or useful libraries into existence. It squares with movies, popular culture, and our own knowledge of our profession’s history.

And yet, how many significant pieces of software are developed this way anymore? This ideal was born into a world where complex software might have required 10 thousand lines of code, but does it carry forward in a world where 10 million lines are common? Even projects started by individuals on a mission tend to wind up on Github and to gather contributors like a snowball rolling down a mountain.

Today’s technical landscape is a highly complex, highly specialized, and highly social one. In a world where heroic individual contributions are increasingly improbable, collaboration becomes the name of the game. Like it or not, you need other people to get much done because the work and the breadth of knowledge tends to be too much for any one person.

Defining Collaboration for Developers

I’ll spare you the obligatory, “Webster’s Dictionary defines collaboration as…” It means what you think it means: people working together. I’d like, instead, to establish some axiomatic ground rules in applying the term to the world of software development.

The first requirement is, of course, that you need to have more than one human being involved to consider the effort to be collaboration. The mechanisms, protocols and roles can vary widely, but table stakes require more than one person.

The second requirement is that the participants be working toward a shared goal. Without this, a bunch of people in a coffee shop could be said to be ‘collaborating’ simply by virtue of colocation and the occasional, polite interaction. Collaboration requires multiple people and it requires them to be united in purpose for the activity in question.

A final and developer-oriented requirement is that the participants must be working on the same work product (i.e. codebase). Again, consider a counter-example in which iOS and Android developers could be said to be ‘collaborating’ since there are more than one of them and they’re united in the purpose of improving users’ smartphone experience. Their shared purpose has to bring them together to work on the same actual thing.

Really, though, that’s it. More than one person, united in purpose and working on the same thing, are necessarily collaborating. Whether they do it well or not is another matter.

Read More

How to Get Developers to Adopt a Coding Standard

Editorial Note: I originally wrote this post for the NDepend blog. Head over to their site and check out the original, here.  While you’re there, have a look around at the other posts; if you enjoy static analysis, you’ll enjoy what’s there.

If you’re a manager, there’s a decent chance that the subject of coding standards makes you want to bang your head against a wall repeatedly.  If you’re a developer, you can probably understand why this is true of your manager, if you think about it.

The group coding standard has the following properties.

  • It’s extremely technical and granular.
  • Developers tend to squabble about it.
  • If it produces any business value, it does so obliquely and theoretically, from management’s perspective.

Thus, if I’m a manager, there’s this inscrutable document out there that causes my team to argue, and whose existence I must take on faith to be a good thing.  As you can see, it doesn’t start out on my good side.

Does It Matter?

There are all manner of opinions and arguments to be found on the internet as to why a coding standard matters (or doesn’t).  I’ve written about the subject myself from time to time.  So, without belaboring the point, I’ll make the quick business case for it.

The purpose of a coding standard, at its core, is twofold:

  1. It promotes maintainability by making all code look similar and familiar to all developers.
  2. It standardizes approaches that promote correct code (and avoid incorrect code); see the example just after this list.
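To give that second point some teeth, here’s a hypothetical rule you might find in such a standard, sketched in C#: always wrap anything IDisposable in a using block.  The file name and the WriteReport method are made up for illustration.

    // Non-compliant: if WriteReport throws, the file handle leaks.
    var writer = new StreamWriter("report.txt");
    WriteReport(writer);
    writer.Close();

    // Compliant: the writer gets disposed even when an exception occurs.
    using (var report = new StreamWriter("report.txt"))
    {
        WriteReport(report);
    }

A rule like that promotes correct code mechanically, without requiring every developer to remember the failure mode on every call site.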

Thus, on the whole, the coding standard makes the total cost of ownership of the codebase decrease by some amount ranging between 0 and “indeterminate.”  As an outside party, you simply have to take on faith that there is an ROI and that the amount of time wasted in arguments, complaining, and compliance is offset by the reduced troubleshooting and onboarding times.

And, in my experience, it generally is.  Notwithstanding the pouting and bickering that happen early in the project and make life unpleasant, it’s better to have a standard than not to have one.  The key, then, becomes minimizing the cost of having the standard.  And you do this by securing compliance with the least amount of heartburn for all involved.

So how do you do that?  How do you convince the developers to buy in with a minimum of resistance and friction?

Read More