Stories about Software


What Does Code Quality Really Mean?

Editorial Note: I originally wrote this post for the SmartBear blog.  You can check out the original here, at their site.  While you’re there, have a look around and check out Collaborator, the code review tool.

Let’s say that you wanted a definitive explanation, once and for all, as to what constitutes code quality.  You might take to Google and type “definition of code quality,” which would yield a post from this very blog as well as a sampling of Q&A sites.  For the purposes of this post, however, I’d like to examine the entry that occurs here, at Programmers Stack Exchange.  I think it may be the perfect microcosm for this discussion.

The Q&A Site Consensus

The question is simple enough: “what does it mean to write good code?”  The answer receiving the most votes is one in which the respondent draws a parallel to top-notch pool players, who are so good that they set up their next shot as an easy one, even while making the current shot. High-quality code “looks like it was easy and straightforward to do.”  This is a classic example of “I can’t define it, but I know it when I see it.”  You can recognize code quality because it “looks easy.”


Next up in vote total was a response that cited a popular, somewhat crass cartoon.  The gist of it is that code quality is inversely proportional to the number of perturbed utterances per minute on the part of people reading the code.  High quality code is code that triggers this response infrequently, whereas low quality code triggers this response vehemently and regularly.  Like the answer preceding it, this is not a definition but a recognition heuristic.  But with this response, the respondent clarifies that such a definition may actually be impossible, and heuristics are the best approximation.

From there, the answers get more specific and acquire fewer votes each.  One respondent offers “how fast can you understand the code” as the defining criterion for high-quality code.  It is easy to imagine push-back by someone pointing out that “while(true) { }” is trivial to understand, but the resultant application that simply hangs is probably not the product of high-quality code.  For the most part, the rest of the answers list properties of good code in an attempt to provide the requested definition.  “Good code is bug-free, reusable, well-documented, easy to change, etc.”  None of these garnered many votes, but each did receive a few.
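To make that push-back concrete, here is a hypothetical snippet of my own (not from the Stack Exchange thread): it scores perfectly on “how fast can you understand it,” yet nobody would call it high-quality code.

// Hypothetical illustration, not from the original thread: instantly
// understandable, and yet any application that calls it hangs forever.
public static class BusyWait
{
    public static void WaitForWork()
    {
        while (true) { }   // perfectly readable, perfectly useless
    }
}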

So what was the verdict?  There was no accepted answer and the moderators closed the question as “not constructive.”  The reason cited was, “this question will likely solicit debate, arguments, polling, or extended discussion,” thus rendering the question not a fit for Stack Exchange’s paradigm of being a Q&A site with discrete and concrete answers.  This is not an indictment of the question, per se, but rather a determination that it’s inherently subjective.

Read More


The Human Cost of Tech Debt

Editorial Note: I originally wrote this post for the Infragistics blog.  Head over to their site and check out the original.  While you’re there, have a look at the other blog authors and their product offering.

If you’re not already familiar with the concept of technical debt, it’s worth becoming familiar with it.  I say this not only because it is a common industry term, but because it is an important concept.

Coined by Ward Cunningham, the term introduces the idea that taking shortcuts in your software today not only means paying the price eventually — it means paying that price with interest.  In other words, introducing that global variable today and saving half a day’s work ahead of shipping means that you’re going to pay for it with more than half a day’s labor down the line.
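As an entirely hypothetical sketch of what that shortcut might look like, consider a mutable global that saves a few hours before a ship date and then charges interest on every feature that touches it afterward.

// Hypothetical example of the shortcut: a mutable global is the fastest
// way to get customer context where it is needed before the deadline.
public static class AppGlobals
{
    public static string CurrentCustomerId;
}

public class InvoiceService
{
    public void SendInvoice()
    {
        // And this is the interest: months later, nobody can say which of
        // the many call sites last set this value, or whether it is still
        // valid here, so every nearby change costs extra investigation.
        var customerId = AppGlobals.CurrentCustomerId;
        // build and send the invoice for customerId (omitted)
    }
}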


The Power of the Metaphor

I’ve spent significant time doing IT management consulting in recent years, after spending years and years writing code.  And I can tell you that this metaphor for shortcuts in the codebase is a powerful one when it comes to communication between the business and the software development group.  When you explain software decisions to managers and MBAs using the language of economics, they get it.

Because of the metaphor’s power and subsequent effectiveness for business concerns, it is often used to describe the health of projects, particularly vis-à-vis deadlines and milestones.  The developers may communicate that an aggressive deadline will result in technical debt, making features in the future take longer to ship.  Analysts and project managers might account for technical debt when discussing slipped deadlines.  IT upper management might ask for an assessment of the amount of technical debt in an application when making a strategic replace/retire/rewrite decision.

For most people, the problem of technical debt is, in essence, a problem of time to market.

But I’d like to talk today about the human side of the problem.  And, make no mistake — in business, all human problems are also business problems, viewed with a wide enough lens.  Unhappy humans are unhappy workers, and unhappy workers are less productive.  Yet, this angle of technical debt is seldom discussed, in my experience.

Read More


Surviving The Dreaded Company Framework

This week, I’m making it two in a row with a reader question.  Today’s question is about an internal company framework and how to survive it.  Well, it’s actually about how to work effectively with it, but the options there are sort of limited to “get in there and fix it yourself” and “other” when the first option is not valid (which it often is not).

How does one work effectively with a medium to large sized company-internal framework? The framework has one or two killer features, but overall is poorly abstracted and [released] with little QA.

I think the best way for me to tackle this question is first to complain a little about Comcast, my erstwhile cable company, and then come around to the point.  They recently explained, in response to one of my occasional Comcast rage-tweets, “[The] promotional pricing is intended to offer you the services at a reduced rate, in the hopes that you enjoy them enough to keep them.”

This is, in a nutshell, the Comcast business model — it was an honest and forthright tweet (unlike the nature of the business model itself).  What Comcast does is reminiscent of what grocery stores do when they flood the local shopping magazines with coupons: differential pricing.  Differential pricing is a tactic wherein you charge different customers different rates for the same product, generally charging more to the customers who are less price-sensitive.

The trouble is that outright differential pricing is usually illegal, so companies have to get creative.  With the grocery store, you can pay full price if you’re generally too busy for coupons, but if you load up with serious couponing, you can get that same grocery trip for half the price or less.  They can’t say, “you look rich so that’ll be $10.00 instead of $5.00” but they can make the thing $10.00 and serially discount it to $5.00 for people willing to jump through hoops.

Comcast does this same thing, but in a sneakier way.  They advertise “promotions” and “bundles” to such an extent that these appear to be the normal prices for things, which encourages people to sign up.  Then, after some time you’ll never keep track of and never be reminded of, the “regular” price kicks in.  For me, recently, this turned out to be the comical price of $139 per month for 24 Mbps of internet.


When you call them to ask why your most recent bill was for $10,928 instead of $59.99, they’ll say “oh noes, too bad, it looks like your bundle expired!”  And this is where things take a turn for the farcical.  You can ask them for another bundle, and they’ll offer to knock your monthly bill down to $10,925.  If you want to secure the real savings, you have to pretend for a while to be canceling your service, get transferred over to the “retentions department,” and then and only then will they offer to return your service to a price that isn’t absolutely insane.  I suspect that Comcast makes a lot of hay on the month or two that you get billed before you call up and do that again, because the ‘normal’ prices are equal parts prohibitive and laughable.

Why am I mentioning all this?  Well, it’s because when the time came for my most recent annual Comcast gouge ‘n’ threaten, things got a little philosophical.  I wound up on the phone with an agent to whom I confessed I was sick of this stupid charade.  Instead of arguing with me, he said something along the lines of, “yeah, it’s pretty ridiculous.  Before I started working here, I used to hate calling up and threatening them every year, but the thing is, all of the other companies do it too.”  This was either a guy being refreshingly honest, or a really shrewd customer service tactic or, perhaps, both.

But the interesting message came through loud and clear.  In my area, if you want TV and internet, it’s Comcast or it’s AT&T.  And if both of them behave this way, it goes to show you the power of a monopoly (or a nominally competing cartel).  Their motto is, in essence, “Comcast: it’s not like you have a choice.”

Read More


Please Stop “Geeking Out”

Mostly, I try to stay away from semantic quibbling, and I almost always try to stay away from anything sensational, so please forgive me in advance.  I’m taking a hard line of sorts with a blog post entitled “stop geeking out,” and I’m somewhat doing so for effect.  But this is coming from a place of earnestness.

I don’t really care too much about the term “geek” in and of itself; it’s the concept of “geeking out” that frustrates me.  And it truly is the concept — I’m not interested in term policing.  It’s not like I’d blow a gasket and be insufferable toward someone saying this in front of me; rather, hearing it inspires a sort of vague sadness in me, upon which I wouldn’t bother to comment.

But before I get to the reasons for my objection and my sadness, let’s take a look at the origins of the term “geek” as we know it today, which trace back to the idea of a “geek show.”

The Online Etymology Dictionary gives the following for “geek”: “sideshow freak,” 1916, U.S. carnival and circus slang, perhaps a variant of geck “a fool, dupe, simpleton” (1510s), apparently from Low Ger. geck, from an imitative verb found in North Sea Germanic and Scandinavian meaning “to croak, cackle,” and also “to mock, cheat.” The modern form and the popular use with reference to circus sideshow “wild men” is from 1946, in William Lindsay Gresham‘s novel Nightmare Alley (made into a film in 1947 starring Tyrone Power).

The billed performer’s act consisted of a single geek, who stood in center ring to chase live chickens. It ended with the performer biting the chickens’ heads off and swallowing them. The geek shows were often used as openers for what are commonly known as freak shows. It was a matter of pride among circus and carnival professionals not to have traveled with a troupe that included geeks. Geeks were sometimes alcoholics or drug addicts, and could be paid with liquor – especially during Prohibition – or with narcotics.

Okay, so quick recap.  Before the term meant “technology enthusiast” it meant, “idiot substance-abuser that earns a living performing unspeakable acts to amuse mobs.”  That’s quite the transition!


The Historical Connection

Let’s examine that transition a bit.  When I was a kid in the 1980s, I remember “geek” being an insult, but it was sort of interchangeable among a bevy of insulting synonyms: dork, dweeb, nerd, etc.  You can go looking for meaning in a taxonomy if you’re so inclined, but I don’t remember much distinction.

But, as I’d learn later, “geek” had a special and unique flavor of implication.  It conveyed obsession and interest in technology.  Interesting.  But how do you get from “slow-witted alcoholic that will eat chicken heads for free booze” to “guy that really, really likes computers?”

Read More


That Code’s Not Dead — It Went To a Farm Upstate… And You’re Paying For It

Editorial Note: I originally wrote this post for the NDepend blog.  Head over there and check out the original, if you’re so inclined.  I encourage you to go give the NDepend blog a read, in general.

When it comes to pets, there’s a heartbreaking lie that parents often tell little children when they believe that those children are not yet ready to wrap their heads around the concept of death.  “Rex went to a nice farm in the countryside where he can run and play with all of the other animals all day!”  In this fantasy, Rex the dog isn’t dead — he lives on in perpetuity.


Memoirs of a Dead Method

In the source code of an application, you can witness a similar lie, but in the other direction.  Code lives on indefinitely, actively participating in the fate of an application, and yet we call it “dead.”  I know this because I’ve lived it.  Let me explain.

You see, I’m a method in a codebase — probably one that would be familiar to you.  My name is GetCustomerById(int id) and I hail from a class called CustomerDaoMySqlImpl that implements the interface ICustomerDao.

I was born into this world during a time of both promise and tumult — a time when the application architects were not sure whether the application would be using SQL Server or MySQL.  To hedge their bets, they mandated data access interfaces and had developers do a bit of prototyping with both tools.  And so I came into this world, my destiny taking a single integer and using MySQL to turn that integer into a customer.
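For readers who like to see the shape of things, here is a minimal sketch of the arrangement the narrator describes; the Customer type, table name, and column names are my assumptions for illustration, not details from the original post.

using MySql.Data.MySqlClient;  // assumed MySQL ADO.NET driver

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The data access interface the architects mandated as a hedge.
public interface ICustomerDao
{
    Customer GetCustomerById(int id);
}

// The narrator: small, focused, and eventually reachable only from unit tests.
public class CustomerDaoMySqlImpl : ICustomerDao
{
    private readonly string _connectionString;

    public CustomerDaoMySqlImpl(string connectionString)
    {
        _connectionString = connectionString;
    }

    public Customer GetCustomerById(int id)
    {
        using (var connection = new MySqlConnection(_connectionString))
        using (var command = new MySqlCommand(
            "SELECT id, name FROM customers WHERE id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", id);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // Gracefully handle the unlikely event that no such customer exists.
                if (!reader.Read())
                    return null;

                return new Customer
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                };
            }
        }
    }
}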

I was well suited to this task.  My code was small, focused, and compact, and I performed ably, even to the point of gracefully handling exceptions in the unlikely event that such would occur.  In the early days, life was good.  I fetched customers on development machines from unit tests and from application code, and I starred for a time on the staging server.

But then my life was cut tragically short — I was ‘killed.’  The application architects proclaimed that, from this day forward, SQL Server was the database of choice for the team.  Of course, neither my parent class nor any of the methods in it were actually removed from the codebase.  We were left hanging around, “just in case,” but still, we were dead.  CustomerDaoMySqlImpl was instantiated only in the unit test suite and never in the application source code.  We would never shine in staging again, let alone production.  My days of gamely turning integers into customers with the help of a MySQL driver were over.

Read More