Stories about Software


4 Ways Custom Code Metrics Make a Difference

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and give it a try.  If you like static analysis, you’ll find yourself hooked.

One of the things that has surprised me over the years is how infrequently people take advantage of customizable code metrics.  I say this not from the perspective of a geek with an esoteric interest in a subject, wishing other people would share my interest.  Rather, I say this from the perspective of a businessman, making money, and wondering why I seem to have so little competition.

As I’ve mentioned before, a segment of my consulting practice involves strategic code assessments that serve organizations in a number of ways.  When I do this, the absolute most important differentiator is my ability to tailor metrics to the client and specific codebases on the fly.  Anyone can walk in, install a tool, and say, “yep, your cyclomatic complexity in this class is too high, as evidenced by this tool I installed saying ‘your cyclomatic complexity in this class is too high.'”  Not just anyone can come in and identify client-specific idiosyncrasies and back those findings with tangible data.


But if they invested some up-front time in learning how to create custom code metrics, they’d be a lot closer.
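
To give a flavor of what that learning buys you, here’s a minimal sketch of a custom rule written in NDepend’s CQLinq query language.  The threshold is an assumption invented for illustration; the whole point is that you would tune it, and the scope it applies to, for the specific client and codebase in front of you.

    // Illustrative CQLinq sketch: flag methods whose cyclomatic complexity
    // exceeds a threshold chosen for this particular codebase.  The value
    // of 15 is an assumption for the example, not a universal rule.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.CyclomaticComplexity > 15
    orderby m.CyclomaticComplexity descending
    select new { m, m.CyclomaticComplexity }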

Being able to customize code metrics allows you to reason about code quality in very dynamic and targeted terms, and that’s valuable.  But you might think that, unless you want a career in code base assessment, that value doesn’t apply to you.  Let me assure you that it does, albeit not in quite as direct a way as it applies to me.

Custom code metrics can help make your team better, and they can do so in a variety of ways.  Let’s take a look at a few.

Read More


Code Review Beyond Meeting Rooms and Projectors

Editorial Note: I originally wrote this post for the SmartBear blog.  You can check out the original here, at their site.  While you’re over there, have a look at their Collaborator offering.

It must have been a decade ago.  I was sitting in a spaciously appointed conference room around a large, round table, surrounded by fellow software developers from my company.  Coffee and bright lights notwithstanding, we were all struggling to varying degrees to keep alert.

The reason for our struggle was that we were spending the day doing a marathon code review, and we were barely halfway through the morning.  A representative of the team that had written the code was on hand, walking us through it, showing the code against a pull-down screen with a projector.  We were being treated to file after file after file, in alphabetical order, in a text editor.

There were hours of this still to go.  And yet, it was an unavoidable matter of due diligence.  In spite of our wandering attention spans, we believed this, as did our management.  Our company had contracted with a custom software vendor to write an application that our company would take over and maintain.  This firm had scurried off for 9 months or so, and they were now presenting “phase 1” to us.  This was our chance to raise concerns and provide input.  Theoretically, anyway.


As you can imagine, this was not an especially effective way to spend a day (or several, as it would turn out).  We technically saw all of the code that was being delivered, but it wasn’t as though this activity produced a lot of meaningful output.  There were some obvious reasons for this, such as the enormous batch size that comes with reviewing 9 months of code at once and the marathon, all-in nature of the session.  But there were also some issues related to tooling and process that were pretty limiting compared to what we have now.

Ten years have elapsed since that yawn-inducing series of days I spent in review.  And, while you’re still going to be in trouble if you try to meaningfully review 9 months of a team’s code in a few days, tools and workflows have emerged in the intervening years that make code review much easier.  You should be taking advantage of them.

Read More


Undercover Testability Killers

Editorial Note: I originally wrote this post for the Infragistics blog.  You can check out the original here, at their site.  While you’re there, take a look around at the other posts by their contributing authors.

If you were to take a poll of software development shops and ask whether or not they unit test, you’d get varied responses.  Some would heartily say that they do, and some would sheepishly say that they totally mean to get around to that next year and that they’ve totally been looking into it.  In the middle, you’d get a whole lot of responses that amounted to, “it’s complicated.”

In my travels as a consultant, I witness the reason for this firsthand.  The adoption rate of automated testing has increased dramatically in the last decade, and that increased adoption means that a lot of shops are taking the plunge.  And naturally, this means that a lot of shops with a lot of legacy code and awkward constructs in their codebases are taking the plunge, which leads to interesting, complicated results.


It’s Complicated

“It’s complicated” generally involves variants of “we tried but it wasn’t for us” and “we do it when we can, but the switch hasn’t flipped yet.”  And, at the root of all of these variants lies a truth that’s difficult to own up to when talking about your group – “we’re having trouble getting any good at this.”

If this describes you or folks you know, take heart, though.  The “Intro to TDD” and “NUnit 101” guides make it look really, really easy, but those sources of learning usually show you how to write unit tests for things like “in-memory calculator,” intending to simplify the domain and the code so that you understand the mechanics of a unit test.  In doing this, though, they paint a deceptive picture of how easy covering your code with tests should be.
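
For reference, here’s roughly the kind of example those guides lean on: a hypothetical in-memory Calculator class and a single NUnit test against it.  Everything in it is invented for illustration, and it works precisely because the code under test has no dependencies to wrestle with.

    // A tutorial-style NUnit test against a hypothetical in-memory calculator.
    // All names here are invented for illustration.
    using NUnit.Framework;

    public class Calculator
    {
        public int Add(int x, int y) => x + y;
    }

    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void Add_Returns_The_Sum_Of_Its_Operands()
        {
            var calculator = new Calculator();

            int result = calculator.Add(2, 3);

            Assert.AreEqual(5, result);
        }
    }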

If you’ve been writing code for years with nary a thought to testing at the unit level, it’s likely that familiar, comfortable coding practices of yours are proving to be false friends.  In other words, your codebase is probably littered with things that are actively making your life extremely difficult as you try to adopt automated testing.  What follows are some of the most common ones that I see.
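
As a hedged preview of the kind of thing I mean, consider a class that quietly creates its own dependencies and leans on static state.  The names below are invented for illustration, but the shape of the code will probably look familiar.

    // Hypothetical example of a "false friend": code that looks perfectly
    // normal but can't be exercised in a unit test without a real database
    // and a shared logger being available.
    public class Order { public int Id { get; set; } }

    public static class Logger
    {
        public static void Log(string message) { /* writes to a shared log */ }
    }

    public class SqlOrderRepository
    {
        public SqlOrderRepository(string connectionString) { /* opens a real connection */ }
        public void Save(Order order) { /* hits the real database */ }
    }

    public class OrderProcessor
    {
        private readonly SqlOrderRepository _repository;

        public OrderProcessor()
        {
            // Hidden dependency: a test can't substitute a fake repository.
            _repository = new SqlOrderRepository("Server=prod;Database=Orders;");
        }

        public void Process(Order order)
        {
            // Static call and real persistence, exercised on every test run.
            Logger.Log("Processing order " + order.Id);
            _repository.Save(order);
        }
    }

Constructors that do real work and global state are only two of the usual suspects, but they illustrate why code written without testing in mind tends to resist it.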

Read More


Habits that Help Code Quality

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  If you like posts about static analysis, code quality, and the like, check out the rest of the blog.

When I’m called in to do a strategic assessment of a codebase, it’s never the result of everything being awesome.  That is, no one calls me up and says, “we’re ahead of schedule, under budget, and knocking it out of the park, so can you come in and tell us what you think of our code?”  Rather, I get calls when something isn’t going according to plan and the business people involved want to get some insight into what underlying causes there are in the code and in the team’s approach.

When the business gets involved this way, there is invariably a fiscal, operational concern, either overt or lurking just beneath the surface.  I’ll roll this up to the general consideration of “total cost of ownership” for the codebase.  The business is thus asking, “why are things proving more expensive than we thought?”


Typically, I come in, size up the situation, quantify it objectively, and then use analogies and examples to make clear what’s happening.  After I do this, pretty much without exception, the decision-makers to whom I’m speaking want to know what small things they can do, internally, to course correct.  This makes sense when you think about it.  If your doctor told you that your health outlook wasn’t great, you’d cross your fingers and say, “but I can fix it by changing my diet and exercise a little, right?”  You wouldn’t throw yourself on the table and say, “cut me open and make sure whatever you do is expensive!”

I am thus frequently asked, by developers and management alike, “what are the little things we can do to improve and maintain code quality?”  As such, this seems like excellent fodder for a blog post.  Here are my tips, based on years of observation of what correlates with healthy codebases and what correlates with distressed ones.

Read More


Why Automate Code Reviews?

Editorial Note:  I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  This is a new partner for whom I’ve started writing recently.  They offer automated code review and documentation tooling in the .NET space, so if that interests you, I encourage you to take a look.

In the world of programming, 15 years or so of professional experience makes me a grizzled veteran.  That certainly does not hold for the workforce in general, but youth dominates our industry via the absolute explosion of demand for new programmers.  Given the tendency of developers to move around between projects and companies, 15 years have shown me a great deal of variety.


Perhaps nothing has exemplified this variety more than the code review.  I’ve participated in code reviews that were grueling, depressing marathons.  On the flip side, I’ve participated in ones where I learned things that would prove valuable to my career.  And I’ve seen just about everything in between.

Our industry has come to accept that peer review works.  In the book Code Complete, author Steve McConnell cites it, in some circumstances, as the single most effective technique for avoiding defects.  And, of course, it helps with knowledge transfer and learning.  But here’s the rub — implemented poorly, it can also do a lot of harm.

Today, I’d like to make the case for the automated code review.  Let me be clear: I do not view this as a replacement for manual code review, but rather as a supplement and another tool in the tool chest.  I will say, though, that automated code review carries less risk of negative consequences than its manual counterpart.

Read More