DaedTech

Stories about Software

What Metrics Should the CIO See?

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, give NDepend a try — download it and see if your code falls in the dreaded Zone of Pain.

I’ve worked in the programming industry long enough to remember a less refined time.  Back then, the CIO (or the CFO, since IT used to report to the CFO in many orgs) may have counted lines of code to measure the productivity of the development team.  Even then, they probably understood the folly of such an approach.  But, lacking better measures, they used it anyway.

Today, you rarely, if ever, see that happen.  But don’t take that to mean reductionist measures have stopped.  Rather, they have just evolved.

Most commonly today, I see this crop up in the form of automated unit test coverage.  A CIO or high-level manager becomes aware of general quality and cadence problems with the software.  She may consult with someone or read a study and conclude that a robust, automated test suite will cure what ails her.  She then announces the initiative and rolls it out.  Then, she does the logical thing and instruments her team’s process so that she can track progress and improvement with the testing initiative.

The problem with this arises from what, specifically, the group measures and improves.  She wants to improve quality and predictability, so she implements a proxy solution.  She then measures people against that proxy.  And, often, they improve… against that proxy.

If you measure your organization’s test coverage and hold them accountable, you can rest assured that they will improve test coverage.  Improved quality, however, remains largely an orthogonal concern.
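
To make the distinction concrete, here is a hedged, hypothetical C# sketch of a test that drives the coverage number up without verifying anything.  The InvoiceCalculator domain code is made up for illustration; the test uses xUnit.

    using Xunit;

    // Hypothetical domain code, defined inline so the example stands alone.
    public class Invoice { public decimal Subtotal { get; set; } }

    public class InvoiceCalculator
    {
        public decimal Calculate(Invoice invoice)
        {
            // Imagine pages of branching business logic here.
            return invoice.Subtotal * 1.08m;
        }
    }

    public class InvoiceCalculatorTests
    {
        [Fact]
        public void Calculate_Runs_Without_Checking_Anything()
        {
            // Executes every line of Calculate(), so coverage tools count
            // the method as fully covered...
            new InvoiceCalculator().Calculate(new Invoice { Subtotal = 100m });

            // ...but with no assertion, the test passes whether the math is
            // right or wrong.  Coverage improves; quality does not have to.
        }
    }

A team measured on the coverage number alone has every incentive to write tests like this one.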

The CIO’s Leaky Abstraction

The issue here stems from what I might call a leaky organizational abstraction.  If the CIO came from a software development background, this gets even thornier.

Consider that a CIO or high-level manager generally concerns himself with organizational strategy.  He approves and monitors budgets, signs off on major initiatives, decides on the fate of applications in the application portfolio, etc.  The CIO, in other words, makes business decisions that have a technical flavor.  He deals in profits, losses, revenues, expenses, and organizational politics.

Through that lens, he might look at quality problems across the board as hits to the company’s reputation or drags on the bottom line.  “We’re losing subscribers due to these bugs that happen at each roll out.  We estimate that we lose $10,000 more each month in revenue.”  He would then pull the trigger on business solutions: hiring consultants to fix this problem, realigning his org chart, putting off milestones to focus on quality, etc.

But if he dives into the weeds, he trades a business person’s hat for a techie’s.  “Move over, architects,” he says.  “I know how you can fix this at the line level.  I call it ‘automated test coverage’ and I order you to start doing it.”  In a traditionally organized corporate structure, the CIO begins doing the job of folks in his organization at his peril.

Read More

The Fastest Way to Get to Know NDepend

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and, well, get to know it.

I confess to a certain level of avoidance when it comes to tackling something new.  If pressed for introspection, I think I do this because I can’t envision a direct path to success.  Instead, I see where I am now, the eventual goal, and a big uncertain cloud of stuff in the middle.  So I procrastinate by finding other things that need doing.

Sooner or later, however, I need to put this aside and get down to business.  For me, this usually means breaking the problem into smaller problems, identifying manageable next actions, and tackling those.  Once things become concrete, I can move methodically.  (As an aside, this is one of many reasons that I love test-driven development: it forces this behavior.)

When dealing with a new product or utility that I have acquired, this generally means carving out a path toward some objective and then executing.  For instance, “learn Ruby” as a goal would leave me floundering.  But “use Ruby to build a service that extracts data via API X” would result in a series of smaller goals and actions.  And I would learn via those goals.

For NDepend, I have a recommendation along these lines.  Let’s use the tool to help you visualize the reality of your codebase better than anyone around you.  In doing this, you will get to know NDepend quickly and without feeling overwhelmed.

Running Analysis

First things first.  Before you can do much of anything with NDepend, you need to set it up to analyze your codebase.  You can get here with a pretty simple sequence of steps.

  • Download NDepend and unpack it.
  • Run the Visual Studio Extension installer and install the extension for your preferred version.
  • Launch Visual Studio and open the solution you want to visualize.
  • Attach a new NDepend project to your solution, selecting the assemblies you’d like to analyze and then clicking “analyze” (leave the “build report” checkbox checked).

That’s all there is to getting started and running an analysis.  I won’t go into more detail here, as NDepend has good online documentation and the nuts and bolts are beyond the scope of what I want to cover.  But you’ll find it fairly easy to get going on your own and to get your code analyzed.
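
Once the analysis completes, a natural next step is to poke at the results with a query.  CQLinq, NDepend’s code query language, lets you ask questions of the analyzed codebase in LINQ style.  Here is a minimal sketch to try in the query editor; the thresholds are arbitrary starting points of mine, not official recommendations.

    // Find the largest, most complex methods in your own code.
    from m in JustMyCode.Methods
    where m.NbLinesOfCode > 30 || m.CyclomaticComplexity > 15
    orderby m.NbLinesOfCode descending
    select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }

Run that, and within seconds of your first analysis you have a ranked list of candidates for closer inspection.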

Read More

How to Scale Your Static Analysis Tooling

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and have a look at the tech debt forecasting features in the latest version.

If you wander the halls of a large company with a large software development organization, you will find plenty of examples of practice and process at scale.  When you see this sort of thing, it has generally come about in one of two ways.  First, the company piloted a new practice with a team or two and then scaled it from there.  Or, second, the development organization started the practice when it was small and grew it as the department grew.

But what about “rolled it out all at once”?  Nah, (mercifully) not so much.  “Let’s take this thing we’ve never tried before, deploy it in an expensive rollout, and assume all will go well.”  Does that sound like the kind of plan executives with career concerns sign off on?  Would you sign off on it?  Even the pointiest-haired of managers would feel gun-shy.

When it comes to scaling a static analysis practice, you will find no exception.  Invariably, organizations grow the practice as they grow, or they pilot it and then scale it up.  And that raises the question of how, exactly, you scale static analysis.

Two main areas of concern come to mind: technical and human.  You probably think I’ll spend most of the post talking technical, don’t you?  Nope.  First of all, too many tools, setups, and variations exist for me to do more than scratch the surface.  But secondly, and more importantly, a key person that I’ll mention below will take the lead for you on this.

Instead, I’ll focus on the human element.  Or, more specifically, I will focus on the process for scaling your static analysis — a process involving humans.

Read More

How Much Code Should My Developers Be Responsible For?

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and see if your code manages to avoid the dreaded zone of pain.

As I work with more and more organizations, my compiled list of interesting questions grows.  Seriously – I have quite the backlog.  And I don’t mean interesting in the pejorative sense.  You know – the way you say, “oh, that’s… interesting” after some drunken family member rants about their political views.

Rather, these questions interest me at a philosophical level.  They make me wonder about things I never might have pondered.  Today, I’ll pull one out and dust it off.  A client asked me this once, a while back.  They were wondering, “how much code should my developers be responsible for?”

Why ask about this?  Well, they had a laudable enough goal.  They had a fairly hefty legacy codebase and didn’t want to overtax the folks working on it.  “We know our codebase has X lines of code, so how many developers comprise an ideally staffed team?”

In a data-driven way, they asked a great question.  And yet, the reasoning falls apart on closer inspection.  I’ll speak today about why that happens.  Here are some problems with this thinking.

Read More

Entering the Zone of Pain

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and see if your code falls into the infamous Zone of Pain.

Years ago, when I first downloaded a trial of NDepend, I chuckled when I saw the “Abstractness vs. Instability” graph.  The concept itself does not amuse, obviously.  Rather, the labels for the corners of the graph provide the levity: “zone of uselessness” and “zone of pain.”

When you run NDepend analysis and reporting on your codebase, it generates this graph.  You can then see whether or not each of your assemblies falls within one of these two dubious zones.  No doubt people with NDepend experience can recall seeing a particularly hairy assembly depicted in the zone of pain and thinking, “I knew it!”

But whether you have experienced this or not, you should stop to consider what it means to enter the zone of pain.  The term amuses, but it also informs.  Yes, these assemblies will tend to annoy developers.  But they also create expensive, risky churn inside of your applications and raise the cost of ownership of the codebase.
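
For the curious, the graph plots two of Robert C. Martin’s metrics for each assembly: instability (I), the ratio of outgoing to total coupling, and abstractness (A), the proportion of abstract types.  Assemblies far from the “main sequence” line A + I = 1 land in one of the zones.  Here is a minimal C# sketch of the arithmetic, with illustrative numbers rather than real analysis output.

    using System;

    static class MainSequence
    {
        // Instability I = Ce / (Ce + Ca): efferent (outgoing) coupling
        // divided by total coupling.
        static double Instability(int efferent, int afferent) =>
            (double)efferent / (efferent + afferent);

        // Abstractness A = abstract types / total types in the assembly.
        static double Abstractness(int abstractTypes, int totalTypes) =>
            (double)abstractTypes / totalTypes;

        // Normalized distance from the main sequence line A + I = 1.
        static double Distance(double a, double i) => Math.Abs(a + i - 1);

        static void Main()
        {
            // A concrete assembly (A near 0) that much of the codebase
            // depends on (I near 0) sits deep in the zone of pain.
            double a = Abstractness(abstractTypes: 1, totalTypes: 50); // 0.02
            double i = Instability(efferent: 2, afferent: 40);         // ~0.05
            Console.WriteLine($"A={a:F2}, I={i:F2}, D={Distance(a, i):F2}");
        }
    }

Painful assemblies, in other words, are concrete and heavily depended upon: hard to change, and costly to leave alone.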

Because this presents a real problem, let’s take a look at what, exactly, lands you in the zone of pain and how to recover.

Read More