DaedTech

Stories about Software


The Relationship Between Team Size and Code Quality

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, take a look at NDepend and see how your code stacks up.

Over the last few years, I’ve had occasion to observe lots of software teams.  These teams come in all shapes and sizes, as the saying goes.  And, not surprisingly, they produce output that covers the entire spectrum of software quality.

It would hardly make headline news to cite team members’ collective skill level and training as a prominent factor in determining quality level.  But what else affects it?  Does team size?  Recently, I found myself pondering this during a bit of downtime ahead of a meeting.

Does some team size optimize for quality?  If so, how many people belong on a team?  This line of thinking led me to consider my own experience for examples.

A Case Study in Large Team Dysfunction

Years and years ago, I spent some time with a large team.  For its methodology, this shop unambiguously chose waterfall, though I imagine that, like many such shops, they called it “SDLC” or something like that.  But any way you slice it, requirements and design documents preceded the implementation phase.

In addition to big up front planning, the codebase had little in the way of meaningful abstractions to partition it architecturally.  As a result, you had the perfect incubator for a massive team.  The big up front planning ensured the illusion of appropriate resource allocation.  And then the tangled codebase ensured that "illusion" remained the operative word for the notion that people could work on their assigned tasks without severely impacting one another.

On top of all that, the entire software organization had a code review mandate for compliance purposes.  This meant that someone needed to review each line of code committed.  And, absent a better scheme, this generally meant that the longest-tenured team members did the reviewing.  The same longest-tenured team members who had created an architecture with no meaningful partitioning abstractions.

This cauldron of circumstances boiled up a mess.  Team members bickered over minutiae as code sprawled, rotted, and tangled.  Based solely on this experience, I'd have concluded that less is more.

Read More


How to Scale Your Static Analysis Tooling

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and have a look at the tech debt forecasting features in the latest version.

If you wander the halls of a large company with a large software development organization, you will find plenty of examples of practice and process at scale.  When you see this sort of thing, it has generally come about in one of two ways.  First, the company piloted a new practice with a team or two and then scaled it from there.  Or, second, the development organization started the practice when it was small and grew it as the department grew.

But what about "rolled it out all at once"?  Nah, (mercifully) not so much.  "Let's take this thing we've never tried before, deploy it in an expensive rollout, and assume all will go well."  Does that sound like the kind of plan executives with career concerns sign off on?  Would you sign off on it?  Even the pointiest-haired of managers would feel gun-shy.

When it comes to scaling a static analysis practice, you will find no exception.  Invariably, organizations grow the practice as they grow, or they pilot it and then scale it up.  And that raises the question of "how?" when it comes to scaling static analysis.

Two main areas of concern come to mind: technical and human.  You probably think I'll spend most of the post talking technical, don't you?  Nope.  First of all, too many tools, setups, and variations exist for me even to scratch the surface.  But secondly, and more importantly, a key person that I'll mention below will take the lead for you on this.

Instead, I’ll focus on the human element.  Or, more specifically, I will focus on the process for scaling your static analysis — a process involving humans.

Read More


Customizing Generated Method Header Comments

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at GhostDoc, the subject of this post.

Last month, I wrote a post introducing you to T4 templates.  Near the end, I included a mention of GhostDoc’s use of T4 templates in automatically generating code comments.  Today, I’d like to expand on that.

To recap very briefly, recall that GhostDoc allows you to generate things like method header comments.  I recommend that, in most cases, you let it do its thing.  It does a good job.  But sometimes, you might have occasion to want to tweak the result.  And you can do that by making use of T4 templates.
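
To give you a feel for the mechanics, here's a minimal, standalone T4 sketch that stamps out a method header comment.  To be clear, this isn't GhostDoc's actual template (its real templates receive rich context about the member being documented); the hard-coded method name exists purely for illustration.

    <#@ template language="C#" #>
    <#@ output extension=".cs" #>
    <#
        // Hypothetical stand-in; GhostDoc supplies the real method name
        // through its template context rather than a hard-coded variable.
        var methodName = "MovePiece";
    #>
    /// <summary>
    /// Auto-generated documentation for <#= methodName #>.
    /// </summary>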

Documenting Chess TDD

To demonstrate, let's revisit my trusty toy codebase, Chess TDD.  Because I put this code together for instructional purposes and not to release as a product, it has no method header comments for IntelliSense's benefit.  This makes it the perfect candidate for a demonstration.

If I had released this as a library, I’d have started the documentation with the Board class.  Most of the client interaction would happen via Board, so let’s document that.  It offers you a constructor and a bunch of semantics around placing and moving pieces.  Let’s document the conceptually simple “MovePiece” method.
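
For a sense of what that yields, here's roughly the end product on that method.  Treat this as an approximation: the signature comes from my memory of the Chess TDD code, and the comment text shows the flavor of GhostDoc's default output rather than its exact wording.

    // Minimal stand-in for the Chess TDD type of the same name.
    public class BoardCoordinate
    {
        public int X { get; set; }
        public int Y { get; set; }
    }

    public class Board
    {
        /// <summary>
        /// Moves the piece.
        /// </summary>
        /// <param name="origin">The origin.</param>
        /// <param name="destination">The destination.</param>
        public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
        {
            // Implementation elided; the generated header above is the point.
        }
    }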

Read More


How Much Code Should My Developers Be Responsible For?

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and see if your code manages to avoid the dreaded zone of pain.

As I work with more and more organizations, my compiled list of interesting questions grows.  Seriously – I have quite the backlog.  And I don’t mean interesting in the pejorative sense.  You know – the way you say, “oh, that’s… interesting” after some drunken family member rants about their political views.

Rather, these questions interest me at a philosophical level.  They make me wonder about things I never might have pondered.  Today, I’ll pull one out and dust it off.  A client asked me this once, a while back.  They were wondering, “how much code should my developers be responsible for?”

Why ask about this?  Well, they had a laudable enough goal.  They had a fairly hefty legacy codebase and didn't want to overtax the folks working on it.  "We know our codebase has X lines of code, so how many developers make up an ideally staffed team?"
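
To illustrate the arithmetic they had in mind, here's a quick sketch with invented numbers (neither figure comes from the client):

    // Hypothetical figures, purely for illustration.
    const int totalLinesOfCode = 500_000;   // the "X" in the client's question
    const int linesPerDeveloper = 25_000;   // an assumed per-developer capacity

    // The naive staffing model: divide codebase size by individual capacity.
    int idealTeamSize = totalLinesOfCode / linesPerDeveloper;   // 20 developers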

In a data-driven way, they asked a great question.  And yet, the reasoning falls apart on closer inspection.  I’ll speak today about why that happens.  Here are some problems with this thinking.

Read More


CodeIt.Right Rules, Explained

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, take a look at CodeIt.Right, an automated Code Review tool.

I’ve heard tell of a social experiment conducted with monkeys.  It may or may not be apocryphal, but it illustrates an interesting point.  So, here goes.

Primates and Conformity

A group of monkeys inhabited a large enclosure, which included a platform in the middle, accessible by a ladder.  For the experiment, their keepers set a banana on the platform, but with a catch.  Anytime a monkey would climb to the platform, the action would trigger a mechanism that sprayed the entire cage with freezing cold water.

The smarter monkeys quickly figured out the correlation and actively sought to prevent their cohorts from triggering the spray.  Anytime a monkey attempted to climb the ladder, they would stop it and beat it up a bit by way of teaching a lesson.  But the experiment wasn’t finished.

Once the behavior had been established, they began swapping out monkeys.  When a newcomer arrived on the scene, he would go for the banana, not knowing the social rules of the cage.  The monkeys would quickly teach him, though.  This continued until they had rotated out all original monkeys.  The monkeys in the cage would beat up the newcomers even though they had never experienced the actual negative consequences.

Now before you think to yourself, “stupid monkeys,” ask yourself how much better you’d fare.  This video shows that humans have the same instincts as our primate cousins.

Static Analysis and Conformity

You might find yourself wondering why I told you this story.  What does it have to do with software tooling and static analysis?

Well, I find that teams tend to exhibit two common anti-patterns when it comes to static analysis.  Most prominently, they tune out warnings without due diligence.  After that, I most frequently see them blindly implement the suggestions.

I tend to follow two rules when it comes to my interaction with static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

You syllogism buffs out there have, no doubt, condensed this to a single rule.  Anytime you encounter a suggested fix you don’t understand, go learn about it.

Once you understand it, you can implement the fix or ignore the suggestion with eyes wide open.  In software design and architecture, we deal with few clear-cut rules and endless trade-offs.  But you can't speak intelligently about the trade-offs without knowing the theory behind them.
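
To make that concrete, here's a hedged sketch of what an eyes-wide-open decision can look like in code.  It uses .NET's general-purpose SuppressMessage attribute as a stand-in (CodeIt.Right offers its own suppression mechanisms), and the rule ID and justification are examples I've chosen for illustration:

    using System.Diagnostics.CodeAnalysis;
    using System.IO;

    public class ReportWriter
    {
        // Understood, weighed, and consciously suppressed, not tuned out.
        [SuppressMessage("Microsoft.Design", "CA1062:ValidateArgumentsOfPublicMethods",
            Justification = "Callers are internal and validate input upstream; a guard clause here would duplicate those checks.")]
        public void Write(string reportBody)
        {
            File.WriteAllText("report.txt", reportBody);
        }
    }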

Toward that end, I'd like to facilitate that learning for some CodeIt.Right rules today.  Hopefully this helps you leverage your tooling to its full benefit.

Read More