DaedTech

Stories about Software


CodeIt.Right Rules Explained, Part 2

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right to help with automated code review.

A little while back, I started a post series explaining some of the CodeIt.Right rules.  I led into the post with a narrative, which I won’t retell.  But I will reiterate the two rules that I follow when it comes to static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

Because I follow these two rules, I find myself researching every fix my tooling suggests.  And, since I’ve gone to the trouble of doing so, I’ll save you that same trouble by explaining some of those rules today.  Specifically, I’ll examine three more CodeIt.Right rules and explain the rationale behind them.

Mark assemblies CLSCompliant

If you develop in .NET, you’ve no doubt run across this particular warning at some point in your career.  Before we get into the details, let’s stop and define the acronyms.  “CLS” stands for “Common Language Specification,” so the warning informs you that you need to mark your assemblies “Common Language Specification Compliant” (or non-compliant, if applicable).

Okay, but what does that mean?  Well, you can easily forget that many programming languages target the .NET runtime besides your language of choice.  CLS compliance indicates that any language targeting the runtime can use your assembly.  You can write language-specific code, incompatible with other framework languages.  CLS compliance means you haven’t.

Want an example?  Let’s say that you write C# code and that you decide to get cute.  You have a class with a “DoStuff” method, and you want to add a slight variation on it.  Because the new method adds improved functionality, you decide to call it “DOSTUFF” in all caps to indicate its awesomeness.  No problem, says the C# compiler.
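To picture it, here’s a quick sketch of the kind of C# in question (the class name and method bodies are hypothetical, invented for illustration):

```csharp
public class WidgetService
{
    public void DoStuff()
    {
        // Original behavior.
    }

    // Differs from DoStuff only by casing -- perfectly legal in C#,
    // but indistinguishable in a case-insensitive language like Visual Basic.
    public void DOSTUFF()
    {
        // "Improved" behavior.
    }
}
```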

And yet, if you try to do the same thing in Visual Basic, a case insensitive language, you will encounter a compiler error.  You have written C# code that VB code cannot use.  Thus you have written non-CLS compliant code.  The CodeIt.Right rule exists to inform you that you have not specified your assembly’s compliance or non-compliance.

To fix, go specify.  Ideally, go into the project’s AssemblyInfo.cs file and add the following to call it a day.
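The marking itself amounts to a one-liner.  A minimal sketch, using the assembly-level attribute from the System namespace:

```csharp
using System;

// Declare the entire assembly CLS compliant.
[assembly: CLSCompliant(true)]
```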

But you can also specify non-compliance for the assembly to avoid a warning.  Of course, you can do better by marking the assembly compliant on the whole and then hunting down and flagging non-compliant methods with the attribute.
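As a sketch of that more surgical approach (the unsigned types make a handy example, since they fall outside the CLS; the class here is purely illustrative):

```csharp
using System;

[assembly: CLSCompliant(true)]

public class Calculator
{
    // Fine from any .NET language.
    public int Add(int x, int y) => x + y;

    // uint does not exist in the CLS, so flag just this member.
    [CLSCompliant(false)]
    public uint AddUnsigned(uint x, uint y) => x + y;
}
```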

Specify IFormatProvider

Next up, consider a warning to “specify IFormatProvider.”  When you encounter this for the first time, it might leave you scratching your head.  After all, “IFormatProvider” seems a bit… technician-like.  A more newbie-friendly name for this warning might have been, “you have a localization problem.”

For example, consider a situation in which some external source supplies a date.  Except, they supply the date as a string and you have the task of converting it to a proper DateTime so that you can perform operations on it.  No problem, right?
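Something like this, perhaps (a minimal sketch; the variable names are mine):

```csharp
using System;

// The external system hands you a date as a string...
string input = "03/02/1995";

// ...and you parse it using whatever culture the current thread happens to use.
DateTime orderDate = DateTime.Parse(input);
Console.WriteLine(orderDate);
```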

That should work, provided provincial concerns do not intervene.  For those of you in the US, “03/02/1995” corresponds to March 2nd, 1995.  Of course, should you live in Iraq, that date string would correspond to February 3rd, 1995.  Oops.

Consider a nightmare scenario wherein you write some code with this parsing mechanism.  Based in the US and with most of your customers in the US, this works for years.  Eventually, though, your sales group starts making inroads elsewhere.  Years after the fact, you wind up with a strange bug in code you haven’t touched for years.  Yikes.

By specifying a format provider, you can avoid this scenario.
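A sketch of the fix, passing an explicit IFormatProvider (here via CultureInfo):

```csharp
using System;
using System.Globalization;

string input = "03/02/1995";

// State explicitly which culture's conventions govern the parse.
DateTime usDate = DateTime.Parse(input, new CultureInfo("en-US"));

// Or, for data in a fixed, culture-agnostic format, use the invariant culture.
DateTime invariantDate = DateTime.Parse(input, CultureInfo.InvariantCulture);

Console.WriteLine($"{usDate:d} vs {invariantDate:d}");
```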

Nested types should not be visible

Unlike the previous rule, this one’s name suffices for description.  If you declare a type within another type (say a class within a class), you should not make the nested type visible outside of the outer type.  So, the following code triggers the warning.
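As a sketch of the shape of code that trips the rule (the types here are hypothetical):

```csharp
using System;

public class Document
{
    // A publicly visible nested type -- this is what the rule objects to.
    public class Metadata
    {
        public string Author { get; set; }
        public DateTime Created { get; set; }
    }

    public Metadata Info { get; } = new Metadata();
}
```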

To understand the issue here, consider the object oriented principle of encapsulation.  In short, hiding implementation details from outsiders gives you more freedom to vary those details later, at your discretion.  This thinking drives the rote instinct for OOP programmers to declare private fields and expose them via public accessors/mutators/properties.

To some degree, the same reasoning applies here.  If you declare a class or struct inside of another one, then presumably only the containing type needs the nested one.  In that case, why make it public?  On the other hand, if another type does, in fact, need the nested one, why scope it within a parent type and not just the same namespace?

You may have some reason for doing this — something specific to your code and your implementation.  But understand that this is weird, and will tend to create awkward, hard-to-discover code.  For this reason, your static analysis tool flags your code.

Until Next Time

As I said last time, you can extract a ton of value from understanding code analysis rules.  This goes beyond just understanding your tooling and accepted best practice.  Specifically, it gets you in the habit of researching and understanding your code and applications at a deep, philosophical level.

In this post alone, we’ve discussed language interoperability, geographic maintenance concerns, and object oriented design.  You can, all too easily, dismiss analysis rules as perfectionism.  They aren’t; they have very real, very important applications.

Stay tuned for more posts in this series, aimed at helping you understand your tooling.


The Fastest Way to Get to Know NDepend

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and, well, get to know it.

I confess to a certain level of avoidance when it comes to tackling something new.  If pressed for introspection, I think I do this because I can’t envision a direct path to success.  Instead, I see where I am now, the eventual goal, and a big uncertain cloud of stuff in the middle.  So I procrastinate by finding other things that need doing.

Sooner or later, however, I need to put this aside and get down to business.  For me, this usually means breaking the problem into smaller problems, identifying manageable next actions, and tackling those.  Once things become concrete, I can move methodically.  (As an aside, this is one of many reasons that I love test driven development — it forces this behavior.)

When dealing with a new product or utility that I have acquired, this generally means carving out a path toward some objective and then executing.  For instance, “learn Ruby” as a goal would leave me floundering.  But “use Ruby to build a service that extracts data via API X” would result in a series of smaller goals and actions.  And I would learn via those goals.

For NDepend, I have a recommendation along these lines.  Let’s use the tool to help you visualize the reality of your codebase better than anyone around you.  In doing this, you will get to know NDepend quickly and without feeling overwhelmed.

Running Analysis

First things first.  Before you can do much of anything with NDepend, you need to set it up to analyze your codebase.  You can get here with a pretty simple sequence of steps.

  • Download NDepend and unpack it.
  • Run the Visual Studio Extension installer and install the extension for your preferred version.
  • Launch Visual Studio and open the solution you want to visualize.
  • Attach a new NDepend project to your solution, selecting the assemblies you’d like to analyze and then clicking “analyze” (leave the “build report” checkbox checked).

That’s all there is to getting started and running an analysis.  I won’t go into more detail here, as NDepend has good online documentation and the nuts and bolts are beyond the scope of what I want to cover.  But you’ll find it fairly easy to get going here on your own, and to get your code analyzed.



Customizing Generated Method Header Comments

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at GhostDoc, the subject of this post.

Last month, I wrote a post introducing you to T4 templates.  Near the end, I included a mention of GhostDoc’s use of T4 templates in automatically generating code comments.  Today, I’d like to expand on that.

To recap very briefly, recall that GhostDoc allows you to generate things like method header comments.  I recommend that, in most cases, you let it do its thing.  It does a good job.  But sometimes, you might have occasion to want to tweak the result.  And you can do that by making use of T4 Templates.

Documenting Chess TDD

To demonstrate, let’s revisit my trusty toy code base, Chess TDD.  Because I put this code together for instructional purposes and not to release as a product, it has no method header comments for Intellisense’s benefit.  This makes it the perfect candidate for a demonstration.

If I had released this as a library, I’d have started the documentation with the Board class.  Most of the client interaction would happen via Board, so let’s document that.  It offers you a constructor and a bunch of semantics around placing and moving pieces.  Let’s document the conceptually simple “MovePiece” method.
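To give you a sense of the output, a generated header winds up in the familiar XML-doc shape, something like this (a hypothetical sketch; the actual signature in the codebase and GhostDoc’s exact wording may differ):

```csharp
/// <summary>
/// Moves the piece.
/// </summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    // Implementation elided.
}
```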



CodeIt.Right Rules, Explained

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, take a look at CodeIt.Right, an automated Code Review tool.

I’ve heard tell of a social experiment conducted with monkeys.  It may or may not be apocryphal, but it illustrates an interesting point.  So, here goes.

Primates and Conformity

A group of monkeys inhabited a large enclosure, which included a platform in the middle, accessible by a ladder.  For the experiment, their keepers set a banana on the platform, but with a catch.  Anytime a monkey would climb to the platform, the action would trigger a mechanism that sprayed the entire cage with freezing cold water.

The smarter monkeys quickly figured out the correlation and actively sought to prevent their cohorts from triggering the spray.  Anytime a monkey attempted to climb the ladder, they would stop it and beat it up a bit by way of teaching a lesson.  But the experiment wasn’t finished.

Once the behavior had been established, they began swapping out monkeys.  When a newcomer arrived on the scene, he would go for the banana, not knowing the social rules of the cage.  The monkeys would quickly teach him, though.  This continued until they had rotated out all original monkeys.  The monkeys in the cage would beat up the newcomers even though they had never experienced the actual negative consequences.

Now before you think to yourself, “stupid monkeys,” ask yourself how much better you’d fare.  This video shows that humans have the same instincts as our primate cousins.

Static Analysis and Conformity

You might find yourself wondering why I told you this story.  What does it have to do with software tooling and static analysis?

Well, I find that teams tend to exhibit two common anti-patterns when it comes to static analysis.  Most prominently, they tune out warnings without due diligence.  After that, I most frequently see them blindly implement the suggestions.

I tend to follow two rules when it comes to my interaction with static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

You syllogism buffs out there have, no doubt, condensed this to a single rule.  Anytime you encounter a suggested fix you don’t understand, go learn about it.

Once you understand it, you can implement the fix or ignore the suggestion with eyes wide open.  In software design/architecture, we deal with few clear cut rules and endless trade-offs.  But you can’t speak intelligently about the trade-offs without knowing the theory behind them.

Toward that end, I’d like to facilitate that learning for some CodeIt.Right rules today.  Hopefully this helps you leverage your tooling to its full benefit.



Entering the Zone of Pain

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and see if your code falls into the infamous Zone of Pain.

Years ago, when I first downloaded a trial of NDepend, I chuckled when I saw the “Abstractness vs. Instability” graph.  The concept itself does not amuse, obviously.  Rather, the labels for the corners of the graph provide the levity: “zone of uselessness” and “zone of pain.”

When you run NDepend analysis and reporting on your codebase, it generates this graph.  You can then see whether or not each of your assemblies falls within one of these two dubious zones.  No doubt people with NDepend experience can recall seeing a particularly hairy assembly depicted in the zone of pain and thinking, “I knew it!”
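For reference, here’s a rough sketch of the two metrics behind that graph, as the classic Martin package metrics define them (the helper class below is purely illustrative, not NDepend’s API):

```csharp
using System;

public static class AssemblyMetrics
{
    // Instability I = Ce / (Ca + Ce): how much the assembly depends on others
    // versus how much others depend on it.
    public static double Instability(int efferentCoupling, int afferentCoupling) =>
        (double)efferentCoupling / (afferentCoupling + efferentCoupling);

    // Abstractness A = abstract types / total types in the assembly.
    public static double Abstractness(int abstractTypeCount, int totalTypeCount) =>
        (double)abstractTypeCount / totalTypeCount;

    // The zone of pain sits near (A = 0, I = 0): rigid, concrete code that
    // everything else depends on.  The zone of uselessness sits near (A = 1, I = 1).
    public static double DistanceFromMainSequence(double abstractness, double instability) =>
        Math.Abs(abstractness + instability - 1);
}
```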

But whether you have experienced this or not, you should stop to consider what it means to enter the zone of pain.  The term amuses, but it also informs.  Yes, these assemblies will tend to annoy developers.  But they also create expensive, risky churn inside of your applications and raise the cost of ownership of the codebase.

Because this presents a real problem, let’s take a look at what, exactly, lands you in the zone of pain and how to recover.
