DaedTech

Stories about Software


How to Write Code that Operations Will Like

Editorial Note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, take a look around at the options you have for monitoring your production site and all of its supporting infrastructure.

In recent years, the DevOps movement has gained a significant amount of steam.  Historically, organizations approached software creation and maintenance in the same way that they might approach physical, mechanical processes such as manufacturing.  This meant carving the work into discrete components and then asking people to specialize in those components.

If you think of a factory floor, this becomes easy to picture.  One person screws two components together over and over, while another operates a drill press over and over.  When you specialize this way, the entire process gains efficiency.  At least, so it goes with mechanical processes.

With knowledge work, we haven’t realized the same kind of benefit.  In retrospect, this makes sense.  It makes sense because, unlike mechanical work, knowledge work involves little true repeatability.  It continually presents us with problems that differ at least somewhat.

As we’ve come to recognize that fact, we’ve begun to blend concerns together.  Scrum teams blur the lines among QA, developers, and business analysts.  And DevOps does the same with initial development and production operation of software.  If you can, I wholeheartedly encourage you to embrace DevOps and to blend these concerns.  Your entire work product will improve.

Unfortunately, not every shop can do this — at least not immediately.  Progress happens slowly in these places because of regulatory and compliance concerns, or perhaps because change simply comes slowly in the enterprise.  But that shouldn’t stop you from making strides.

Even if development and operations remain separate, they can at least communicate and help each other.  Today, I’d like to talk about how developers can write code that makes life easier for ops folks.



CodeIt.Right Rules Explained, Part 2

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right to help with automated code review.

A little while back, I started a post series explaining some of the CodeIt.Right rules.  I led into the post with a narrative, which I won’t retell.  But I will reiterate the two rules that I follow when it comes to static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

Because I follow these two rules, I find myself researching every fix suggested to me by my tooling.  And, since I’ve gone to the trouble of doing so, I’ll save you that same trouble by explaining some of those rules here.  Specifically, I’ll examine three more CodeIt.Right rules and explain the rationale behind them.

Mark assemblies CLSCompliant

If you develop in .NET, you’ve no doubt run across this particular warning at some point in your career.  Before we get into the details, let’s stop and define the acronyms.  “CLS” stands for “Common Language Specification,” so the warning informs you that you need to mark your assemblies “Common Language Specification Compliant” (or non-compliant, if applicable).

Okay, but what does that mean?  Well, you can easily forget that many programming languages besides your language of choice target the .NET runtime.  CLS compliance indicates that any language targeting the runtime can use your assembly.  You can write language-specific code, incompatible with other framework languages.  CLS compliance means you haven’t.

Want an example?  Let’s say that you write C# code and that you decide to get cute.  You have a class with a “DoStuff” method, and you want to add a slight variation on it.  Because the new method adds improved functionality, you decide to call it “DOSTUFF” in all caps to indicate its awesomeness.  No problem, says the C# compiler.
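
A minimal sketch of the idea, with a hypothetical Worker class invented for illustration, might look like this:

    public class Worker
    {
        public void DoStuff()
        {
            // the original behavior
        }

        // Differs from DoStuff only by case.  The C# compiler accepts this
        // happily, but a case-insensitive language cannot tell the two apart.
        public void DOSTUFF()
        {
            // the "improved" behavior
        }
    }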

And yet, if you try to do the same thing in Visual Basic, a case-insensitive language, you will encounter a compiler error.  You have written C# code that VB code cannot use.  Thus you have written non-CLS compliant code.  The CodeIt.Right rule exists to inform you that you have not specified your assembly’s compliance or non-compliance.

To fix, go specify.  Ideally, go into the project’s AssemblyInfo.cs file and add the following to call it a day.
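
Assuming the classic AssemblyInfo.cs setup, that looks like the following (the attribute lives in the System namespace):

    using System;

    // Declare that everything in this assembly can be consumed by any
    // language targeting the .NET runtime.
    [assembly: CLSCompliant(true)]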

But you can also specify non-compliance for the assembly to avoid a warning.  Of course, you can do better by marking the assembly compliant on the whole and then hunting down and flagging non-compliant methods with the attribute.
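
That might look something like this, reusing the hypothetical Worker class from above:

    using System;

    // The assembly claims compliance on the whole...
    [assembly: CLSCompliant(true)]

    public class Worker
    {
        public void DoStuff()
        {
        }

        // ...while this one offending member opts out explicitly.
        [CLSCompliant(false)]
        public void DOSTUFF()
        {
        }
    }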

Specify IFormatProvider

Next up, consider a warning to “specify IFormatProvider.”  When you encounter this for the first time, it might leave you scratching your head.  After all, “IFormatProvider” seems a bit… technician-like.  A more newbie-friendly name for this warning might have been, “you have a localization problem.”

For example, consider a situation in which some external system supplies a date.  Except, it supplies the date as a string, and you have the task of converting it to a proper DateTime so that you can perform operations on it.  No problem, right?
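
The naive conversion might look something like this (the class and method names are mine):

    using System;

    public static class DateParsing
    {
        public static DateTime ParseOrderDate(string dateString)
        {
            // Relies on the current thread's culture to interpret the string.
            return DateTime.Parse(dateString);
        }
    }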

That should work, provided provincial concerns do not intervene.  For those of you in the US, “03/02/1995” corresponds to March 2nd, 1995.  Of course, should you live in Iraq, that date string would correspond to February 3rd, 1995.  Oops.

Consider a nightmare scenario wherein you write some code with this parsing mechanism.  Based in the US and with most of your customers in the US, this works for years.  Eventually, though, your sales group starts making inroads elsewhere.  Years after the fact, you wind up with a strange bug in code you haven’t touched for years.  Yikes.

By specifying a format provider, you can avoid this scenario.
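
Continuing the hypothetical parsing example, the fix might look like this:

    using System;
    using System.Globalization;

    public static class DateParsing
    {
        public static DateTime ParseOrderDate(string dateString)
        {
            // Spell out which culture's conventions the incoming string uses,
            // rather than leaving it to whatever culture the thread happens to have.
            return DateTime.Parse(dateString, CultureInfo.CreateSpecificCulture("en-US"));
        }
    }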

Nested types should not be visible

Unlike the previous rule, this one’s name suffices for description.  If you declare a type within another type (say a class within a class), you should not make the nested type visible outside of the outer type.  So, the following code triggers the warning.
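
For instance, something along these lines (type names invented for illustration):

    public class Outer
    {
        // A publicly visible nested type, which is exactly what the rule objects to.
        public class Nested
        {
        }
    }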

To understand the issue here, consider the object oriented principle of encapsulation.  In short, hiding implementation details from outsiders gives you more freedom to vary those details later, at your discretion.  This thinking drives the rote instinct for OOP programmers to declare private fields and expose them via public accessors/mutators/properties.

To some degree, the same reasoning applies here.  If you declare a class or struct inside of another one, then presumably only the containing type needs the nested one.  In that case, why make it public?  On the other hand, if another type does, in fact, need the nested one, why scope it within a parent type and not just the same namespace?
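
In code, those two remedies might look like this (names again invented):

    public class Outer
    {
        // Remedy 1: restrict the nested type to its containing type.
        private class Nested
        {
        }
    }

    // Remedy 2: promote the type to a sibling in the same namespace.
    internal class Helper
    {
    }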

You may have some reason for doing this — something specific to your code and your implementation.  But understand that this is weird, and will tend to create awkward, hard-to-discover code.  For this reason, your static analysis tool flags your code.

Until Next Time

As I said last time, you can extract a ton of value from understanding code analysis rules.  This goes beyond just understanding your tooling and accepted best practice.  Specifically, it gets you in the habit of researching and understanding your code and applications at a deep, philosophical level.

In this post alone, we’ve discussed language interoperability, geographic maintenance concerns, and object oriented design.  You can, all too easily, dismiss analysis rules as perfectionism.  They aren’t; they have very real, very important applications.

Stay tuned for more posts in this series, aimed at helping you understand your tooling.


The Fastest Way to Get to Know NDepend

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and, well, get to know it.

I confess to a certain level of avoidance when it comes to tackling something new.  If pressed for introspection, I think I do this because I can’t envision a direct path to success.  Instead, I see where I am now, the eventual goal, and a big uncertain cloud of stuff in the middle.  So I procrastinate by finding other things that need doing.

Sooner or later, however, I need to put this aside and get down to business.  For me, this usually means breaking the problem into smaller problems, identifying manageable next actions, and tackling those.  Once things become concrete, I can move methodically.  (As an aside, this is one of many reasons that I love test driven development — it forces this behavior.)

When dealing with a new product or utility that I have acquired, this generally means carving out a path toward some objective and then executing.  For instance, “learn Ruby” as a goal would leave me floundering.  But “use Ruby to build a service that extracts data via API X” would result in a series of smaller goals and actions.  And I would learn via those goals.

For NDepend, I have a recommendation along these lines.  Let’s use the tool to help you visualize the reality of your codebase better than anyone around you.  In doing this, you will get to know NDepend quickly and without feeling overwhelmed.

Running Analysis

First things first.  Before you can do much of anything with NDepend, you need to set it up to analyze your codebase.  You can get here with a pretty simple sequence of steps.

  • Download NDepend and unpack it.
  • Run the Visual Studio Extension installer and install the extension for your preferred version.
  • Launch Visual Studio and open the solution you want to visualize.
  • Attach a new NDepend project to your solution, selecting the assemblies you’d like to analyze and then clicking “analyze” (leave the “build report” checkbox checked).

That’s all there is to getting started and running an analysis.  I won’t go into more detail here, as NDepend has good online documentation, and the nuts and bolts are beyond the scope of what I want to cover.  But you’ll find it fairly easy to get going on your own and to get your code analyzed.



The Relationship Between Team Size and Code Quality

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, take a look at NDepend and see how your code stacks up.

Over the last few years, I’ve had occasion to observe lots of software teams.  These teams come in all shapes and sizes, as the saying goes.  And, not surprisingly, they produce output that covers the entire spectrum of software quality.

It would hardly make headline news to cite team members’ collective skill level and training as a prominent factor in determining quality level.  But what else affects it?  Does team size?  Recently, I found myself pondering this during a bit of downtime ahead of a meeting.

Does some team size optimize for quality?  If so, how many people belong on a team?  This line of thinking led me to consider my own experience for examples.

A Case Study in Large Team Dysfunction

Years and years ago, I spent some time with a large team.  For its methodology, this shop unambiguously chose waterfall, though I imagine that, like many such shops, they called it “SDLC” or something like that.  But any way you slice it, requirements and design documents preceded the implementation phase.

In addition to big up front planning, the codebase had little in the way of meaningful abstractions to partition it architecturally.  As a result, you had the perfect incubator for a massive team.  The big up front planning ensured the illusion of appropriate resource allocation.  And then the tangled codebase ensured that “illusion” was the best way to describe the notion that people could be assigned tasks without severely impacting one another.

On top of all that, the entire software organization had a code review mandate for compliance purposes.  This meant that someone needed to review each line of code committed.  And, absent a better scheme, this generally meant that the longest-tenured team members did the reviewing.  The same longest-tenured team members that had created an architecture with no meaningful partitioning abstractions.

This cauldron of circumstances boiled up a mess.  Team members bickered over minutiae as code sprawled, rotted, and tangled.  Based solely on this experience, I would conclude that less is more.



How to Scale Your Static Analysis Tooling

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and have a look at the tech debt forecasting features in the latest version.

If you wander the halls of a large company with a large software development organization, you will find plenty of examples of practice and process at scale.  When you see this sort of thing, it has generally come about in one of two ways.  First, the company piloted a new practice with a team or two and then scaled it from there.  Or, second, the development organization started the practice when it was small and grew it as the department grew.

But what about “rolled it out all at once?”  Nah, (mercifully) not so much.  “Let’s take this thing we’ve never tried before, deploy it in an expensive rollout, and assume all will go well.”  Does that sound like the kind of plan executives with career concerns sign off on?  Would you sign off on it?  Even the pointiest-haired of managers would feel gun-shy.

When it comes to scaling a static analysis practice, you will find no exception.  Invariably, organizations grow the practice as they grow, or they pilot it and then scale it up.  And that raises the question of how, exactly, you go about scaling static analysis.

Two main areas of concern come to mind: technical and human.  You probably think I’ll spend most of the post talking technical, don’t you?  Nope.  First of all, too many tools, setups, and variations exist for me to do more than scratch the surface.  But secondly, and more importantly, a key person that I’ll mention below will take the lead for you on this.

Instead, I’ll focus on the human element.  Or, more specifically, I will focus on the process for scaling your static analysis — a process involving humans.
