DaedTech

Stories about Software

Don’t Just Flag It — Fix It!

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, download a trial of CodeIt.Right.

More years ago than I’d care to admit, I took a software engineering course as part of my graduate CS program.  At the time, I worked a full-time job during the day and took remote classes in the evening.  As a result, I disproportionately valued classes with applicability to my job.  And this class offered plenty of that.

We scratched the surface of topics as diverse as agile methodologies, automated testing, cost of code ownership, and more.  But I found myself perhaps most interested in the dive we did into refactoring.  The idea of reworking the internal structure of code while preserving its inputs and outputs is a surprisingly complex one.

Historical Complexity of Refactoring

At the risk of dating myself, I took this course in the fall of 2006.  While automated refactorings in your IDE now seem commonplace, back then, they were hard.  In fact, the professor of the course considered them sufficiently difficult that he steered a group of mine away from a project implementing some.  In the world of 2006, I suspect he had the right of it.  We steered clear.

In 2016, implementing automated refactorings still presents a challenge.  But modern tool and IDE vendors can stand on the shoulders of giants, so to speak.  Back then?  Not so much.

Refactorings present a unique challenge to tool vendors because of the inherent risk.  They can really screw up users’ code.  If a mistake happens, the best-case scenario is that the resultant code fails to compile because then, at least, it fails fast.  Worse still is code that remains syntactically valid but somehow behaves improperly.  In this situation, a refactoring — a safe change to code — becomes a modification to the behavior of production code instead.  Ouch.

On top of the risk, the implementation of refactoring anywhere beyond the trivial involves heady concepts such as abstract syntax trees.  In other words, it’s not for lightweights.  So to recap, refactoring is risky and difficult.  And this is the landscape faced by tool authors.
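
To make the difficulty concrete, here is a minimal sketch of an AST-based rename using Roslyn, the kind of tooling that simply didn’t exist in 2006.  The class and method names are hypothetical, and notice what the sketch omits: updating the method’s call sites, which requires semantic analysis and is exactly where the risk lives.

    using System;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    class RenameSketch
    {
        static void Main()
        {
            // Parse source text into an abstract syntax tree.
            var tree = CSharpSyntaxTree.ParseText(
                "class Calculator { int DoIt(int x) { return x * 2; } }");
            var root = tree.GetRoot();

            // Locate the method declaration by name.
            var method = root.DescendantNodes()
                             .OfType<MethodDeclarationSyntax>()
                             .First(m => m.Identifier.Text == "DoIt");

            // Trees are immutable; "renaming" means building a new tree.
            var newRoot = root.ReplaceNode(
                method, method.WithIdentifier(SyntaxFactory.Identifier("Double")));

            Console.WriteLine(newRoot.ToFullString());
        }
    }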

I Don’t Fix — I Just Flag

If you live in the US, you may have seen a commercial that features a funny quip.  If I’m not mistaken, it advertises some sort of fraud prevention service.  (Pardon any slight inaccuracies, as I recount this as best I can, from memory.)

In the ad, bank robbers hold up a bank in a rather clichéd, dramatic scene.  Off to the side, a woman stands near a security guard, asking him why he didn’t do anything to stop it.  “I’m not a robbery prevention service — I’m a robbery monitoring service.  Oh, by the way, there’s a robbery.”

It brings a chuckle, but it also carries an underlying point.  In many situations, monitoring alone proves woefully ineffective, prompting frustration.  As a former manager and current consultant, I generally advise people to point out problems only when they have also prepared proposed solutions.  It can mean the difference between complaining and solving.

So you can imagine and probably share my frustration at tools that just flag problems and leave it to you to investigate further and fix them.  We feel like the woman standing next to the “robbery monitor,” wondering how useful the service is to us.

Levels of Solution

Going back to the subject of software development, we see this dynamic in a number of places.  The compiler, the IDE, productivity add-ins, static analysis tools, and linting utilities all offer us warnings to heed.

Often, that’s all we get.  The utility says, “hey, something is wrong here, but you’re going to have to figure out what.”  I tend to think of that as the basic level of service, or level 0, if you will.

The next level, level 1, involves at least offering some form of next action.  It might be as simple as offering a help file, inline reading, or a link to more information.  Anything above “this is a problem.”

Level 2 ups the ante by offering a recommendation for what to do next.  “You have a dependency cycle.  You should fix this by looking at these three components and removing one mutual dependency.”  It goes beyond giving you a next thing to do and gives you the next thing to do.

Level 3 rounds out the field by actually performing the action for you (following a prompt, of course).  “You’ve accidentally hidden a method on the parent class.  Click here to rename or click here to make the parent method virtual.”  That’s just an example off the top of my head, of course, but it illustrates the interaction paradigm.  “We’ve noticed a problem, and you can click here to fix it.”
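
If it helps, you can think of the hierarchy as a data structure.  Here is a hypothetical sketch (the type and member names are mine, not any particular tool’s), where each level of service fills in one more member.

    using System;

    // A hypothetical model of the hierarchy above; names are illustrative.
    public class Finding
    {
        public string Message { get; set; }        // Level 0: something is wrong here
        public Uri MoreInfo { get; set; }          // Level 1: somewhere to read further
        public string Recommendation { get; set; } // Level 2: the next thing to do
        public Action ApplyFix { get; set; }       // Level 3: fix it on request

        public void Report()
        {
            Console.WriteLine(Message);
            if (ApplyFix != null)
            {
                Console.Write("Apply the suggested fix? (y/n) ");
                if (Console.ReadLine() == "y") ApplyFix();
            }
            else if (Recommendation != null)
                Console.WriteLine("Recommendation: " + Recommendation);
            else if (MoreInfo != null)
                Console.WriteLine("More information: " + MoreInfo);
        }
    }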

Fixes in Your Tooling

When evaluating your own tools, look to climb as high up this hierarchy as you can.  Favor tools that not only identify problems but also offer fixes whenever possible.

There are a number of such tools out there, including CodeIt.Right.  Using tools like this is a pleasure because it removes the burden of research and implementation from you.  You can still do the research if you want, but at your leisure rather than when you’re trying to accomplish something else.

The other important concern here is that you find trusted tooling to help you with this sort of thing.  After all, you don’t want something messing with your source code if it might mess up your source code.  But, assuming you can trust it, such a tool provides an invaluable boost to your effectiveness by automatically resolving your problems and by helping you learn.

In the year 2016, we have far more tooling available, with a far better track record, than we did in 2006.  Leverage it whenever possible so that you can focus on solving the pressing problems of your day-to-day work.

When is It Okay to Turn off Static Analysis Guidance?

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, download CodeIt.Right and give it a try.

The balance among types of feedback drives some weird interpersonal dynamics.  For instance, consider the rather trite (if effective) management technique of the “compliment sandwich.”  Managers delivering a negative piece of feedback precede and follow it with compliments.  In that fashion, the compliments form the “bun.”

Different people and different groups have their preferences for how to handle this.  While some might bend over backward for diplomacy, others prefer environments where people hurl snipes at one another and simply consider it “passionate debate.”  I have no interest in arguing for any particular approach — only in pointing out the variety.  As it turns out, we humans find this subject thorny.

To some extent, this complicated situation extends beyond human boundaries and into automated systems.  While we might not take quite the same umbrage as we would with humans, we still get frustrated.  If you doubt this, I challenge you to tell me that you have never yelled at a compiler because you were sure your code had no errors.  I thought so.

So from this perspective, I can understand the frustration with static analysis feedback.  Often, when you decide to enable a new static analysis engine or linting tool on a codebase, the feedback overwhelms.  A report of 28,326 issues in the code can demoralize anyone.  And so the temptation emerges to recoil from this feedback and turn off the tool.

But should you do this?  I would argue that usually, you should not.  But situations do exist when disabling a static analyzer makes sense.  Today, I’ll walk through some examples of times you might suppress such a warning.
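
As a preview of the mechanics, .NET offers a way to suppress a single finding without silencing the rule globally.  The class, the caught scenario, and the justification below are hypothetical examples, but the attribute usage is the standard pattern.

    using System;
    using System.Diagnostics.CodeAnalysis;

    public class LegacyImporter
    {
        // Suppress one specific finding, in one specific place, with a
        // recorded reason, rather than disabling the rule everywhere.
        [SuppressMessage("Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes",
            Justification = "Third-party importer throws undocumented exception types.")]
        public void Import(string path)
        {
            try
            {
                // ... call into the legacy import library ...
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine($"Import of {path} failed: {ex.Message}");
            }
        }
    }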

Secrets of Maintainable Codebases

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, have a look at the tech debt quantification features of the new NDepend version.

You should write maintainable code.  I assume people have told you this, at some point.  The admonishment is as obligatory as it is vague.  So, I’m sure, when you heard this, you didn’t react effusively with, “oh, good idea — thanks!”

If you take to the internet, you won’t need to venture far to find essays, lists, and Stack Exchange questions on the subject.  As you can see, software developers frequently offer opinions on this particular topic.  And I present no exception; I have little doubt that you could find posts about this on my own blog.

So today, I’d like to take a different tack in talking about maintainable code.  Rather than discuss the code per se, I want to discuss the codebase as a whole.  What are the secrets to maintainable codebases?  What properties do they have, and what can you do to create these properties?

In my travels as a consultant, I see so many codebases that it sometimes seems I’m watching a flip book show of code.  On top of that, I frequently find myself explaining concepts like the cost of code ownership, and regarding code as, for lack of a better term, inventory.  From the perspective of those paying the bills, maintainable code doesn’t mean “code developers like to work with” but rather “code that minimizes spend for future changes.”

Yes, that money includes developer labor.  But it also includes concerns like deployment effort, defect cycle time, universality of skills required, and plenty more.  Maintainable codebases mean easy, fast, risk-free, and cheap change.  Here are some characteristics I look for in the field when assessing this property.  Some of them may seem a bit off the beaten path.

Measure Your Code to Get Back on Track

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.

When I’m called in for strategy advice on a codebase, I never arrive to find a situation where all parties want to tell me how wonderfully things are going.  As I’ve mentioned before here, one of the main things I offer with my consulting practice is codebase assessments and subsequent strategic recommendations.

Companies pay for such a service when they have problems, and those problems drive questions.  “Should we scrap this code and start over, or can we refactor toward a better state?”  “Can we move away from framework X, or are we hopelessly tied to it?”  “How can we evolve without doing a forklift upgrade?”

To answer these questions, I assess their code (often using NDepend as the centerpiece for querying the codebase) and synthesize the resultant statistics and data.  I then present a write-up with my answer to their questions.  This also generally includes a buffet of options/tactics to help them toward their goals.  And invariably, I (prominently) offer the option “instrument your code/build with static analysis to raise the bar and prevent backslides.”

I find it surprising and a bit dismaying how frequently clients want to gloss over this option in favor of others.

Using the Observer Effect for Good

Let me digress for a moment, before returning to the subject of preventing backslides.  In physics/science, experimenters use the term “observer effect” to describe an experimental problem.  This occurs when the act of measuring a phenomenon changes its behavior, inextricably linking the two.  This presents a problem, and indeed a paradox, for scientists.  The mechanics of running the experiment contaminate the results of the experiment.

To make this less abstract, consider the example mentioned on the Wikipedia page.  When you use a tire pressure gauge, you measure the pressure, but your measurement lets some of the air out of the tire.  You will never actually know what, exactly, the pressure was before you ran the experiment.

While this creates a problem for scientists, businesses can actually use it to their advantage.  Often you will find that the simple act of measuring something with your team will create improvement.  The agile concept of “big, visible charts” draws inspiration from this premise.

In discussing this principle, I frequently cite a dead simple example.  On a Scrum team, the product owner has ultimate responsibility for making decisions about the software’s behavior.  The team thus needs frequent access to this person, despite the fact that product owners often have many responsibilities and limited time.  I recall a team that had trouble getting this access, so it put a big piece of paper on the wall listing the number of hours the product owner spent with the team each day.

The number started low and improved noticeably over the course of a few weeks with no other intervention at all.

Managing Code Analysis Statistics with the NDepend API

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  Also, NDepend just released a new version that addresses tech debt extensively — check it out while you’re there!

If you’re familiar with NDepend, you’re probably familiar with the Visual Studio plugin, the out-of-the-box metrics, the excellent visualization tools, and the iconic Zone of Uselessness/Zone of Pain chart.  These feel familiar to NDepend users and have likely found their way into the normal application development process.  NDepend has other features as well, however, some of which I do not necessarily hear discussed as frequently.  The NDepend API has membership in that “lesser-known NDepend features club.”  Yes, that’s right — if you didn’t know this, NDepend has an API that you can use.

You may be familiar, as a user, with the NDepend power tools.  These include some pretty powerful capabilities, such as duplicate code detection, so it stands to reason that you may have played with them or even that you might routinely use them.  But what you may not realize is that the power tools’ source code accompanies the installation of NDepend, and it furnishes a great series of examples of how to use the NDepend API.

NDepend’s API is contained in the DLLs that support the executable and plugin, so you needn’t do anything special to obtain it.  The NDepend website also treats the API as a first-class citizen, providing detailed, excellent documentation.  With your NDepend installation, you can get up and running quickly with the API.

Probably the easiest way to introduce yourself to the API is to open the source code for the power tools project and to add a power tool, or generally to modify that assembly.  If you want to create your own assembly that uses the power tools, you can do that as well, though it is a bit more involved.  The purpose of this post is not to do a walk-through of setting up with the power tools, since that can be found here.  I will mention two things, however, that are worth bearing in mind as you get started.

  1. If you want to use the API outside of the installed project directory, there is additional setup overhead.  Because it leverages proprietary parts of NDepend under the covers, setup is more involved than just adding a DLL by reference.
  2. Because of point (1), if you want to create your own assembly outside of the NDepend project structure, be sure to follow the setup instructions exactly.

A Use Case

I’ve spoken so far in generalities about the API.  If you haven’t already used it, you might be wondering what kinds of applications it has, besides simply being interesting to play with.  Fair enough.

One interesting use case that I’ve experienced personally is getting information out of NDepend in a customized format.  For example, let’s say I’m analyzing a client’s codebase and want to cite statistical information about types and methods in the code.  Out of the box, what I do is open Visual Studio and then open NDepend’s query/rules editor.  This gives me the ability to create ad-hoc CQLinq queries that will have the information I need.
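
For a flavor of what those ad-hoc queries look like, here is a hypothetical CQLinq query of the sort I might run (the 20-line threshold is arbitrary):

    // Hypothetical ad-hoc query: large methods, with complexity alongside.
    from m in Application.Methods
    where m.NbLinesOfCode > 20
    orderby m.NbLinesOfCode descending
    select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }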

But from there, I have to transcribe the results into a format that I want, such as a spreadsheet.  That’s fine for small projects or sample sizes, but it becomes unwieldy if I want to plot statistics in large codebases.  To address this, I have enlisted the NDepend API.
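
As a rough sketch of what that enlistment looks like (member names recalled from memory, so treat the power tools source and the API documentation as authoritative; the file paths are placeholders):

    using System;
    using System.IO;
    using System.Linq;
    using NDepend;
    using NDepend.Analysis;
    using NDepend.Path;
    using NDepend.Project;

    class MethodStatsExporter
    {
        static void Main()
        {
            // Load an existing NDepend project and analyze it.
            var services = new NDependServicesProvider();
            var project = services.ProjectManager.LoadProject(
                @"C:\work\Client.ndproj".ToAbsoluteFilePath());
            var codeBase = project.RunAnalysis().CodeBase;

            // The same statistics the query editor shows, shaped for a spreadsheet.
            var rows = codeBase.Application.Methods
                .Where(m => m.NbLinesOfCode > 20)
                .OrderByDescending(m => m.NbLinesOfCode)
                .Select(m => $"{m.FullName},{m.NbLinesOfCode},{m.CyclomaticComplexity}");

            File.WriteAllLines(@"C:\work\method-stats.csv", rows);
        }
    }

In practice I would add a header row and escape commas in method names, but this shows the shape of the thing: query once, export once, and skip the transcription entirely.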
