
Static Analysis: Why You Should Care

I don’t want to go into a ton of detail on this just yet, but in broad terms, my next Pluralsight course covers the subject of static analysis. I get the sense that most people’s reaction to static analysis lies somewhere between “what’s that?” and “oh yeah, we use FxCop sometimes.” To be sure, it’s not everyone’s reaction, but I’d say the majority falls into this category. And frankly, I think that’s a shame.

To bring things into perspective a bit, what would you do if you wanted to know how many public static methods were in a given namespace or project? I’m guessing that you’d probably hit “ctrl-shift-f” or whatever “find all in files” happens to be in your IDE, and then you’d start counting up the results, excluding spurious matches for public static classes and properties. Maybe you’d find some way to dump the results to Excel and filter or something a little more clever, but it’s still kludgy.
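To make that concrete, here’s a minimal sketch of what the programmatic version of that query looks like. It assumes Roslyn’s syntax API (the Microsoft.CodeAnalysis.CSharp NuGet package), which is heavier tooling than the post itself presumes, and the file name is a hypothetical placeholder; a real run would walk every file in the project.

```csharp
// Minimal sketch: count public static methods in one source file
// using Roslyn's syntax API. "Reporting.cs" is a hypothetical placeholder.
using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class PublicStaticCounter
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText(File.ReadAllText("Reporting.cs"));

        // Only method declarations are inspected, so the spurious matches
        // (static classes, properties) from a text search never show up.
        var count = tree.GetRoot()
            .DescendantNodes()
            .OfType<MethodDeclarationSyntax>()
            .Count(m => m.Modifiers.Any(SyntaxKind.PublicKeyword)
                     && m.Modifiers.Any(SyntaxKind.StaticKeyword));

        Console.WriteLine($"Public static methods: {count}");
    }
}
```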

And what if you wanted to answer a question like “how many 20+ line methods are there in my code base?” My guess is that you basically wouldn’t do that at all. Perhaps you have an IDE plugin that offers some static analysis and LOC is a common one, but absent that, you’d probably just take a guess. And what if you wanted to know how many such methods in your code base also took a dependency on three or more framework classes? You’d probably just live with not knowing.
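Under the same assumptions (Roslyn’s syntax API, one hypothetical file), the 20+ line question takes only a few lines more. Note that “lines” here means the raw source span of the method, which is a crude stand-in for a true logical LOC metric.

```csharp
// Sketch: flag methods spanning more than 20 lines. Line count is the
// raw source span, a simplification of what a real LOC metric computes.
using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class LongMethodFinder
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText(File.ReadAllText("Reporting.cs"));

        var longMethods =
            from m in tree.GetRoot().DescendantNodes().OfType<MethodDeclarationSyntax>()
            let span = m.GetLocation().GetLineSpan()
            let lineCount = span.EndLinePosition.Line - span.StartLinePosition.Line + 1
            where lineCount > 20
            select new { Name = m.Identifier.Text, Lines = lineCount };

        foreach (var method in longMethods)
            Console.WriteLine($"{method.Name}: {method.Lines} lines");
    }
}
```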

And living with not knowing leads to talking about code in vague generalities where loudness tends to make right. You might describe the whole reporting module as “tricky” or “crappy” or “buggy,” but what do those things really mean, aside from conveying that you more or less don’t trust that code? But what if you could run some qualitative and quantitative analysis on it and say things like “more than 80% of the methods in that module depend on that flaky third party library” or “there are several classes in there that are used by at least 40 other classes, making them extremely risky to change.” Now you have tangible, quantifiable problems for which you can find measurable solutions that can be validated. And that ability is solid gold in a profession often dominated by so-called religious arguments.
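And for illustration, here’s a rough sketch of how you might estimate that “used by at least 40 other classes” number. It matches type names purely syntactically, so it over-counts on name collisions, and both OrderValidator and the source directory are hypothetical stand-ins; a real tool would resolve references with semantic analysis.

```csharp
// Rough sketch: approximate how many classes depend on a given type by
// counting the classes that mention its name. Purely syntactic, so name
// collisions inflate the count; a semantic-model pass would be precise.
using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class CouplingEstimator
{
    static void Main()
    {
        const string targetType = "OrderValidator"; // hypothetical class under scrutiny

        var dependentClassCount = Directory
            .EnumerateFiles(@"C:\src\MyProject", "*.cs", SearchOption.AllDirectories)
            .SelectMany(file => CSharpSyntaxTree.ParseText(File.ReadAllText(file))
                .GetRoot()
                .DescendantNodes()
                .OfType<ClassDeclarationSyntax>())
            .Count(c => c.Identifier.Text != targetType
                     && c.DescendantNodes()
                         .OfType<IdentifierNameSyntax>()
                         .Any(id => id.Identifier.Text == targetType));

        Console.WriteLine($"{dependentClassCount} classes depend on {targetType}");
    }
}
```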

Static analysis of the variety that gives you detailed information about your code and warns you about potential problems combines two incredibly useful software development techniques: code review and fast feedback. Code reviews involve peer inspection of code, but it is conceptually possible to get a lot of the benefit of this activity by having the reviewers codify and store common rulesets that they would apply when doing actual reviews: no methods longer than X lines, no more code added to class Y, etc. Done this way, fast feedback becomes possible because the reviewee doesn’t actually need to find time with reviewers but can instead keep running the analysis on the code as he writes it until he gets it right.
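To give one concrete instance from the .NET world this post lives in: Visual Studio’s Code Analysis lets a team codify exactly this kind of ruleset in a .ruleset file, so violations surface on every build rather than waiting for a scheduled review. The rule IDs below are built-in maintainability rules (excessive complexity, excessive class coupling); the severity choices are illustrative, not recommendations.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Sketch of a team ruleset; the Action values are illustrative choices. -->
<RuleSet Name="Team Review Rules" Description="Codified code review checks" ToolsVersion="12.0">
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
    <Rule Id="CA1502" Action="Error" />   <!-- avoid excessive complexity -->
    <Rule Id="CA1506" Action="Warning" /> <!-- avoid excessive class coupling -->
  </Rules>
</RuleSet>
```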

There are plenty more benefits that I could list here. I could even talk about how static code analysis is just flat-out fascinating (though that’s something of an editorial opinion). But, for my money, it makes the discussion of code quality scientific, and it dramatically speeds up the review/quality feedback loop. I think pretty much any software group could stand to have a bit of that magic dust sprinkled on it.

2 Comments
JeffLeBert
10 years ago

I love static analysis. I’ve been using FxCop since it was released and have written 150+ custom rules. We have a fairly large framework and I’ve written a bunch of rules to make sure it is used correctly. I would rather build the framework so people use it correctly, but that’s a different story. It boggles my mind that people don’t use the free utilities that are out there. That includes unit testing frameworks as well. When I joined my current group I asked about FxCop and they said something to the effect of “it looked OK but there were… Read more »

Erik Dietrich
10 years ago
Reply to JeffLeBert

I agree with you on all counts here: taking advantage of tools for analysis, preferring code that guides people to fall into the “pit of success,” and, absent that, automating warnings/messages when people are doing the wrong thing.

And yeah, the whole “we’re beyond hope” thing seems common with applying new tools to the code base. I don’t get that, or why they can’t figure out “fix a little at a time.” We could even just flip on “warnings as errors.”
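(For reference, that flip is a single MSBuild property in the project file; the sketch below omits the rest of the .csproj.)

```xml
<!-- Sketch: have the compiler treat all warnings as build errors -->
<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
```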