Editorial Note: I originally wrote this post for the NDepend blog. You can check out the original here, at their site. While you’re there, download a trial of NDepend and give it a spin.
I do a lot of work with and around static analysis tools. Obviously, I write for this blog. I also have a consulting practice that includes detailed codebase and team fact-finding missions, and I have employed static analysis aplenty during run-of-the-mill architect gigs. Doing all of this, I've noticed that the practice has a reputation as something just for techies.
Beyond that even, people seem to perceive static analysis as the province of the uber-techie: architects, experts, and code statistics nerds. Developing software is for people with bachelor's degrees in programming, but static analysis is PhD-level stuff. Static analysis nerds go off, dream up metrics, and roll them out to measure developers and codebases.
This characterization makes me sad, doubly so when I see something like test coverage or cyclomatic complexity used as a cudgel to bonk programmers into certain, predictable behaviors. At its core, static analysis is not about standards compliance or behavior modification, though it can be used for those things. Static analysis is about something far more fundamental: furnishing data and information about the codebase (without running the code). And information about the code is something that everyone on or around the team wants.
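To make "data without running the code" concrete, here's a minimal sketch, in Python for illustration (NDepend itself targets .NET), of a naive cyclomatic complexity counter. It parses source text into a syntax tree and counts branch points, never executing a single line. The branch-node list and scoring here are simplifications of the real metric, not how any production tool computes it.

```python
import ast

# Node types that introduce a decision point (a simplified list).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict[str, int]:
    """Approximate cyclomatic complexity per function, by parsing alone."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Start at 1 (the straight-line path), add 1 per branch point.
            branches = sum(isinstance(child, BRANCH_NODES)
                           for child in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

if __name__ == "__main__":
    sample = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
    print(cyclomatic_complexity(sample))  # {'classify': 3}
```

The code never imports or calls `classify`; it only reads its structure. That's the whole trick, and it's why static analysis can report on code that doesn't even compile yet, let alone run.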
To drive this point home, I'd like to cite some examples of less commonly known value propositions for static analysis within a software group. Granted, all of these require a more indirect route than, "install the tool, see what warnings pop up," but they're all there for the realizing, if you're so inclined. One of the main reasons that static analysis can be so powerful is scale: a tool can analyze 10 million lines of code in minutes, whereas a human would need months.
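As a rough illustration of that scale advantage, here's a sketch that walks an entire source tree and tallies functions and lines by parsing alone. The "src" path is a placeholder, and the line counting is deliberately crude, but even this toy version chews through a codebase far faster than any human reader could.

```python
import ast
import time
from pathlib import Path

def count_functions_and_lines(root: str) -> tuple[int, int]:
    """Walk a source tree and tally functions and lines, purely by parsing."""
    functions, lines = 0, 0
    for path in Path(root).rglob("*.py"):
        source = path.read_text(encoding="utf-8", errors="ignore")
        lines += source.count("\n") + 1  # crude line count
        try:
            tree = ast.parse(source)
        except SyntaxError:
            continue  # skip files that don't parse
        functions += sum(isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
                         for n in ast.walk(tree))
    return functions, lines

if __name__ == "__main__":
    start = time.perf_counter()
    funcs, loc = count_functions_and_lines("src")  # "src" is a placeholder path
    elapsed = time.perf_counter() - start
    print(f"{funcs} functions across {loc} lines in {elapsed:.2f}s")
```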