Detecting Performance Bottlenecks with NDepend

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, take a look at NDepend and see what it can tell you about your code.

In the past, I’ve talked about the nature of static code analysis.  Specifically, static analysis involves analyzing programs’ source code without actually executing them.  Contrast this with runtime analysis, which offers observations of runtime behavior, via introspection or other means.

This creates an interesting dynamic when it comes to detecting performance issues with static analysis, because performance is inherently a runtime concern.  Static analysis does its best, most direct work on source code considerations.  It has to take a more indirect route to predict runtime issues.

For example, consider something simple.

public void DoSomething(SomeService theService)
{
    // Dereferences theService without any null check.
    theService.DoYourThing();
}

With a static analyzer, we can easily look at this method and say, “you’re dereferencing ‘theService’ without a null check.”  However, it gets a lot harder to talk definitively about runtime behavior.  Will this method ever generate an exception?  We can’t know that with only the information present.  Maybe the only call to this in the entire codebase happens right after instantiating a service.  Maybe no one ever calls it.

Today, I’d like to talk about using NDepend to sniff out possible performance issues.  But my use of “possible” carries significant weight, because anything definitive gets difficult.  You can use NDepend to inform reasoning about your code’s performance, but you should do so with an eye to probabilities.

That said, how can you use NDepend to identify possible performance woes in your code?  Let’s take a look at some ideas.

Leveraging Out of the Box Warnings

First of all, understand that NDepend does speak to this out of the box.  As you explore the queries and rules, keep an eye out for the following.

  • Instance sizes too big.  If you deal in big, unwieldy instances, they might hinder runtime performance.
  • Methods too long, complex, or with too many parameters.  Similarly, if you have unwieldy methods you may experience problems.
  • Make methods static, if possible.  As a micro-optimization, this saves passing around of a “this” parameter for the instance.
  • Avoid boxing and unboxing.  Boxing and unboxing cause you a performance hit, so keep an eye out for methods that use them needlessly (I’ll show a quick sketch of this in a moment).
  • Remove calls to GC.Collect.  If you’re messing with the garbage collector, chances are you have an opportunity to fix some performance issues around that code.

I do not intend this as an exhaustive list, per se.  I’m only attempting to highlight that NDepend speaks some to performance right out of the box.  You should take the first step of making use of that fact.
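
To make that boxing and unboxing bullet a bit more concrete, here’s a small illustration of my own (not code from NDepend or any of its rules; the names are invented for the example).  The non-generic collection boxes every value it stores, while the generic version does the same work without boxing anything.

public void DemonstrateBoxing()
{
    // The non-generic ArrayList stores object, so every int added gets boxed onto the heap.
    var boxed = new System.Collections.ArrayList();
    for (int i = 0; i < 1000; i++)
        boxed.Add(i);             // boxes each int

    int first = (int)boxed[0];    // unboxes on retrieval

    // The generic equivalent does the same work with no boxing at all.
    var unboxed = new System.Collections.Generic.List<int>();
    for (int i = 0; i < 1000; i++)
        unboxed.Add(i);           // no boxing happens here

    int firstTyped = unboxed[0];
}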

Detecting Lots of Throwing

Now, I’d like to get into some custom setups that you could create for yourself.  Keep in mind that you’ll have to create your own implementations of these things to leverage them in your own codebase.  But doing so won’t prove hard, and you can implement them and then tweak them to your specific situation.

With NDepend and CQLinq, you can detect the instantiation of exception types in methods or types.  For an example of this, check out the out of the box rule, “do not raise reserved exception types.”  This gives you an advantage.
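
To give you a feel for a custom variant, here’s a rough CQLinq sketch that flags methods touching exception types, so you can eyeball which types and namespaces show up most often.  Treat it as a starting point resting on assumptions: the shape mirrors NDepend’s built-in rules, but you should verify names like DeriveFrom and UsingAny against your NDepend version and tighten the filtering to suit your codebase.

// Rough sketch, not an out-of-the-box rule: list methods that reference exception
// types at all, as candidates for a closer look.  Note that this is a coarse proxy;
// it matches catch blocks as well as throws, so expect to refine it.
warnif count > 0
let exceptionTypes = ThirdParty.Types.Where(ex => ex.DeriveFrom("System.Exception"))
from m in Application.Methods.UsingAny(exceptionTypes)
select new { m, m.ParentType }

From there, you could group the results by parent type or namespace, or union in your own application-defined exception types, to zero in on the hot spots.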

I commonly see a mistake wherein developers drift toward using exceptions for control flow.  For instance, they might have a method that returns an integer, but in some specific case they want a string.  Rather than rethinking the method’s responsibilities, they throw an exception for the string case and use the exception’s message as the “returned” string.

This doesn’t just violate the principle of least astonishment.  It also creates real performance problems, because exception handling is expensive.  You can use NDepend to keep an eye on types or namespaces that seem to generate a lot of exceptions.  When you notice such a thing, go in manually to make sure people aren’t using them for control flow.
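
If you haven’t run across it, the pattern looks something like this contrived C# sketch, with the names and helper methods invented for illustration: the “discontinued” case sneaks out through an exception message, while the TryGet-style alternative returns it honestly.

// Anti-pattern: the special case escapes through an exception message, and callers
// end up using catch blocks as if statements on a path that isn't exceptional at all.
public int GetInventoryCount(string sku)
{
    if (IsDiscontinued(sku))
        throw new System.InvalidOperationException("Discontinued");  // a return value in disguise

    return LookUpCount(sku);
}

// A clearer (and much cheaper) shape returns the special case explicitly.
public bool TryGetInventoryCount(string sku, out int count)
{
    if (IsDiscontinued(sku))
    {
        count = 0;
        return false;
    }

    count = LookUpCount(sku);
    return true;
}

// Placeholder helpers so the example stands alone.
private bool IsDiscontinued(string sku) => false;
private int LookUpCount(string sku) => 42;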

High Complexity, Doing Expensive Things

In keeping with the same theme of indirect warning, let’s reason about how performance problems tend to occur.  How often do you find yourself chasing some performance problem only to realize you’re doing something inadvisable inside of a tight loop?  I bet you’d answer, “usually.”

This situation proves somewhat hard to detect directly with static analysis, but you can go after it indirectly with NDepend.  Consider that you have two essential ingredients to this situation: looping and expensiveness.  You’ll need to nibble at both.

In NDepend, we have decent proxies for tight looping: cyclomatic complexity and nesting depth.  Cyclomatic complexity tells us how many paths exist through the code, and nesting depth tells us about control flow statements within control flow statements.

So let’s start with that.  We can detect methods likely to have “tight loops” by looking for high cyclomatic complexity and/or deep nesting.  From there, we can cross-reference that set of methods with references to/creation of/use of known expensive operations.  Does the method use the filesystem?  A database?  A network operation or web service?  You can see what I’m driving at.

You can effectively create a list of candidates for “expensive things in tight loops.”  Once you have this list, you can go through and investigate for actual performance problems.
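
A query along these lines could surface those candidates.  Again, consider it a sketch resting on assumptions: the thresholds are arbitrary, the “expensive” types are just a few common suspects, and you should double-check the metric and extension names (CyclomaticComplexity, ILNestingDepth, WithFullNameIn, UsingAny) against your version of NDepend.

// Rough sketch: methods that look loop-heavy and also touch known expensive machinery.
// The thresholds and the expensive-type list are placeholders to tune, not a canned rule.
warnif count > 0
let expensiveTypes = ThirdParty.Types.WithFullNameIn(
   "System.IO.File",
   "System.Net.Http.HttpClient",
   "System.Data.SqlClient.SqlCommand")
from m in Application.Methods.UsingAny(expensiveTypes)
where m.CyclomaticComplexity > 10 || m.ILNestingDepth > 2
select new { m, m.CyclomaticComplexity, m.ILNestingDepth }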

Build a Blacklist

Finally, I’ll talk about something we built toward in the last section.  You can create a general “blacklist.”

That term may be a bit loaded, though, because I don’t actually advise that you completely eschew expensive types.  Indeed, you’d have a hard time doing much useful without databases, files, and web services.  So I’m not advising that you avoid these things, but rather that you track them.

Audit your application and find all of the expensive things you use.  This will include the aforementioned set of external concerns, but it might also include random, poorly performing third party libraries or bits of untouchable legacy code.  Add anything troublesome to the list.

From there, you just need to build a custom NDepend query to list any and all types using these things.  This way, you establish a baseline and can see if usage increases.  To make it a bit more concrete, consider that, while you may need to access a filesystem at times, you don’t want that happening willy-nilly all over the codebase.
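
Such a query might look something like the sketch below, where the blacklisted full names stand in for whatever your own audit turns up (and, as before, the extension names are assumptions to verify against your NDepend version).  Run it periodically, or track it as a trend, and you’ll see whether usage of the blacklisted items stays flat or starts creeping upward.

// Rough sketch of a blacklist baseline: every application type that touches
// something on the blacklist.  The blacklisted names are placeholders from a
// hypothetical audit; swap in the expensive dependencies you actually care about.
let blacklist = ThirdParty.Types.WithFullNameIn(
   "System.IO.File",
   "System.Net.Http.HttpClient",
   "System.Data.SqlClient.SqlCommand")
from t in Application.Types.UsingAny(blacklist)
select new { t, t.NbLinesOfCode }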

This blacklist approach lets you keep a pretty tight eye on likely sources of performance woe.

But Always Measure

In this post, I’ve offered some ideas for how you can use NDepend to flag potential performance issues.  These will all prove useful to you, but it bears repeating that you need to remember the word “potential.”  It’s all about probabilities.

Once you’ve used NDepend to identify likely culprits, you should absolutely verify.  And to verify, you’ll find nothing more useful than a good performance profiling tool.  All of the prediction in the world won’t show you as much as actual measurements at runtime.  But the fact that the runtime tool provides the ultimate source of truth does not mean it should be the only tool in your toolbox.  Use NDepend to help focus and narrow down your investigations.

2 Comments

Vince (6 years ago):
I find Performance Monitor in Windows to be a valuable tool in detecting performance issues in production, without installing any software and introducing risk. The only problem is, it doesn’t give you much information on the source of the problem. A good time to get cracking with NDepend.