Keeping Your Code Clean while Logging

Editorial Note: I originally wrote this post for the LogEntries blog.  You can check out the original here, at their site.  While you’re there, take a look at their product offering that handles your log aggregation, search, and alerting needs.

In my consultancy practice, one of the things that I do most frequently is help teams write so-called “clean code.”  Usually, this orients around test-driven development (TDD) and writing code that is easily maintained via regression tests and risk-free refactoring.  Teams want to understand how to do this, and how to do it in their production code (as opposed to in some kind of toy “let’s build a calculator” exercise).

One of the most prominent early sticking points to rear its head is application logging.  Why?  Unit testing is all about isolating objects, instantiating them, and rapidly verifying their behavior in memory.  Logging is all about dumping as much information as possible to a file (or database, service, etc., via appenders) from as many places as possible.  Unstoppable force, meet immovable object.

For instance, consider the following C# code from a “let’s build a calculator” exercise.

[TestMethod]
public void Adding_Numbers_Works()
{
    var target = new Calculator();

    var result = target.Add(2, 2);

    Assert.AreEqual<int>(4, result);
}

public class Calculator
{
    public int Add(int x, int y)
    {
        return x + y;
    }
}

Life here is good.  The calculator’s Add method returns the sum of two integers, and we have a test method that supplies 2 and 2 and confirms a result of 4.  That’s a screaming fast, very clear test.  But consider what happens if we amend our calculator to take advantage of logging.

public class Calculator
{
    public int Add(int x, int y)
    {
        Logger.Log($"Entering add method with parameters {x} and {y}.");
        Logger.Log($"Returning result of {x + y}");
        return x + y;
    }
}

Now, with that same test, everything goes wrong.  Instead of passing, it throws an exception about not having write access to some weird directory on your machine that you’ve never even heard of.  After digging a bit, you see that it’s the directory in which the test runner executes when running your solution’s tests.  Hmmm.
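
For the sake of illustration, imagine the static Logger behind those calls looks something like this hypothetical sketch; the hard-coded file write is the culprit:

public static class Logger
{
    public static void Log(string message)
    {
        // Appends to a file in the current working directory. Under a
        // test runner, that directory is wherever the runner executes,
        // and you may not have write access to it.
        System.IO.File.AppendAllText("application.log", message + System.Environment.NewLine);
    }
}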

Unit tests and file I/O (or external system concerns in general) do not tend to play nicely together.  And, while you can hack your own environment to allow logging while your unit tests are running, it gets much dicier when your teammates also need to run these, and when the build machine does as well.  For this reason, I tend to steer developers away from a world in which their unit tests result in file I/O.

But that world can’t be one that eliminates logging.  Logging is far too valuable an auditing and troubleshooting tool to simply toss aside.  The question then becomes, how do you have both?  How do you have clean, unit-tested code and logging?  Well, there are a number of techniques that I’ll discuss.

Different Run Modes

Depending on language and framework, you may have the option of code that is selectively included at build/compile time.  In C#, it’s possible to decorate a method with the [Conditional] attribute.  Effectively, this gives you the option to turn the logging method into a no-op by default and to enable it for production deployments.
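
Here’s a minimal sketch of that, assuming a PRODUCTION compilation symbol that you define only for production builds (the symbol name is illustrative):

using System.Diagnostics;

public static class Logger
{
    // Call sites for this method are compiled in only when the PRODUCTION
    // symbol is defined (e.g., via /define:PRODUCTION in the build).
    // In a test build without that symbol, every Logger.Log(...) call
    // simply vanishes from the compiled code.
    [Conditional("PRODUCTION")]
    public static void Log(string message)
    {
        System.IO.File.AppendAllText("application.log", message + System.Environment.NewLine);
    }
}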

This is a pretty nifty way to sidestep testability issues.  When the unit tests are running, the calls to the logger just kind of magically disappear, but they exist in production.  On the downside, anything “magical” in your code can confuse team members, and I generally advise people to test in an environment as close to production as possible.  Having flags in your codebase, however elegant, that say “if production, do this; otherwise, do that” subtly erodes the value of your tests.

Dependency Injection

If messing with compiler flags isn’t your thing (or isn’t possible in your codebase), you can always inject the dependency.  In the snippet above, the logger is accessed through a static method (which is common).  This means that there’s no avoiding it or mitigating the situation in a unit test.  But consider the following alternative code snippet.

public class Calculator
{
    private readonly ILogger _logger;

    public Calculator(ILogger logger)
    {
        _logger = logger;
    }

    public int Add(int x, int y)
    {
        _logger.Log($"Entering add method with parameters {x} and {y}.");
        _logger.Log($"Returning result of {x + y}");
        return x + y;
    }
}

Here, the logger is injected into the calculator class, which gives the unit test method the ability to introduce a mock object.  Since the logger is referenced only through an interface, we can supply an alternative implementation at test time that never touches the file system.
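
The test might then look like the following sketch, assuming ILogger declares a single void Log(string) method; the FakeLogger test double here is hypothetical, and a mocking framework such as Moq would serve the same purpose:

[TestMethod]
public void Adding_Numbers_Works()
{
    // The fake satisfies the constructor without touching the file system.
    var target = new Calculator(new FakeLogger());

    var result = target.Add(2, 2);

    Assert.AreEqual<int>(4, result);
}

public class FakeLogger : ILogger
{
    public void Log(string message) { /* deliberately does nothing */ }
}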

This is often the preferred approach, but it does have a notable downside: it tends to clutter the constructor of every object that needs logging.  Code that favors this approach can get noisy, all for the sake of something that is not the primary focus of the object (and thus, one might argue, a violation of the Single Responsibility Principle (SRP)).

Derived Logger

The author of the SRP, Bob Martin, offers an innovative solution.  One possible approach, he suggests, is to create a derivative of Calculator called “LoggingCalculator.”

public class Calculator
{
    public virtual int Add(int x, int y)
    {
        return x + y;
    }
}

public class LoggingCalculator : Calculator
{
    private readonly ILogger _logger;

    public LoggingCalculator(ILogger logger)
    {
        _logger = logger;
    }

    public override int Add(int x, int y)
    {
        _logger.Log($"Entering add method with parameters {x} and {y}.");
        _logger.Log($"Returning result of {x + y}");
        return base.Add(x, y);
    }
}

With this approach, Calculator stays “pure” and unit testable.  And since LoggingCalculator is a derivative of Calculator, it can be swapped in anywhere a Calculator is used.  All LoggingCalculator (or a Logging* derivative of any class) does is add logging functionality, which can be tested elsewhere, such as in integration tests.
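
Swapping it in is then just a matter of what your production wiring instantiates; for example (FileLogger being a hypothetical ILogger implementation that writes to disk):

// Anywhere a Calculator is expected, the logging derivative drops in.
Calculator calculator = new LoggingCalculator(new FileLogger());
int sum = calculator.Add(2, 2); // logs entry and result, then delegates to base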

While this does remove the noise from the classes that you want to test and from a lot of your code, it does have the drawback of creating a lot of noise in the project as a whole.  You now have up to twice as many classes/files as you’d have otherwise had.

Aspect-Oriented Programming

If you really want to cut out the boilerplate as much as possible, you can consider aspect-oriented programming.  Aspect-oriented programming is aimed at consolidating all code for a cross-cutting concern (like logging) into a single location.  In essence, rather than having every method in your code base worry about logging, you have code in one place that worries about the mechanics of logging and about inspecting the rest of the code and making the determination as to where the logging should occur.

This is a powerful concept, but it is also heavy on black magic.  Generally, AOP is accomplished with a technique known as weaving, in which a tool transforms the code you’ve written, injecting the cross-cutting behavior at compile time or at runtime.  If you’re not careful, this can produce some pretty complex bugs at runtime.
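
Compile-time weaving in .NET generally means a third-party tool, such as PostSharp.  To keep a sketch self-contained, here is the same idea of consolidating logging into one place, expressed instead with runtime interception via the framework’s DispatchProxy.  Note that this intercepts interface calls at runtime rather than weaving at compile time, and ICalculator is a hypothetical interface extracted from Calculator:

using System;
using System.Reflection;

// All logging mechanics live here; no business class mentions a logger.
public class LoggingProxy<T> : DispatchProxy where T : class
{
    private T _target;

    public static T Wrap(T target)
    {
        object proxy = Create<T, LoggingProxy<T>>();
        ((LoggingProxy<T>)proxy)._target = target;
        return (T)proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        // A real implementation would call your logger here instead.
        Console.WriteLine($"Entering {targetMethod.Name} with {string.Join(", ", args)}");
        object result = targetMethod.Invoke(_target, args);
        Console.WriteLine($"Returning {result}");
        return result;
    }
}

// Usage: ICalculator calculator = LoggingProxy<ICalculator>.Wrap(new Calculator());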

Judicious Logging

One last, more moderate point worth mentioning: you can be judicious about where you log.  I’ve seen plenty of low-value logging code that is ubiquitous in applications and turns out to be pure noise on the operations side.  Think about logging only in certain modules of your application.  Boundaries with external systems make a great heuristic: log inbound requests and responses, or log queries to the database along with their results.

It’s not as complete as exhaustive logging, but it can paint a pretty good picture of what’s happening.

Combine for the Win

The techniques I’ve mentioned are not mutually exclusive and they do not exist in a vacuum.  You should mix and match them as necessary and as suits your particular application, style, and preference.  You need logging for traceability, and you need to keep your code clean for maintainability.  With a little practice at these techniques, there’s no reason you can’t have both.

12 Comments

hurcane
7 years ago

In .NET, I have favored TraceSource events as an alternative to logging directly to files. Using trace events, my production code can be configured to generate actual log files. Unit tests still execute the TraceSource statements, so any null reference exceptions that might occur in logging will still be caught. And logging can be tested by wiring up a listener within the unit test.

Inversion of Control containers are another useful technique. The logger can still be static, but accessed through the IOC container, which can be “mocked” for unit tests.
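
For anyone who hasn’t used TraceSource, a minimal sketch of the approach hurcane describes (the source name is illustrative):

using System.Diagnostics;

public class Calculator
{
    private static readonly TraceSource _trace = new TraceSource("Calculator");

    public int Add(int x, int y)
    {
        // This line always executes, so a bug in it still surfaces under
        // test; where (and whether) the event is written is configuration.
        _trace.TraceEvent(TraceEventType.Information, 0, "Adding {0} and {1}", x, y);
        return x + y;
    }
}

With no listeners configured, as in a typical test run, the events go nowhere; production configuration routes the source to file listeners.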

mwaterman
7 years ago
Reply to  hurcane

Yup, TraceSource is a great way to go for .NET logging–you can not only configure the listeners with minimal effort, but you can easily configure the level of verbosity for each listener. And if you use the Essential Diagnostics package (http://essentialdiagnostics.codeplex.com/) then you get a boatload of useful listeners (rolling text files, email notifications, etc.).

Erik Dietrich
7 years ago
Reply to  hurcane

Thanks for posting this. TraceSource isn’t something I’ve had any experience with, myself, so I’ll have to give it a look and play with it.

da
7 years ago

It should be noted that the Derived Logger is what’s referred to as the Decorator pattern and is an original GOF pattern. Also, most IOC frameworks have some support for the decorator pattern, so you don’t have to get into the weeds of wiring up the decorators in a ton of different places. Ideally, you don’t want to have a ton of constructors with ILoggers passed in to every single class that might need one. If you don’t want to go with an AOP solution or simply need more detailed logging than can be done outside of a method,… Read more »

Erik Dietrich
7 years ago
Reply to  da

I think there’s a subtle difference between what I’m talking about (or at least what I intended to talk about) and the Decorator pattern. One *could* implement derived logger using that pattern, but one could also implement it with simple inheritance. IOW, Decorator Pattern has three inheritance levels, with Abstract “Decorator” in the middle and decorator implementations below. But you can implement the scheme I’m talking about with two classes: Service and LoggingService. As for the ambient context, thanks for the links; I wasn’t familiar with that term or the thinking around it. I went and read a bit about… Read more »

da
7 years ago
Reply to  Erik Dietrich

>I think there’s a subtle difference between what I’m talking about (or at least what I intended to talk about) and the Decorator pattern. I see that you’re doing something slightly different than the decorator pattern by using inheritance. Looking at it now, there’s slightly more structure to the decorator pattern than you really need for logging. I think it ultimately comes down to whether you have your logging class reimplement the interface of the base class or inherit from it and call virtual methods. I think having those virtual methods unnecessarily opens up the base class for modification… Read more »

Erik Dietrich
7 years ago
Reply to  da

Logging has always struck me, from a pure code cleanliness perspective, as perhaps the hardest thing to make elegant. AOP, inheritance, etc, are all decent enough attempts, but you’re right in that there are cases where you need more. It sounds like, if the team is diligent about controlling it, the ambient context solution can work effectively, but it also sounds like it comes with its own form of inelegance.

I wonder, at a very philosophical level, if some holy grail is possible with this.

Lars Johannesson
7 years ago

The best way to avoid having logging code interfere with tests is imho to use a log framework (such as NLog, log4net, or similar) where logging calls can be configured away entirely or redirected to storage other than files during testing. Though when using one of these frameworks, failed log writes shouldn’t even interfere with the executing code in the first place.

Regarding the techniques discussed, I agree that they are all clean ways to increase testability; it’s just that, when it comes to logging, none of them should be necessary at all.

Erik Dietrich
7 years ago

I wanted to cover a variety of strategies because my experience with clients is that there are always a million reasons they can’t do thing X. “We can’t use a logging framework because open source is evil” (or whatever).

But point taken. A lot of the problems created by cross-cutting logging have been addressed by logging frameworks.

Lars Johannesson
7 years ago
Reply to  Erik Dietrich

I guess you can sort of say the log framework developers have already applied many (all?) of these techniques in their frameworks for the ultimate combination and win, where you get all the benefits without much (any) impact on your own code 🙂

Timothy Klenke
7 years ago

Another technique I sometimes use is to have your class raise Log events. The production code can route these events to the log file. The test code can capture the events to verify the proper logging is done, or just not handle them.

Erik Dietrich
7 years ago
Reply to  Timothy Klenke

Good idea! I’ve not ever done this myself, per se, so I can’t speak to the pitfalls that might come up. But that strikes me as a potentially elegant way to separate critical path from not as critical path.
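
A minimal sketch of the event-based idea described above (the event name and wiring are illustrative):

using System;

public class Calculator
{
    // Raised instead of logging directly; subscribers decide what to do.
    public event EventHandler<string> LogRequested;

    public int Add(int x, int y)
    {
        LogRequested?.Invoke(this, $"Adding {x} and {y}");
        return x + y;
    }
}

// Production wiring routes events to the real logger:
//   calculator.LogRequested += (sender, message) => Logger.Log(message);
// A test can subscribe to assert on messages, or simply not subscribe at all.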