DaedTech

Stories about Software


Microsoft Pex

I’ve recently started to play around with Microsoft Pex, and I’m discovering (1) that there is a non-trivial learning curve; and (2) that it seems like a pretty powerful tool. As best I can tell at this early stage (and admittedly, I’m certainly a neophyte), there seem to be two major components. The first is the notion of parameterized unit testing (PUT) and the second is automated test generation.

For anyone not familiar with parameterized unit testing, the idea is that rather than writing the first bit of code below, you can write something that conceptually mirrors the second bit, but with better syntax. In the .NET world, I believe that MbUnit supplies this syntax natively (in the form of a “row test”) and that users are left to reconstruct it themselves with NUnit and MS Test. I’m too far removed at the moment from unit testing suites in Java or C++ to comment with any authority on those languages and PUTs.
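Conceptually, the contrast is something like this (Calculator and its Add method are just stand-ins here for whatever you’re actually testing):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    // First bit: the copy-and-paste approach, one decorated test per set of inputs.
    [TestMethod]
    public void Add_Returns_Four_For_Two_Plus_Two()
    {
        Assert.AreEqual(4, new Calculator().Add(2, 2));
    }

    [TestMethod]
    public void Add_Returns_Zero_For_Two_Plus_Negative_Two()
    {
        Assert.AreEqual(0, new Calculator().Add(2, -2));
    }

    // Second bit: what you would conceptually like to write instead, a single
    // method that takes its inputs as parameters. MS Test won't run this as a
    // test on its own, which is where the next paragraph picks up.
    public void Add_Returns_Sum_Of_Operands(int first, int second, int expectedSum)
    {
        Assert.AreEqual(expectedSum, new Calculator().Add(first, second));
    }
}
```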

This second example is likely something that we’ve all done, with or without some supporting syntax from the test library. Obviously, unit test methods don’t take parameters, so you might have coded up some standard helper like the method above and called it from actual, decorated test methods (or, if you’re like me, you created an assert extension class that grows as you go). Pex creates a series of method-decorating attributes, and some of them appear oriented toward generating tests with these semantics: they pepper your SUT (system under test) with ranges of values and see how it behaves.
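A Pex parameterized unit test winds up looking something like this (a minimal sketch; the commutativity assertion and the Calculator class are just for illustration):

```csharp
// A minimal Pex parameterized unit test. Pex itself chooses values for the
// parameters during exploration; the attributes come from Microsoft.Pex.Framework.
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[PexClass(typeof(Calculator))]
[TestClass]
public partial class CalculatorPexTests
{
    [PexMethod]
    public void Add_Is_Commutative(int first, int second)
    {
        var target = new Calculator();
        Assert.AreEqual(target.Add(first, second), target.Add(second, first));
    }
}
```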

But the really snazzy part of Pex that you can’t mimic with helper methods is that it analyzes your code, looking for interesting values. That is, you don’t supply “rangeStart” and “rangeStop” to Pex–it deduces them on its own, based on static and dynamic analysis of your code, and tries them out. The first time I ran Pex, it found all sorts of exception-generating conditions in code that I had thought to be fairly bulletproof, given that I had added a whole slew of contracts using Code Contracts to check preconditions and invariants and enforce post conditions. While somewhat humbling, that is extremely helpful. It’s better than the best possible manual code review (for seeing potential exceptions, at least), and better than the manual tests you generate. It’s automatically executing your code and guiding it towards exception-generating conditions.

Making things even better is the fact that Pex presents the results and gives you the option to actually generate unit tests (in your framework of choice) based on the exceptions it finds. It refers to this as promoting the tests. So the process flow is that you run a “Pex Exploration,” which does dynamic analysis (mixed, I believe, with some forms of static analysis) to find exceptional conditions. It then shows you a list of the results of its parameterized runs and whether they passed, failed, or were inconclusive (there is a configurable timeout for each exploration) and allows you to pick and choose which results you want to promote to unit tests saved in files in your codebase.

One thing that I find a bit off-putting is the fact that the tests it generates are strewn across a number of “.g.cs” partial classes, at least with MS Test. That is really just a matter of personal taste, and I can probably get used to it. It seems to generate the partials based on the method in your code that it’s exploring, handle the setup for the call there, and then invoke the main class to actually call the method in question. This is kind of confusing, as the setup occurs in a more specific class and the actual test assert is somewhere else. However, when you open the test from the test result window (which is what I’ve been doing), you can set a breakpoint and step through the heavily attribute-decorated Pex boilerplate, winding up in your own code and seeing the exceptions. And that’s really what matters.

As I play more with this apparently slick and powerful tool, I’ll have more to say about it. I’m playing with it in conjunction with Code Contracts, and there seems to be a natural synergy between these products. I’ll have more to post on both subjects in this series of posts.


Why I don’t like C# extension methods

When I first became aware of the Extension Method in C#, my reaction was similar to that of a lot of people–I thought it was a pretty neat trick. For anyone not familiar with extension methods, they are a construct introduced in C# along with LINQ to allow ‘extension’ of a class. Here’s an example:
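Something along these lines; the body of GetWordCount is just a simple placeholder to show the mechanics, and the static class name Extensions matches how I refer to it below:

```csharp
using System;

public static class Extensions
{
    // The "this" modifier on the first parameter is what makes this an
    // extension method on string.
    public static int GetWordCount(this string target)
    {
        if (string.IsNullOrEmpty(target))
            return 0;
        return target.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length;
    }
}

// Client code can call it either way; to the compiler they are the same thing:
//     str.GetWordCount();
//     Extensions.GetWordCount(str);
```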

What’s going on behind the scenes here is that Microsoft introduced syntactic sugar on standard static methods. An extension method is really just a static method, and as far as the compiler is concerned, the client code is no different than it would be if you called “Extensions.GetWordCount(str);”

The advantages are immediately obvious. You can abstract verbose and redundant common idioms into these methods, favoring instead descriptive terminology. You can make it look as though GetWordCount() is a method of string, even though it isn’t. This means that, done right, you can create the appearance of enhanced classes without modifying the classes themselves. If you keep these methods well organized, they seem a natural enhancement to the language. And perhaps the most powerful thing of all is that you can extend library classes that are sealed or have non-virtual methods, or you can supply enhancements to your own API for others without the risk of introducing breaking changes.

But in spite of those benefits, something bothered me about this idea. I couldn’t quite put my finger on it for a long time. The drawbacks that one might see in a Google search or realize on one’s own include decoupling class-specific functionality from that class (i.e., violating encapsulation) and the fact that you’re favoring free-floating static utility over instance methods. There is also the possibility of naming collisions if you write an extension method with the same signature as an instance method. And there is the confusion of not knowing through Intellisense whether you’re dealing with a native method of some class or an afterthought that somebody tacked on. But all of those drawbacks seemed to have reasonable counterpoints, and none of them really address the notion of enhancing bad APIs that you’ve been given or putting out your own extensions to preserve reverse compatibility.

So, as an experiment, I developed a little utility and decided to load up on extension methods to really try them out, having previously only been a client of them. I was writing a bunch of things out to an HTML file and using the XDocument API to do it (taking advantage of the fact that HTML is conceptually a subset of XML), so I thought I had the perfect opportunity. I added a bunch of extension methods for XElement that would do things like create tables, styles, and other HTML constructs. I would do something like myXElement.CreateTableOutline(rows, columns, title, headerColor);
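Such a method looked roughly like this (simplified considerably from what I actually had):

```csharp
using System.Xml.Linq;

public static class XElementExtensions
{
    // Builds a skeletal HTML table under the given element: a styled title
    // row followed by an empty grid of the requested dimensions.
    public static XElement CreateTableOutline(this XElement parent, int rows,
        int columns, string title, string headerColor)
    {
        var table = new XElement("table",
            new XElement("tr",
                new XElement("th",
                    new XAttribute("colspan", columns),
                    new XAttribute("style", "background-color:" + headerColor),
                    title)));

        for (int row = 0; row < rows; row++)
        {
            var tableRow = new XElement("tr");
            for (int column = 0; column < columns; column++)
                tableRow.Add(new XElement("td", string.Empty));
            table.Add(tableRow);
        }

        parent.Add(table);
        return table;
    }
}
```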

It was pretty slick. And, as I did it, I found that these extension methods begat more extension methods for other things. Pretty soon, I had the cleanest, easiest-to-read code that you could ever want in a data access object. It read like a book, not like code: “Element.CreateTableWithThis()” and “Element.PopulateTableWithThat()”. Best of all, it looked like a beautiful object-oriented design.

And, at that moment, the thing I couldn’t put my finger on that had been bothering me leaped into full view. It looked like object-oriented design. Extension methods, as I mentioned earlier, are just stateless (hopefully) static methods gussied up to look like instance methods. And, stateless static methods are really just procedural constructs that exist outside of any application state at all–the much younger brother with a striking resemblance to something in the C world that you might call “utils.c” and populate with 8000 functions that just didn’t fit anywhere else.

The whole time, I had been making my code increasingly procedural. I had been moving ugly functionality out of my instances and putting it into the blob of stateless static functionality, prettying it up by making it a series of extension methods. Rather than create some kind of HtmlElement class that inherited from or wrapped an XElement, I created a laundry list of extension methods that could easily have been called “public static class HtmlUtils”. Taken to its logical conclusion, I could write an entirely procedural application with nothing but extensions of existing classes. I realize that nobody is likely to do that, but it is interesting that such a thing is possible and that you can gravitate toward it without realizing it.

And that is the crux of what bothered me all along. Here, we have a brand, spankin’ new language feature introduced to an object-oriented language that gives users the ability to make their code substantially more procedural while making it look substantially more like a clean object-oriented design. Extension methods are the ultimate in deodorant for a code smell. If you venture down that rabbit hole, you’ll never ask yourself why you’re increasingly favoring static utils methods instead of object-oriented design because you will fool everyone else and even yourself into thinking that you are using object-oriented principles.

The denouement of my little story was that I systematically removed all of the extension methods. In their place, I created a series of small methods on a common base class and factored some of the functionality into value objects and small inheritance structures that defined HTML constructs. The result wasn’t as pretty, and I didn’t get the fluent interface feel, but the way I see it, I gave up high fructose corn syrup for fresh-squeezed orange juice. The new design may have required that I add a few classes, but I now have the ability to extend CreateTable() in sub-classes rather than adding a bunch of additional static methods for that functionality. With stateless static methods, you can’t inherit base classes or implement interfaces to take advantage of polymorphism.
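To give a flavor of the direction, here is a simplified sketch of the kind of base class I mean (the names are illustrative, not my actual code):

```csharp
using System.Xml.Linq;

// An inheritable base class instead of a pile of static extension methods.
public abstract class HtmlElementBuilder
{
    protected XElement Element { get; private set; }

    protected HtmlElementBuilder(XElement element)
    {
        Element = element;
    }

    // Subclasses can override this to customize table generation, which is
    // something a static extension method can never offer.
    public virtual XElement CreateTable(int rows, int columns)
    {
        var table = new XElement("table");
        for (int row = 0; row < rows; row++)
        {
            var tableRow = new XElement("tr");
            for (int column = 0; column < columns; column++)
                tableRow.Add(new XElement("td"));
            table.Add(tableRow);
        }

        Element.Add(table);
        return table;
    }
}
```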

I’m not advocating that everybody swear off extension methods (or really advocating much of anything), but rather explaining why I don’t like the construct and choose not to use it. Since my discovery, I’ve paid closer attention to how other people use extension methods. I’ve seen them used to circumvent the compiler’s prevention of circular dependencies. I’ve seen them used to ‘extend’ functionality in the same namespace where it would have been perfectly reasonable just to add a method to the class itself. I’ve even seen them used to hold and alter global state. And I’ve also seen them used somewhat reasonably.

But, at the end of the day, extension methods are the cousin of the singleton in that they both have an unanticipated side effect. That side effect is a vehicle for developers who are forced to develop in OO languages, but have always carried the torch for procedural programming, to sneak back to visit the ex without anyone knowing. In both cases, for the OO programmer, I would advise a bit of frank examination of the motivations for using the construct. Is it because extending the class (or forcing one global instance) is the only/best possible approach, or is it because it’s easier and faster than taking a hard look at your design or object graph? If it’s the latter, you could be heading for some headaches down the road.


Microsoft Code Contracts

I’m currently working on an ongoing side project, and we’re using Microsoft Code Contracts, so I thought that I would create a series of posts documenting my experience with this as a new and curious user. The basic idea of it is to standardize and formalize preconditions, post conditions, and invariants in your classes.

For instance, let’s say that you work on a C# project in which you can’t really trust your users or your dependencies not to do unpleasant things. You might create a class that looks like this:
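Something along these lines (a car service with a data access object, which is the example I’ll keep coming back to; the specific members are just illustrative):

```csharp
using System;

public class Car { /* properties elided */ }

public interface ICarDao
{
    Car FindById(int id);
}

public class CarService
{
    private readonly ICarDao _carDao;

    public CarService(ICarDao carDao)
    {
        // Guard clause: the service is useless without a data access object.
        if (carDao == null)
            throw new ArgumentNullException("carDao");
        _carDao = carDao;
    }

    public Car GetCar(int id)
    {
        // Guard clause: only positive ids are meaningful.
        if (id <= 0)
            throw new ArgumentException("id must be positive.", "id");
        return _carDao.FindById(id);
    }
}
```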

This is the epitome of defensive code, and it’s a smart thing to do in the context that I mentioned. At the beginning of each method, you ferret out the things that will create problems for operations (precondition violations) and throw if they aren’t satisfied. In the constructor, you, perhaps, enforce some class invariants (i.e. our service class basically can’t do anything if its data access reference is null).

That’s fine, but it creates a certain amount of boilerplate overhead, and it’s somewhat ad-hoc. Code Contracts standardizes it:
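The same class, with the guard clauses expressed as contracts, looks roughly like this (same Car and ICarDao as above):

```csharp
using System.Diagnostics.Contracts;

public class CarService
{
    private readonly ICarDao _carDao;

    public CarService(ICarDao carDao)
    {
        // Precondition: the service is useless without a data access object.
        Contract.Requires(carDao != null);
        _carDao = carDao;
    }

    public Car GetCar(int id)
    {
        // Precondition: only positive ids are meaningful.
        Contract.Requires(id > 0);
        return _carDao.FindById(id);
    }
}
```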

As you can see, the defensive coding boilerplate becomes a little less verbose. But what’s more is that Code Contracts will also generate compiler warnings when client code does not or may not satisfy the preconditions. So, if I had the code below in another class, I would be shown a compiler warning that the precondition was not satisfied.
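Something like this contrived caller, for instance (the class is just for illustration):

```csharp
public class CarLookup
{
    public Car FindSomeCar()
    {
        // Both of these lines draw warnings from the Code Contracts checker:
        // the constructor requires a non-null DAO, and GetCar requires a
        // positive id.
        var service = new CarService(null);
        return service.GetCar(0);
    }
}
```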

Another cool facet of this is that you can express post conditions in a way that you really can’t with the guard/throw paradigm. For instance:
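Here the method promises never to hand back a null Car; the fallback to a new Car() is just one illustrative way the method might honor that promise:

```csharp
using System.Diagnostics.Contracts;

public class CarService
{
    private readonly ICarDao _carDao;

    public CarService(ICarDao carDao)
    {
        Contract.Requires(carDao != null);
        _carDao = carDao;
    }

    public Car GetCar(int id)
    {
        Contract.Requires(id > 0);
        // Postcondition: callers are guaranteed to get a non-null Car back.
        Contract.Ensures(Contract.Result<Car>() != null);

        var car = _carDao.FindById(id);
        return car ?? new Car();
    }
}
```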

As you can see, we’re sticking a post condition in here that advertises what the method will guarantee when it is finished. In this fashion, client code that may specify its own pre-/post-conditions and invariants has more to work with in terms of proving its own conditions. Also, anyone reading the code can tell what the author intended the method to do (and what side-effects, if any, it might have). This is where your contract doubles as correctness enforcement and documentation. Comments may lie after enough time and changes, but the contracts won’t–you’ll get warnings or runtime errors if that documentation goes out of date.

The final thing that I’ll show in this introductory post is the notion of a class invariant. Let’s take a look at our previous car example, but dressed up a little.
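Roughly this, with the same Car and ICarDao as before:

```csharp
using System.Diagnostics.Contracts;

public class CarService
{
    private readonly ICarDao _carDao;

    public CarService(ICarDao carDao)
    {
        Contract.Requires(carDao != null);
        _carDao = carDao;
    }

    // The object invariant: conceptually checked on the way out of every
    // public method of the class (and of any derived class).
    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        Contract.Invariant(_carDao != null);
    }

    public Car GetCar(int id)
    {
        Contract.Requires(id > 0);
        return _carDao.FindById(id);
    }
}
```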

Here, I’ve added a method decorated with a Code Contracts attribute. The end effect of this is to create an object invariant: essentially, an implied pre- and post-condition for every method in my class. Now, the code is future-proof. If anyone adds methods to this and tries to set _carDao to null, they will get warnings/errors, depending on the mode of Code Contracts. (I’ll go into this in a future post.) If someone derives from this class and tries the same thing, the same result will occur. We’ve locked up the invariant and made it bulletproof. The other nice thing is that if you move this logic into an invariant method, you don’t have to check _carDao for null in all of your methods, cluttering them with useless checks.

In future posts, I intend to cover the different modes of Code Contracts, some more advanced features, some more depth information, its interaction with the Pex utility, and a utility that I’m working on which provides wrappers for C# library classes that enforce the No-Throw guarantee.


WPF and Notifying Property Change

One of the central idioms of WPF development is having presentation tier classes implement the INotifyPropertyChanged interface. Anybody who has done more than a “Hello World” WPF application or two is almost certainly familiar with this and quite probably knows it like the back of his or her hand. As I’ve been learning WPF with C# over the past year (coming from a background of C++ and Java before that), my implementation of this has evolved a bit, and I thought I’d share the progression and my thoughts on why each subsequent implementation is an improvement.

By way of background, for anyone not familiar or looking for a refresher, this interface allows XAML bindings to keep in sync with the C# class. For instance, if you bound a XAML text box to a class property called “MyText”, the class containing MyText would implement INotifyPropertyChanged. This interface consists of just a single event, and properties fire the event when set. This is the mechanism by which the model (the thing being bound to) notifies the GUI that it should ask for an updated value. Without this, the GUI would display whatever value MyText had when it was loaded, and it would ignore subsequent changes that came from anywhere but the user modifying the text.

Here is literally the simplest implementation possible of INotifyPropertyChanged:
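Using a single Text property as the representative example (the class name is arbitrary):

```csharp
using System.ComponentModel;

public class TextViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _text;
    public string Text
    {
        get { return _text; }
        set
        {
            _text = value;
            // Tell any listeners (typically the WPF binding engine) that
            // Text has a new value.
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Text"));
        }
    }
}
```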

So, whenever anyone (GUI or other code) invokes the setter for the Text field, the model class fires this event to notify the GUI (or anyone else listening) that the model’s Text has been changed. A popular, slight improvement on this would be to check that the value has actually been changed (that _text != value) before doing the setting or firing the event. The idea here is to prevent spurious event firing and thus spurious event handling.

The Basic Improvement

The first thing anybody is going to notice after implementing a few properties to back the UI is that each property setter requires an annoying amount of boilerplate. So, any Pragmatic Programmer would find a solution that didn’t involve all of this repetition–abstracting the boilerplate into a common method:
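That looks something like this, with the boilerplate pulled into a NotifyChange() helper:

```csharp
using System.ComponentModel;

public class TextViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _text;
    public string Text
    {
        get { return _text; }
        set { _text = value; NotifyChange("Text"); }
    }

    // The null check and event firing now live in one place.
    private void NotifyChange(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}
```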

Clearly, this is better. You’ve eliminated two lines of code and a conditional from property setters. Amortized over an application, this will make the code more readable, more succinct, and easier to reason about.

What About All of Those “new”s?

The next thing that will probably occur to someone after working with this for a while (or at least it did to me), is that it might not be necessary to create the same object every time a property is set. And that’s really what’s happening–you’re creating a new PropertyChangedEventArgs(), which is handed over to any event handler and then (probably) subsequently discarded. If you have a strange case in which someone is keeping that change event args around, then you can disregard this next optimization, though you might want to inquire why someone is storing that piece of communication artifice rather than whatever it may contain.

So the next solution I was introduced to (I didn’t actually come up with this on my own because I tend to be rather sparing in my use of “static”) is the following:
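Roughly this, with one static PropertyChangedEventArgs per property, and NotifyChange() now taking the pre-built args:

```csharp
using System.ComponentModel;

public class TextViewModel : INotifyPropertyChanged
{
    // One event args instance per property, created once instead of on
    // every set.
    private static readonly PropertyChangedEventArgs TextChangedArgs =
        new PropertyChangedEventArgs("Text");

    public event PropertyChangedEventHandler PropertyChanged;

    private string _text;
    public string Text
    {
        get { return _text; }
        set { _text = value; NotifyChange(TextChangedArgs); }
    }

    private void NotifyChange(PropertyChangedEventArgs args)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, args);
    }
}
```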

Here, we’ve cut further down on the boilerplate in the setters, but, more importantly, we’re not allocating new objects on the heap with each property that’s set. Instead, there’s a static object initialized at application startup. You pay a one-time cost at startup, and then everything is much more efficient after that.

Tweaking the Optimization

After being shown this technique and using it for a while, I felt increasingly like something wasn’t quite right and there was room for improvement. Eventually, I put my finger on two minor gripes. First, the property changed args were being created at application startup, for every property, whether or not anyone ever set it. Pedantically (though sometimes practically), this is not as efficient as it could be. It’s an improvement over creating them on every set, but not as good as creating them the first time you need them. The second thing that bothered me was all of the boilerplate that went along with it. In the first step, we eliminated some boilerplate, and now we’ve added it back into the mix. Instead of sitting in our setter, it now sits above our property.

To counter those two issues, I added a base class from which models (and ViewModels, if they expose properties directly) should inherit. If you don’t like a common ancestor like that, you could always implement this with an interface and a default-behavior extension method on that interface, but I’m not a fan of extension methods for ‘extending’ your own code in your own assembly (extension methods just being syntactic sugar on free-floating static methods).
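A sketch of that base class (the name NotifyingModel is mine, just for this example):

```csharp
using System.Collections.Generic;
using System.ComponentModel;

public abstract class NotifyingModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Event args are created lazily, the first time a given property fires,
    // and reused from the cache after that.
    private readonly Dictionary<string, PropertyChangedEventArgs> _argsCache =
        new Dictionary<string, PropertyChangedEventArgs>();

    protected void NotifyChange(string propertyName)
    {
        PropertyChangedEventArgs args;
        if (!_argsCache.TryGetValue(propertyName, out args))
        {
            args = new PropertyChangedEventArgs(propertyName);
            _argsCache[propertyName] = args;
        }

        if (PropertyChanged != null)
            PropertyChanged(this, args);
    }
}
```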

What’s going on here is that we maintain a cache of EventArgs. Whenever a property change notification fires, the cache is checked for the corresponding event arguments. If they don’t exist, you new them up, but if they do, then this behaves like our static one. The end effect is that, for the life of the object, you get the benefit from the previous example of a single creation and the additional benefit of not taking a huge hit at application startup for all (and some potentially unneeded) properties. This suffers, performance-wise, if you implement it on a lot of short-lived models, because you might then, in effect, be newing all the time. The tradeoff is probably worth it if you have a high ratio of model modification to model creation. In my case, I do.

What About Those Magic Strings?

If you’re like me, the presence of “Text” everywhere in this set of examples has been like a little pebble stuck in your shoe. It’s annoying enough to irritate, but not necessarily quite annoying enough to fix at the moment. However, continuing the silly metaphor, you eventually hit your breaking point, wrench off your shoe, and do something about it. With the magic strings, that came a few months back.

I poked around on the internet some and saw various people’s solutions to this. None of them were quite to my liking, so I cobbled together one of my own from assorted pieces and my own style of coding. I don’t have any links because I honestly don’t recall where I found them and which parts would be attributable to whom. So, just be aware that not all of the thinking was entirely from scratch and that if you find somebody’s code that looks, in part, like this, there’s a good chance that’s where I got part of the idea.
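The heart of it is an overload of NotifyChange() that takes a lambda expression instead of a string; the event args cache from the last step is omitted here to keep the sketch short:

```csharp
using System;
using System.ComponentModel;
using System.Linq.Expressions;

public abstract class NotifyingModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Typesafe overload: callers pass a lambda rather than a string, so a
    // typo in the property name becomes a compile error.
    protected void NotifyChange<T>(Expression<Func<T>> propertyExpression)
    {
        var memberExpression = propertyExpression.Body as MemberExpression;
        if (memberExpression == null)
            throw new ArgumentException("Expression must refer to a property.");

        // Pull the property name out of the expression tree and hand off to
        // the string-based overload.
        NotifyChange(memberExpression.Member.Name);
    }

    protected void NotifyChange(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}

public class TextViewModel : NotifyingModel
{
    private string _text;
    public string Text
    {
        get { return _text; }
        set { _text = value; NotifyChange(() => Text); }
    }
}
```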

This is a lot more involved, and I won’t go into all of the gory details here. But, suffice it to say, I’m taking advantage of a concept called early binding. The “automagic” way that WPF binds its XAML to your properties is through reflection in the framework. It sees {Binding Text} and it goes looking on the data context for a property called “Text”. When it finds that, it invokes ToString() on it, and that’s what appears in the GUI. For notifying change, the same thing happens. It takes “Text” from your event args and goes looking for a match in the XAML.

This is called “late binding.” You are trusting at runtime that the WPF runtime is going to be able to match your two constructs through reflection. A typo, “text” instead of “Text”, for instance, will not stop the application from building or even trigger a compiler warning. The IDE is perfectly happy to let you make as many of these mistakes as you can manage. It won’t even throw a runtime exception. You’ll just get weird results and have to pick through the output window looking for XAML problems (or painstakingly match your XAML to your notification arguments).

With early binding, we eliminate that. The expression that you’re passing to NotifyChange() is typesafe and compiler-checked. This means that you need to pass it an actual property or it will not compile. Now, it’s possible that you’re going to have a “Text” and a “Txet” or “text” property, but it’s not very likely, and, if you do, you may want to reconsider your naming scheme. Since you’re now binding your notification to its property at build time rather than using reflective indirection through the WPF framework, you’re a lot less prone to mistakes.

I should point out here that you are introducing extra reflection, which means you’ll take a bit of a performance hit. However, I’ve never found it to be noticeable (I have never done a time trial on it, though), and I think the tradeoff in whatever performance hit you take is well worth the hours, days, and weeks you don’t lose to tracking down obscure typos in XAML or property notifications months after they happen. Besides, the WPF framework relies heavily on reflection anyway, so you’re just adding to the paradigm slightly. If you’re using WPF, you’re probably not logging your response times in microseconds anyway.

Conclusion

So, that’s it. That’s the evolution of how I notify the GUI that things have changed. If you have a suggestion for improvement, I’d be happy to add another step to the mix. I’m always looking to improve and find slicker, more elegant ways to do things.