DaedTech

Stories about Software

Test Driven Development

All In?

It seems to me that most or many treatises on best practices for software engineering in this day and age advocate for Test Driven Development (TDD). I have read about this methodology, both on blogs and in a book that I ordered from Amazon (Test Driven Development By Example). I have put it into practice in varying situations, and I enjoy the clarity that it brings to the development process once you manage to wrap your head around the idea of writing non-compiling, non-functioning tests prior to writing actual code.

That said, it isn’t a process I prefer to follow to the letter. I have tried, and it doesn’t suit the way I do my best development. I have read accounts claiming that TDD simply takes some getting used to, and that people object to it because they want to “just code,” which makes them less productive downstream in the development process. I don’t think that’s the case for me. I don’t delude myself into thinking that I’m more efficient in the long run by jumping in and not thinking things through; in fact, quite the opposite is true. I prototype with throw-away code before I start actual development. (Architecture and broad design notwithstanding–I don’t architect applications by prototyping. I’m referring to adding or reworking a module or feature in an existing, architected application, or to creating a new, small application.)

Prototyping and TDD

As far as I can tell, the actual process of TDD is agnostic as to whether or not it is being used in the context of prototyping. One can develop a prototype, a one-off, a feature, or a full-blown production application using TDD. My point in this post is that I don’t find TDD to be helpful or desirable during an exploratory prototyping phase. I am not entirely clear as to whether this precludes me from being a TDD adherent or not, but I’m not concerned about that. I’ve developed and refined a process that seems to work for me and tends to result in a high degree of code coverage with the manual tests that I write.

The problem with testing first while doing exploratory prototyping is that the design tends to shift drastically and unpredictably as it goes along. At the start of the phase, I may not know all or even most of the requirements, and I also don’t know exactly how I want to accomplish my task. What I generally do in this situation is to rig up the simplest possible thing that meets one or a few of the requirements, get feedback on those, and progress from there. As I do this, additional requirements tend to be unearthed, shifting and shaping the direction of the design.

During this prototyping phase, manual and automated refactoring techniques are the most frequently used tools in my toolbox. I routinely move methods to a new class, extract an interface from an existing class, delete a class, etc. I’ve gotten quite good at keeping version control up to date with wildly shifting folder and file structures in the project. Now, if I were to test first in this mode of development, I would spend a lot of time fixing broken tests. I don’t mean that in the sense of the standard anti-unit-testing “it takes too long” complaint, but in the sense that a disproportionate number of tests I would write early on would be discarded. And while the point of prototyping is to write throw-away code, I don’t see any reason to create more code to throw away than necessary to help you see the right design.

TDD and Prototyping in Harmony

So, here, in a nutshell, is the process that I’ve become comfortable with for new, moderately sized development efforts:

  1. Take whatever requirements I have and start banging out prototype code.
  2. Do the simplest possible thing to satisfy the requirement(s) I’ve prioritized as highest (based on a combination of difficulty, prerequisites and stakeholder preference).
  3. Eliminate duplication, refine, refactor.
  4. Satisfy the next requirement(s) using the same criteria, generalizing specific code, and continuing to refine the design.
  5. Repeat until a rough design is in place.
  6. Resist the urge to go any further with this and start thinking of it as production code. (I usually accomplish this by doing the preceding steps completely outside of the group’s version control.)

At this point, I have a semi-functional prototype that can be used for a demo and for conveying the general idea. Knowing when to shift from prototyping to actual development is sort of an intuitive rather than mechanical process, but usually for me, it’s the point at which you could theoretically throw the thing into production and at least argue with eventual users that it does what it’s supposed to. At this point, there is no guarantee that it will be elegant or maintainable, but it more or less works.

From there, I start my actual development in version control. This is when I start to test first. I don’t open up the target solution and dump my prototype files in wholesale or even piecemeal. By this point, I know my design well, and I know its shortcomings and how I’d like to address them. I also generally realize that I’ve given half of my classes names that no longer make sense and that I don’t like the namespace arrangements I’ve set up. It’s like looking at your production code and having a refactoring wish list, except that you can (and in fact have to) actually act on it.

So from here, I follow this basic procedure (there’s a small code sketch after the list):

  1. Pick out a class from the prototype (generally starting with the most abstract and moving toward the ones that depend most on the others).
  2. For each public member (method or property), identify how it should behave with valid and invalid inputs and with the object in different states.
  3. Identify behaviors and defaults of the object on instantiation and destruction.
  4. Create a test class in version control and write empty test methods with names that reflect the behaviors identified in the previous steps.
  5. Create a new class in version control and stub in the public properties and methods from the previous steps.
  6. Write the tests, run them, and watch them fail.
  7. Go through each test and do the simplest thing to make it pass.
  8. While making the test pass, eliminate duplication/redundancy and refine the design.

This is then repeated for all classes. As a nice ancillary benefit, doing this one class at a time can help you catch dependency cycles (i.e., if it’s really the case that class A should have a reference to class B and vice versa, so be it, but you’ll be confronted with that and have to make an explicit decision to leave it that way).

Advantages to the Approach

I sort of view this as having my cake and eating it too. That is, I get to code “unencumbered” by testing concerns in the beginning, but later get the benefits of TDD since nothing is actually added to the official code base without a test to accompany it. Here is a quick list of advantages that I see to this process:

  • I get to start coding right away and bang out exploratory code.
  • I’m not writing tests for requirements that are not yet well understood or perhaps not even correct.
  • I’m not testing a class until I’m sure what I want its behavior to be.
  • I separate designing my object interactions from reasoning about individual objects.
  • I get to code already having learned from my initial design mistakes or inefficiencies.
  • No code is put into the official codebase without a test to verify it.
  • There is a relatively low risk of tests being obsolete or incorrect.

Disadvantages

Naturally, there are some drawbacks. Here they are as I see them:

  • Prototyping can wind up taking more time than the alternatives.
  • It’s not always easy to know when to transition from prototype to actual work.
  • Without tests, regressions can occur during prototyping when a violent refactoring sweeps out some of the good with the bad.
  • Without tests, some mistakes in your prototype design might carry over into the official version when you recreate it.

I think that all of these can be mitigated and I firmly believe that the advantages outweigh the disadvantages.

When not to do this

Of course, this method of development is not always appropriate. During a defect-fixing cycle, I think tried-and-true TDD works best. Something is wrong, so write a test that captures the correct behavior and modify the code until it passes; there’s no need to prototype. This process is also inappropriate if you already have all of your requirements and a well-crafted design in place, or if you’re only making small changes. Generally, I do this when I’m tasked with implementing a new feature or project that already has a high-level design and is going to take weeks or months.

Technical Presentations and Understanding the Little Things

An Observation

Today I attended a technical presentation on a domain-specific implementation of some software and a deployment process. The subject matter was relevant to my work, and I watched with interest. While the presentation was, to some degree, dominated by discussion from other attendees rather than pure explanation, I followed along as best I could, even when the discussion was not relevant to me or covered something I already understood.

During the portions of the presentation that were explanation, however, I noticed an interesting trend, and when things went off on tangents, I began to ponder it philosophically. What I noticed was that most of the explanation was procedurally oriented, and that a lot of other presentations tend to be like this as well.

This is what we do…

When I say “procedural,” I mean that people often give presentations and explain in this context: “First we do X, then we do Y. Here are A, B, and C, the files that we use, and here are D, E, and F, the files that we output.” This goes on for a while as someone explains what, in essence, amounts to their daily or periodic routine. Imagine someone explaining how to program: “First, I turn on my computer, then I log in, then I start Eclipse, but first, I have to wait a little while for the IT startup scripts to run. Next…”

Generally speaking, this is boring, but not always. That depends on the charisma of the speaker and the audience’s level of interest in the subject. However, boring or not, these sorts of checklist explanations and presentations are not engaging. They explain the solution to a problem without context, rather than engaging the audience’s brain to come up with solutions along with the presenter and eventually to conclude that the presenter’s approach makes the most sense in context. The result of such a procedural presentation is that you brainlessly come to understand some process without understanding why it’s done, if it’s a good idea, or if it’s even coherent in any larger scheme.

Persuasive Instead of Expository

Remember speech class? I seem to remember that there were three main kinds of speeches: persuasive, expository, and narrative. I also seem to remember that expository speeches tended to suck. I think this holds true for technical presentations in particular–the procedural presentation I mentioned earlier basically boils down to the format “A is true, B is true, C is true, D is true.” Perhaps that seems like an oversimplification, but it’s not, really. We’re programmers, right? What is the logical conclusion of any presentation that says “We do X, then we do Y, then we do Z”?

So, I think that we ought to steer for anything but expository when presenting. Narrative is probably the most engaging, and persuasive the most effective; the two can also be mixed in a presentation. A while back, I watched some of Misko Hevery’s presentations on clean code. One presentation in particular that struck me as effective had to do with singletons (he calls them pathological liars). In that talk, he told a story about setting up a test for instantiating a credit card object and exercising its “charge()” method, only to find out that his credit card had been billed. This improbable story is interesting, and it creates a “Really? I gotta hear this!” kind of feeling. Audience members become engaged by the narrative and want to hear what happens next. (And, consequently, there are probably fewer tangents, since it’s the unengaged audience members who are most inclined to quibble over superior knowledge of procedural minutiae.)

Narrative is effective, but it’s limited in the end when it comes to conveying information. At some point, you’re going to need to be expository in that you’re relating facts and knowledge. I think the best approach is a persuasive one, which can involve narration and exposition as well. That is, I like (and try to give) presentations of the following format: tell a story that conveys a problem in need of solution, explain the “why” of the story (i.e. a little more technical detail about what created the problem and the nuts and bolts of it), explain early/misguided attempts at a solution and why they didn’t work, explain the details of my end solution, and persuade the audience that my solution was the way to go.

This is more or less the format that Misko followed, and I’ve seen it go well in plenty of other presentations besides. It’s particularly effective in that you engage the audience, demonstrate that you understand there are multiple possible approaches, and finally persuade the audience that your approach makes sense and that they would eventually have figured it out on their own because, hey, they’re intelligent people.

Why This Resonates

As the presentation I was watching drifted off topic again via discussion, I started to ponder why I believe this approach is more effective with an audience like developers and engineers. It occurred to me that an explanation of the history of a problem and various approaches to solving it has a strong parallel with how good developers and engineers conduct their business.

Often, as a programmer, you run across someone who doesn’t understand anything outside the scope of their normal procedure. They have their operating system, IDE, build script, etc., and they follow a procedure of banging out code and seeing what happens. If something changes (OS upgrade, IDE temporarily not working, etc.), they file a support ticket. To me, this is the antithesis of what being a programmer is about. Instead of this sort of extreme reactive behavior, I prefer an approach where I don’t automate anything until I understand what it’s doing. First, I build my C++ application from the command line manually, then I construct a makefile, then I move on to Visual C++ with all its options, for instance. By building up in this fashion, I am well prepared for things going wrong. If, instead, all of the building is a black box to someone, they are not prepared. And if you’re using tools on the machine while blissfully ignorant of what they’re doing, can you really say that you know what you’re doing?

However, I would venture a guess that most of the people who are content with the black box approach are not really all that content. It might be that they have other things going on in their lives or a lack of time to get theoretical or a number of other things. Perhaps it’s just that they learn better by explanation as opposed to experimentation, and they would feel stupid asking how building the application they’ve been working on for two years works. Whatever the case may be, I’d imagine that, all things being equal, they would prefer to know how it works under the hood.

So, I think this mode of speech-giving appeals to the widest cross section of programmers. It appeals to the inveterate tinkerers because they always want, nay, demand, to know the “why” and the “how” instead of just the “what.” But it also appeals to the go-along-to-get-along types who are reasonably content with black boxes because, if they’re attending some form of training or talk, they’d rather get the “why” and “how” for once.

So What?

I’d encourage anyone giving a technical talk to keep this in mind. I think that you’ll feel better about the talk and that your audience will be more engaged and benefit more from it. Tell a story. Solve a problem together with the audience. Demonstrate that you’ve considered other ways to approach the issue and that you came to the right decision. Generally speaking, make it a collaborative problem-solving exercise during the course of which you guide the audience to the correct solution.

Belkin USB Dongle and Ubuntu

This is another one of those posts that’s more for my reference, but if anyone else finds it useful, so much the better…

So, a few years ago, I bought some Belkin wireless USB dongles (the F5D7050 model, I believe). I’ve gotten these working with a few different Linux distros, but the one I most commonly use these days is Ubuntu. At the moment, I have two up and chugging along with different versions of Ubuntu, and they’ve been working long enough for me to forget all of the little annoyances in setting them up.

So, I recently blew away a Windows installation on another PC I have lying around and decided to put Ubuntu and Gmote server on it to take a stab at making it a media PC for hooking up to one of my TVs. I’m thinking of starting with a combination of Netflix/Hulu/Gmote and kind of going from there as the spirit moves me, tying things in with my home automation and personal music/movie collection. But, I digress.

Point is, I got Ubuntu installed and configured to my satisfaction in my office with a cat5 connection. I figured, no problem, I’ll just poke around with ndiswrapper like I’ve done in the past. Well, it took me two hours to figure out what I was doing wrong, and I intend to document it here so that I’ll have a fighting chance of wasting less time the next time I have to do this.

I did remember enough that I got the part about using ndiswrapper right. That is, I had played around with the drivers that ship with Ubuntu, and none of them worked very well for these dongles. So ndiswrapper is the way to go. I also recalled that I had to pop the dongle CDs into my drive and pull the driver and the .inf file onto the machine locally to install with ndiswrapper. So I did this, found that there was no ndiswrapper, and remembered that I had to do a “sudo apt-get install ndiswrapper-utils”. Well, this also didn’t work. Apparently, there’s some new-fangled thing going on with ndiswrapper and the package manager, so I actually had to go through the package manager GUI to get this working, but it did, eventually.

I installed the driver with “ndiswrapper -i rt73.inf” and thought I was up and running. (Of course, to complicate matters, I have two different CDs with two different sets of Belkin drivers, and I never thought to label the stupid dongles as to which went with which, so there was a bit of extra complication here.) But no dice. I picked through dmesg and some of the logs in /var/log and saw a lot of cryptic driver messages. There’s nothing like “deauthenticating by local choice reason 3” or a message about the dongle’s EEPROM to make you question your sanity.

Nevertheless, I could tell that I was tantalizingly close. The dongle’s light was blinking furiously, and I could see that it was actually connecting. This is no small miracle, given that my home network is non-broadcasting, encrypted with AES, static IP, and probably some other corner cases I’m forgetting. The problem was that it was also disconnecting–by “local choice,” apparently. I wasn’t sure who on Earth was making this choice, but I suppose that’s life.

As it turns out, the problem was that Ubuntu loads competing kernel modules for these drivers by default. At the end of the day, I needed to add the following to /etc/modprobe.d/blacklist:
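(For an rt73-based dongle, the conflicting modules are typically the in-kernel Ralink drivers; the exact names below are an assumption and may vary by kernel version.)

    blacklist rt2500usb
    blacklist rt73usb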

And that sorted it out. I had spent the entire time racking my brain for what I wasn’t loading or configuring properly, and it didn’t dawn on me for those couple of hours what else might be loading. I’m still not entirely clear whose choice the local disconnection was, but I suppose, in the end, it doesn’t matter.

Also, for good measure, you’ll probably want to make sure the ndiswrapper kernel module loads on boot. That can be accomplished simply by editing /etc/modules and adding “ndiswrapper” on its own line at the bottom of the file.

Cheers.

Precondition and Invariant Checking

The Problem

If you find yourself immersed in a code base that sometimes or often plays fast and loose with APIs and internal contracts between components, you might get into the habit of writing a lot of defensive code. This applies to pretty much any programming language, but I’ll give examples here in C#, since that’s what I code in most these days. For example, this is probably familiar:
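
Something along these lines, where every public method opens with a wall of guard clauses (the class and parameter names here are purely illustrative):

    using System;
    using System.Collections.Generic;

    public class OrderProcessor
    {
        public void ProcessOrder(string customerId, IList<string> lineItems)
        {
            if (customerId == null)
            {
                throw new ArgumentNullException("customerId");
            }
            if (lineItems == null)
            {
                throw new ArgumentNullException("lineItems");
            }
            if (lineItems.Count == 0)
            {
                throw new ArgumentException("Order must contain at least one line item.", "lineItems");
            }

            // ...and only now, the handful of lines that actually do something.
        }
    }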

That’s an awful lot of boilerplate for what might be a very simple method. Repeating this all over the place may be necessary at times, but that doesn’t make it pretty or desirable. In an earlier post, I wrote about Microsoft’s Code Contracts as an excellent way to make this pattern a lot more expressive and manageable in terms of lines of code. However, for those times that you don’t have Code Contracts or the equivalent in a different language, I have created the pattern I’m posting about here.

Poor Man’s Code Contracts

As I got tired of writing the code above on a project that I’m currently working on where we do not bring in the Code Contracts DLL, I started thinking about ways to make this sort of checking less verbose. The first thing I did was to start changing my classes on an individual basis to look like this:
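
A rough sketch of what that first pass looked like; MyClass and DoSomething() here are stand-ins, but the shape of the generic helper is the point:

    using System;

    public class MyClass
    {
        public void DoSomething(string customerId, object somethingElse)
        {
            VerifyArgumentOrThrow(customerId);
            VerifyArgumentOrThrow(somethingElse);

            // ... actual method logic ...
        }

        // One null check to rule them all, at least within this class.
        private void VerifyArgumentOrThrow<T>(T argument) where T : class
        {
            if (argument == null)
            {
                throw new ArgumentNullException("argument");
            }
        }
    }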

If you’re not familiar with the concept of generics, the basic idea is similar to templating in C++: you define a class or a method that takes a type to be defined later, and it performs some operations on that type. Whenever you have List<Foo>, you’re using the List class, whose signature is actually List<T>. The “where” clause on the method just puts some constraints on what types the compiler will allow–here I’m restricting it to reference (i.e. class) types, since value types are never null, so the check wouldn’t be appropriate for them. If you try to pass an int to that method, the compiler will complain.

Anyway, that implementation doesn’t save a lot of code here, but imagine if MyClass had 10 methods. And imagine if there were 400 MyClass-like classes. Then, the savings in terms of LOC really add up. I was pretty happy with this for a few classes, and then my code-smell sense started telling me that I shouldn’t be writing that generic method over and over again.

In the interest of DRY (Don’t Repeat Yourself)

So, what to do? For problem solving, I usually try to think of the simplest thing that will work and then find its faults and ways to improve on it. In this case, the simplest thing that would work would be to make VerifyArgumentOrThrow() a static method. Then I could call it from anywhere and it would only be written once.

Generally, I try to avoid static methods as much as possible because of their negative effect on testability and the fact that they’re procedural in nature. Sometimes they’re unavoidable, but I’m not a big fan. However, I generally try not to let dogma guide me, so instead I tried to think of the actual downside of making this method static.

What if I want a behavior where, sometimes, depending on context, DoSomething() doesn’t throw an exception? I still want to verify/validate the parameters, but I just don’t want an exception (maybe I want to log the problem instead, or maybe I have some kind of backup strategy). Well, with a static method, I would have to define another static method and then have the class determine which method to use. It would have to be passed some kind of flag or else read some kind of global state to see which handling strategy was desired. As more verification strategies emerged, this would get increasingly rigid and ugly. That was really all of the justification that I needed–I decided to implement an instance class.

The ‘Finished’ Product

This solution is what I’ve been using lately. I’m hesitant to call it finished because like anything else, I’m always open to improvements. But this has been serving me well.
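
Here is a minimal sketch of the scheme, assuming a base ArgumentValidator with virtual members and a MyClass that takes one as a swappable dependency; the specific signatures are illustrative:

    using System;

    public class ArgumentValidator
    {
        public virtual void VerifyArgumentOrThrow<T>(T argument) where T : class
        {
            VerifyArgumentOrThrow(argument, "Argument cannot be null.");
        }

        public virtual void VerifyArgumentOrThrow<T>(T argument, string message) where T : class
        {
            // Default behavior: throw. Inheritors can log, supply defaults, etc.
            if (argument == null)
            {
                throw new ArgumentNullException("argument", message);
            }
        }
    }

    public class MyClass
    {
        // Swappable on the fly -- inject a LoggingValidator, a defaulting validator, or a mock.
        public ArgumentValidator Validator { get; set; }

        public MyClass() : this(new ArgumentValidator()) { }

        public MyClass(ArgumentValidator validator)
        {
            Validator = validator;
        }

        public void DoSomething(string customerId)
        {
            Validator.VerifyArgumentOrThrow(customerId, "DoSomething() needs a customer id.");

            // ... actual method logic ...
        }
    }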

With this scheme, you supply some kind of default validation behavior (and give clients the option to specify an exception message). Here, the default is to throw ArgumentNullException. But you can inherit from ArgumentValidator and do whatever you want. You could have a LoggingValidator that overrides the methods and writes to a log file. You could have an inheritor that throws a different kind of exception or one that supplies some kind of default (maybe creates a new T using Activator). And you can swap all of these in for one another on the fly (or allow clients to do the same, the way MyClass is), allowing context-dependent validation and mocking for test purposes.

Other Ideas

Another thought would be to have IArgumentValidator and a series of implementers. I didn’t choose to go this route, but I see nothing wrong with it whatsoever. I chose the inheritance model because I wanted to be sure there was some default behavior (without resorting to extension methods or other kinds of compiler trickery), but this could also be accomplished with an IArgumentValidator and a default implementer. It just puts a bit more burden on the clients, as they’ll need to know about the interface as well as know which implementation is the default (or, potentially, supply their own).

I’ve also, at times, added more methods to the validator. For instance, in one implementation, I have something called ISelfValidating for entity objects that allows them to express that they have a valid/invalid internal state, through an IsValid property. With that, I added a VerifyValid(ISelfValidating) method signature and threw exceptions on IsValid being false. I’ve also added something that verifies that strings are not null or empty. My only word of caution here would be to avoid making this validator do too much. If it comes down to it, you can group validation responsibilities into multiple kinds of validators with good internal cohesion.
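
As a rough illustration of that kind of extension, building on the ArgumentValidator sketch above (EntityValidator and the member details are hypothetical):

    using System;

    public interface ISelfValidating
    {
        bool IsValid { get; }
    }

    public class EntityValidator : ArgumentValidator
    {
        public virtual void VerifyValid(ISelfValidating entity)
        {
            if (entity == null || !entity.IsValid)
            {
                throw new InvalidOperationException("Entity is missing or in an invalid internal state.");
            }
        }

        public virtual void VerifyNotNullOrEmpty(string argument)
        {
            if (string.IsNullOrEmpty(argument))
            {
                throw new ArgumentException("String argument cannot be null or empty.");
            }
        }
    }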
