Stories about Software


TDD Chess Game Part 4: Getting Organized

Alright, welcome back to this series.

A couple of housekeeping things:

  1. I have bitten the bullet and used the Visual Studio White theme along with 14 point font to record, so the videos going forward should be easier to watch. It’s a little surreal to work with, but c’est la vie.
  2. The source code is now available on GitHub for you to follow along. The coding usually runs ahead of my publication schedule, so if you want to see the code from a given video, you may have to grab a slightly earlier version.

Here’s what I accomplish in this clip:

  • Started using a little todo list to keep track of what I’ve done and what I need to do.
  • Cleaned up code as reported by static analysis tools.
  • Pulled some production classes into their own namespaces and out of the test classes.
  • Defined an abstract Piece class.
  • Defined a second inheritor, “Rook,” for Piece.
  • Defined a bit of dumb functionality for Rook’s “GetMovesFrom” to get it started.
  • Implemented the ability for a pawn to move two spaces on its first move.
  • Defined a piece concept of “HasMoved” (albeit just for Pawn). A rough sketch of where the class structure lands appears after this list.
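
Since the actual code lives in the video and the GitHub repo, here’s a minimal sketch of roughly where the class structure ends up, with details filled in for illustration (the exact signatures and the BoardCoordinate type may differ from what’s in the repo):

    using System.Collections.Generic;
    using System.Linq;

    // Abstract base for all pieces; GetMovesFrom is the polymorphic hook.
    public abstract class Piece
    {
        public bool HasMoved { get; set; }

        public abstract IEnumerable<BoardCoordinate> GetMovesFrom(BoardCoordinate start);
    }

    public class Rook : Piece
    {
        // Deliberately dumb starter implementation, to be driven out by later tests.
        public override IEnumerable<BoardCoordinate> GetMovesFrom(BoardCoordinate start)
        {
            return Enumerable.Empty<BoardCoordinate>();
        }
    }

    public class Pawn : Piece
    {
        public override IEnumerable<BoardCoordinate> GetMovesFrom(BoardCoordinate start)
        {
            yield return new BoardCoordinate(start.X, start.Y + 1);
            if (!HasMoved)
                yield return new BoardCoordinate(start.X, start.Y + 2); // two spaces on the first move
        }
    }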

And here are the lessons to take away:

  • Keeping a list of smallish things you want to change can help you keep track of what needs to be done without distracting you too much. (I picked this technique up from Kent Beck’s “Test-Driven Development: By Example.”)
  • If you’re using NCrunch, use whether the green dots are dark or bright as a quick way to tell if the code is compiling.
  • Gamify cosmetic issues. If “Optimize Namespaces” and things like that are important, make violations ugly and distracting in the IDE and you’ll get annoyed and fix them, whereas you probably wouldn’t bother otherwise.
  • It’s okay to write stupid tests if you do so knowing that you’ll fix them. Finding ways to always write a test to change production code is good for practicing the TDD discipline until it starts to become second nature.
  • It’s okay to write a test that causes a non-compile failure and then to do a good bit of work to get everything back to compiling/passing.
  • I’ve mentioned this previously, but it bears repeating: it’s okay to reuse a test (especially a stupid one) to get a failing test.
  • If you weren’t aware of the C# “yield” keyword and deferred execution, it’d be a good idea to familiarize yourself with them (see the quick example after this list).
  • Force yourself to avoid copying and pasting as much as possible, even when it seems dumb. Feeling the pain of re-typing things will make it painfully obvious when you’re duplicating code and could do something better.
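
On that yield point, here’s a quick, self-contained illustration (my own example, not code from the video) of deferred execution in action:

    using System;
    using System.Collections.Generic;

    class YieldDemo
    {
        static IEnumerable<int> Numbers()
        {
            Console.WriteLine("Computing first number...");
            yield return 1; // execution pauses here until the caller asks for more
            Console.WriteLine("Computing second number...");
            yield return 2;
        }

        static void Main()
        {
            var numbers = Numbers(); // nothing is printed yet; execution is deferred

            foreach (var number in numbers)
                Console.WriteLine(number); // the method body runs lazily, as items are pulled
        }
    }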

And, here’s the clip:


Implied Acceptance Criteria

I’m on a Scrum team these days, serving as Product Owner, and I was watching developers do functionality demos. This has generally taken the form of them walking me through the happy path, and then me taking it for a test drive and trying to put myself in the shoes of a user, ‘cleverly’ typing “twenty five” into a text box that clearly wants an integer to see what happens. Yellow screen of death? Generic error message? Scornful validation message such as “dude, what’s wrong with you?” Helpful validation message such as “your response for how many children you have cannot be a negative number, a decimal, or typeset nonsense”?

If what happens isn’t what I think should happen, it’s then off to check out the acceptance criteria/tests. Did it say anything in there about a validation message? Should it have to? Are there things that ought to be “universal acceptance criteria?”

To me, the answer to that question is, “sort of, but I’d prefer to think of them more as default/implied ACs.” And so I started to enumerate them on an internal wiki, kind of as an exercise for myself to see if there were certain things that I think should always be true. Maybe these are like the equivalent of code contracts, but contracts between the developer and the user. Here are some ideas that I came up with:

  • When the user supplies invalid input, a message should be shown explaining why the input was not accepted (see the small sketch after this list).
  • The way to get back to where you just were should always be available and obvious (e.g. “cancel” button or “back” link or something).
  • Nothing that happens should ever result in a “yellow screen” or whatever equivalent indicates to the user that whatever is going wrong is something you never dreamed of (for production release, anyway — I have no issues with crashes like this in internal or beta tests as part of a “fail early” strategy)
  • If an exception occurs that the user would understand, it is explained to the user (e.g. “The connection to the database was lost.”)
  • There is never a point during the normal course of usage where the user thinks, “should I just, like, wait, or is it frozen?”
  • If it’s possible to prevent the user from doing the wrong thing, the user is prevented from doing the wrong thing (e.g. if user clicks “save” and that’s a long running operation, “save” button is disabled until it’s okay to click again)
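
To make that first item concrete, here’s a tiny sketch (hypothetical names and messages, not from any real codebase) of input validation that explains itself:

    // Returns null when the input is accepted; otherwise, a message explaining why not.
    public static string ValidateNumberOfChildren(string input)
    {
        int numberOfChildren;
        if (!int.TryParse(input, out numberOfChildren))
            return "Your number of children must be a whole number (e.g. 2, not \"twenty five\").";
        if (numberOfChildren < 0)
            return "Your number of children cannot be a negative number.";
        return null;
    }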

These are some things that I think ought to be true of your application’s behavior unless there is a good reason to deviate. These are things that I generally think of when I’m working, even if nothing is spelled out explicitly. I think of these and I think of more, depending on the application context (e.g. is this something that should be permission controlled or is this something where the verbiage might be changed later?) Over the years, I’ve developed a pretty lengthy mental checklist for what I consider good application experience, even without consciously realizing that I’ve done so for the most part. But I think it’s important for us to try to grow this evoked set in our minds as we go.

So what about you? Do you have anything you’d add to this list — any glaring omissions? I’m actually looking to tabulate some of these things into a working document and perhaps expand it to cover other kinds of minimal checklists for various scenarios (such as testing your own code via running the application, finding and fixing a bug, etc). Perhaps when I get it more fleshed out at some point, I’ll revisit and post again, but either way, I’d be interested to hear what you consider to be table stakes/minimum standards for how your applications interact with users.


Getting Started on the Roslyn Journey

It’s not as though it’s new; the Roslyn CTP was announced in the fall of 2011, and people have been able to play with it since then. Roslyn is a quietly ground-breaking concept — a set of compilers that exposes compiling, code modeling, refactoring, and analysis APIs. Oh, and it was recently announced that the tool would be open source, meaning that all of you Monday-morning-quarterback language authors out there can take a crack at implementing multiple inheritance or whatever other language horrors you have in mind.

I have to say that I, personally, have little interest in modifying any of the language compilers (unless I went to work on a language team, which would actually be a blast, I think), but I’m very interested in the project itself. This strikes me as such an incredible, ground-breaking concept, and I think a lot of people are just looking at this as a curiosity for real language nerds and Microsoft fanboys. The essential value in this offering, to me, is the standardizing of code as data. I’ve written about this once before, and I think that gets lost in the shuffle when there’s talk about emitting IL at runtime and infinite loops of code generation and whatnot. Forget the idea of dispatching a service call to turn blobs of text into executables at runtime, and let’s agree to talk later about the transformative notion of regarding source code as entity collections rather than instruction sheets, scripts, or recipes.

But first, let’s get going with Roslyn. I’m going to assume you’ve never heard of this before, and I’m going to take you from that state of affairs to doing something interesting with it in this post. In subsequent posts, we’ll dive back into what I was driving at philosophically in the intro about code as data.

Getting Started

(Note — I have VS2013 on all my machines and that is what I’ve used. I don’t know whether any of this would work in Visual Studio 2012 or earlier, so buyer beware.)

First things first. In order to use the latest Roslyn bits, you actually need a fairly recent version of NuGet. This caught me off guard, so hopefully I’ll save you some research and digging. Go to the “Tools” menu and choose “Extensions and Updates.” Click on the “Updates” section at the left, and then click on “Visual Studio Gallery.”


If you’re like me, your version was 2.7.something, and it needs to be 2.8.1 or higher. This update will get you where you need to be. Once you’ve done that, you can simply install the API libraries via the NuGet command line.

With that done, you’re ready to download the necessary installation files from Microsoft. Go to http://aka.ms/roslyn to get started. If you’re not signed in, you’ll be prompted to sign in with your Microsoft ID (you’ll need to create one if you don’t have one) and then fill out a survey. If you get lost along the way, your ultimate destination is to wind up here.

At this point, if you follow the beaten path and click the “Download” button, you’ll get something called download.dlm that, if your environment is like mine, is completely useless. So don’t do that. Click the circled “download” link indicated below to get the actual Roslyn SDK.


Once that downloads, unpack the zip file and run “Roslyn End User Preview” to install the Roslyn language features. Now you can access the APIs and try out the preview’s interesting new language features.

That’s all well and good for dog-fooding IDE changes and previewing new language features, but if you want access to the coolness from an API perspective, it’s time to fire up NuGet. Open up a project, then open the NuGet command line, and type “Install-Package Microsoft.CodeAnalysis -Pre”.

Once that finishes up, make your main entry point consist of the following code:
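
(The original post showed this code in a screenshot. What follows is my reconstruction of that kind of program, assuming the Microsoft.CodeAnalysis.CSharp syntax APIs from the preview; point sourceCodePath at whatever file you want to inspect.)

    using System;
    using System.IO;
    using System.Linq;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    class Program
    {
        static void Main()
        {
            var sourceCodePath = @"C:\SomeProject\SomeClass.cs"; // file to inspect

            // Ask the actual compiler to parse the file into a syntax tree.
            var tree = CSharpSyntaxTree.ParseText(File.ReadAllText(sourceCodePath));

            // Query the tree for every field declaration it contains.
            var fields = tree.GetRoot().DescendantNodes().OfType<FieldDeclarationSyntax>();

            foreach (var field in fields)
                foreach (var variable in field.Declaration.Variables)
                    Console.WriteLine(variable.Identifier.Text);

            Console.ReadLine();
        }
    }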

At this point, if you hit F5, what you’re going to see on the screen is a list of the fields contained in the class that you specify as your “sourceCodePath” variable (at least you will with the happy path — I haven’t tested this extensively to see if I can write classes that break it). Now, could you simply write a text parser (or, God forbid, some kind of horrible regex) to do this? Sure. Are there C# language modeling utilities like a Code DOM that would let you do this? Sure. Are any of these things the C# compiler? Nope. Just this.

So think about what this means. You’re not writing a utility that uses a popular C# source code modeling abstraction; you’re writing a utility that says, “hey, compiler, what are the fields in this source code?” And that’s pretty awesome.

My purpose here was to give you a path from “what’s this Roslyn thing anyway” to “wow, look at that, I can write a query against my own code.” Hopefully you’ve gotten that out of this, and hopefully you’ll go forth, tinker, and then you can come back and show me some cool tricks.


NCrunch and Continuous Testing: The Must-Have Setup

Most of this post was taken from the transcript of my Pluralsight course on NCrunch. If you are interested in watching the course but are not a Pluralsight subscriber, feel free to email me or leave a comment requesting a trial, and I’ll get you a 7-day subscription to check it out.

Understanding the Legitimate, Root-Cause Objection to TDD

In my experience, there are three basic “camps” of reactions to the concept of test driven development (TDD) from those not experienced with it: willing students, healthy skeptics and reactionary curmudgeons. The first group is basically looking for a chance to practice and needs no convincing. The last group will have to be dragged along, kicking and screaming, and so there’s no persuading them without the threat of negative consequences. It is the middle group that tends to have rational objections, some of which are well-founded and others of which aren’t so much. A lot of the negative reaction from this group is the result of reacting to the misconceptions that I mentioned in this post about what TDD is and isn’t. But even once they understand how it works, there are still some fairly common and legitimate objections that are not simply straw man arguments.

  1. The most common and prevalent objection is that coding this way means that you’re doing a lot more work. You’re taking more time and writing more code and people don’t necessarily see the benefit, especially in cases where they already know what code they want to write.
  2. Many of the misconception objections and much of the other inexplicable resistance are really the result of people simply not knowing how to write tests or practice TDD, and perhaps at times being reluctant to admit it. Others may freely admit it. Either way, the objection is that TDD, like any other discipline, would take time to learn and require an investment of effort.
  3. There is also more code that is going into a project since you now have an additional test class for each single class you would otherwise have created. More code means more maintenance time and effort.
  4. Many astute observers also realize that a lot of legacy code, particularly that involving large-work constructors, singletons, and static state is very hard to test, making attempts to do so effort-intensive.
  5. And, along the same lines, they also realize that there would be more effort required than simply learning how to do TDD – it would also mean learning different design techniques such as dependency injection, polymorphism, and inversion of control.

When you consider all of these objections, they all have a common thread. At the core of it, they’re really all variants on the theme of not having enough time. Writing the tests, maintaining the test code, learning new ways of doing things, and applying them to new and old code are all things that take time, and for most developers, time is precious. Someone selling TDD is a lot like someone selling you on a 401K: they’re convincing you that sacrificing now is going to be worth it later and asking you to take this, to some degree, on faith.

Could TDD be better?

Justifying the adoption of TDD to a healthy skeptic hinges largely on demonstrating that it provides a net benefit in terms of time, and thus cost. So how can these objections be reconciled and the concerns addressed?

Well, first up are the learning-curve-oriented objections. And the truth is that there’s no way around this one being a time sink. Learning how to do TDD and learning how to write testable code are going to take time, no matter what. If you do not have the time to learn, this is a perfectly valid objection, but only in the shorter term. After all, we work in an industry where change is the only constant and learning new languages, frameworks, and methodologies is pretty much table stakes for staying relevant.

Regarding development time overall, a very common argument made by TDD proponents is that the practice saves time over the long haul. This is reminiscent of the parable of the tortoise and the hare, where the TDD practitioner is a tortoise plodding along, getting everything right, and the hare is generating reams of code quickly but with mistakes. The hare will declare himself done more quickly, but he’ll spend a lot more time later troubleshooting, reading log files, debugging, and fixing errors. The tortoise may not finish as quickly, but when he does, he truly is done.

But what about in the short term? Is there anything that can be done to make things go more quickly in the short term for TDD practitioners? Could we strap a rocket pack to the tortoise and make him go faster than the hare while preserving his accuracy?


Speeding up the Feedback Loop

What if I told you a story? What if I told you that you could write code and know whether or not it was working nearly instantaneously? In this world of development, you don’t have to wait while your application starts up, and then navigate through various user interface screens to get to the action that will trigger the bit of code you want to verify. There is no more repetitive clicking and typing and waiting for screens to load. In fact, in this world you don’t even need to build your project or compile your code. All you need to do is type and see, as you’re typing, whether or not the changes you’re making are right. And, you can see a visual metric for how much confidence you can have in your changes by virtue of how much your code is covered by the unit tests.

Does that story sound too good to be true? Well, I’ll admit that it does sound pretty good, but I’ll let you in on a little secret – it is true. There is a name for this paradigm, and it’s called “continuous testing.” And there are various tools out there for different platforms that make it a reality, right now as we speak.

To understand the magic of continuous testing, it’s essential to understand one of the most important, but often overlooked, concepts in computer science. I’m talking about the feedback loop. At its core, programming is a series of experiments. Whenever you approach a programming task, you have a code base that does something, and you have a goal to make it do something different or new. To achieve this goal, you identify intermediate behaviors that you’d like to see to mark progress, and then you make changes that you think will result in those behaviors. Then, you run the application to see if what you thought would happen does, in fact, happen.

For example, perhaps you want to have your application display customer information stored in a database to the screen when the user clicks a certain button. You might first say “forget the database – let’s just get the button click to result in some hard-coded value being displayed,” and then set about altering the code to make that happen. When you’d made your changes to the code, you’d run the program and click that button to see what had happened.
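
In test form, that first experiment might look something like this (a hypothetical xUnit-style test with invented names, just to make the loop concrete):

    using Xunit;

    public class CustomerScreenTests
    {
        [Fact]
        public void Clicking_show_customer_displays_hardcoded_name()
        {
            var viewModel = new CustomerViewModel(); // hypothetical class under test

            viewModel.ShowCustomer(); // stands in for the button click’s handler

            // For now, any hard-coded value proves the wiring works; the database comes later.
            Assert.Equal("Jane Doe", viewModel.DisplayedCustomerName);
        }
    }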

Considered closely, this process is actually a lot like the scientific method: (1) observe, (2) hypothesize, (3) predict, (4) experiment. For step (1) you read the code. For step (2) you hypothesize what you’ll need to do to the code. For step (3) you predict the outcome of your changes, and for step (4) you make the changes and observe the results. The amount of time that it takes to perform an iteration of your coding version of the scientific method is what I’m calling the “feedback loop.” How long does it take for you to have an idea, implement it, and verify that it had the desired effect?


In the early days of programming when the use of punch cards was common, feedback times were very lengthy. Programmers would reason carefully about everything that they did because feedback times were extremely slow, meaning mistakes were very costly. While many improvements have been made across the board to feedback times, situations persist to this day when the feedback loop is excruciatingly slow. This includes long running or resource-intensive applications and distributed systems with high latency. With such systems, programmers on projects often devise schemes to try to shorten the feedback loop, such as mocking out bottlenecks to allow fast verification of the rest of the system.

What they’re really trying to do is shorten the feedback loop to allow themselves to be more productive. When a great deal of time elapses between trying something and seeing what happens, attention tends to wander to distractions like Twitter or Reddit, exacerbating the inefficiency of this already-slow process. Developers innately understand this problem and are frustrated by the long build and run times of behemoth, slow-running applications.

To combat this problem, developers intuitively favor faster schemes. Ask yourself whether you prefer to work on a small project that builds quickly or a large one. How about a slow test suite versus a fast one? By speeding up the feedback loop you trade frustration and wandering attention span for engagement and a feeling of accomplishment. Techniques like relying on fast-running unit tests and keeping modules small and decoupled help a great deal with this, but we can get even faster.

If short feedback is good, immediate feedback is definitely better. Anyone who has done extensive work at a command line or, in general, used a Read-Evaluate-Print Loop (REPL) understands this. Attention does not wander at all during a session like that. Historically, such a thing wasn’t possible in a compiled language, but with the advent of multicore systems and increasingly sophisticated compiler technology, times are changing. It is now possible to have a build running in the background of an IDE like Visual Studio even as you modify the code.


If you’ve been watching my series on building a chess game using TDD, you couldn’t help but notice the red and green dots on the left side of the code window, since they catch the eye. What you were seeing was the tool, NCrunch, in action. Now it’s time to get properly acquainted.

NCrunch is a software product by Remco Software, written by software developer Remco Mulder, who owns the company. It is a tool written specifically to allow developers to practice continuous testing in Visual Studio. NCrunch is a commercial product with a tiered pricing model and full-blown customer support. And it operates as a plugin to Visual Studio, so there is no need to integrate or operate any kind of standalone application. It drops right into a tool with which you are already familiar.

For the first several years of its existence, NCrunch was free, since it was in an extended state of Beta release. During the course of these years, it grew a substantial and loyal user base. In the fall of 2012, Remco decided to issue version 1.0 and release NCrunch as a full, commercial product with a licensing model and production support. It is now on version 2.5 and is most certainly an excellent, commercial-grade product that is worth every penny.

As I write my code using this tool, you may notice things that I rarely or never do. I rarely, if ever, run an application. I rarely, if ever, use the unit test runner. I rarely even compile my code, though I do this sometimes simply because I happen to be quite accustomed to looking at compiler feedback in the errors window. Continuous testing tools like NCrunch may have been a novelty when they came out, but I would argue that they’re rapidly becoming table stakes for efficient development these days.

Before NCrunch, the viability of TDD for me was tied up in the idea that investing extra time up front meant I wouldn’t later be revisiting my code (debugging, tweaking, fixing) when I was further removed from it and everything would be more time consuming. With NCrunch, I don’t even need to make that case. Now, if you took TDD and NCrunch away, my development process would be substantially slower as I sat there waiting for the application to compile or the test runner to do its thing.

If you don’t have this, get it. You won’t be sorry. Forget clean code, unit testing, TDD, all of that stuff (well, not really — but indulge me here for a second). Just get this setup for the tight feedback loop alone. There is nothing like the feeling of productivity you get from typing a line of code and knowing in less than a second, without doing anything else, whether the change is what you want. That incredible power makes it all worth it — the learning curve of the tool, the cost of the tool, adopting TDD, learning to unit test. It’s like getting a car with 500 horsepower and feeling that acceleration; it ruins you for anything less.


TDD Chess Game Part 3: Stumbling and Refactoring

My apologies. I meant to be a little more regular in this series, but I stumbled a bit out of the gate, as I got into the home stretch of my next Pluralsight course. Now that the course is delivered (not released yet – in the review/edit phase), I have some more time, so I’m planning to pick this back up and go with it a little more regularly.

One interesting thing that arises out of these “fits and starts” kinds of passes at it is that it mimics an actual, common development scenario: spotty maintenance coding. What I mean is, so many TDD series that you’ll watch, or coding dojos/exercises in which you’ll participate, have the premise that you have some fixed length of time during which to pay complete attention. But in this series, I’m sort of poking at it for 10 minutes here and 20 minutes there, very seriously mimicking an environment where you’re plugging a lot of holes, thrashing a bit and saying, “where was I and what was I doing here?”

That’s evident in this clip, probably a little too much for me really to call it polished. And as such, I don’t accomplish a ton, but here’s what I did accomplish (not necessarily in order):

  • Tied up a loose end by getting rid of the last of the primitive-obsession passing of x and y coordinate ints.
  • Implemented sanity precondition checks on the inputs to Board’s AddPiece() method, in terms of where pieces can be placed.
  • Pushed functionality for validating coordinates into the coordinate itself.
  • Eliminated duplication in the validation with a refactoring (see the sketch after this list).
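
Here’s a rough sketch of where that refactoring lands (my reconstruction; the actual names and checks are in the GitHub repo and may differ):

    public struct BoardCoordinate
    {
        private readonly int _x;
        private readonly int _y;

        public int X { get { return _x; } }
        public int Y { get { return _y; } }

        public BoardCoordinate(int x, int y)
        {
            _x = x;
            _y = y;
        }

        // Validation now lives with the coordinate instead of being scattered through Board.
        public bool IsCoordinateValidForBoardSize(int boardSize)
        {
            return IsDimensionValidForBoardSize(X, boardSize) &&
                   IsDimensionValidForBoardSize(Y, boardSize);
        }

        // Extracted helper that eliminates the duplicated X/Y range checks.
        private static bool IsDimensionValidForBoardSize(int dimension, int boardSize)
        {
            return dimension > 0 && dimension <= boardSize;
        }
    }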

And, here are some lessons to take away from this, both instructional from me and by watching me make mistakes:

  • After a conceptual refactoring, such as replacing multiple primitives with a type, take a look around to make sure you cleaned up all instances of the former.
  • When you’re not really sure what to do next (i.e. “coder’s block” or “paralysis by analysis”), implement some sanity checks for preconditions/invariants. This might jolt you into some next steps as you do it.
  • Make ABSOLUTELY SURE that a test goes red when you think it should go red. Not understanding why a unit test is passing is just as bad as not understanding why it’s failing. In both cases, it means you don’t understand what your code is doing. Stop everything and get your brain in sync with the code immediately to save yourself a lot of frustration later. (See “programming by coincidence,” which I saw coined in the book “The Pragmatic Programmer” — and then, don’t do it!)
  • You’re going to make mistakes. Often dumb ones. The beauty of TDD and its fast feedback loop is to prevent them from festering and being worse later.
  • This is more of an editorial/opinion take, but I’ve more recently gravitated toward allowing my TDD to include what might be called “integration tests” (tests that exercise the interaction between two classes). As long as the test makes sense from a behavioral standpoint and provides clarity, I think it’s fine. Some, particularly those in the BDD camp, even argue that this is preferred, and that your tests should really only go through the outer API of your module/application.
  • Eliminate duplication, however trivial and however subtle. If you see repetition of any kind, you can probably extract a method. Some productivity tools and IDEs will even help you locate possible duplication.

Finally, a few notes on the video itself (and resultant code):

  • For those of you who suggested a larger font size, look for that in part 4. I apologize, but I had actually recorded the video for this already when I was taking suggestions. In the production, I did zoom to a slightly smaller area, so we’ll see if that helps any.
  • I had one commenter express a preference for a white background instead of the VS Dark theme that I use. White workspaces give me a headache, so I darken all IDEs and things that I work in. For one person, I don’t think I’ll pull the trigger, but if more people start responding and expressing that preference, I’ll agree to suck it up and change colors.
  • The code is now on GitHub. I’ll commit the code each time I record the video and tag it with a comment corresponding to the part of the series in question. The initial push to master just reads “Initial publish to Github,” but it corresponds to the code at the end of this clip. From here forward, I’ll sync them, though if you check the repo, it’ll probably run slightly ahead of me publishing the videos because I record the audio and do these writeups after the fact.
  • Again, the higher res you view this in, the better. I’d go for 1440p if you can.