DaedTech

Stories about Software

Chess TDD 61: Testing an Actual Game

Editorial Note: I was featured on another podcast this week, this one hosted by Pete Shearer.  Click here to give it a listen.  It mostly centers around the expert beginner concept and what it means in the software world.  I had fun recording it, so hopefully you’ll have fun listening.

This post is one where, in earnest, I start testing an actual game.  I don’t get as far as I might like, but the concept is there.  By the end of the episode, I have acceptance tests covering all initial white moves and positions, so that’s a good start.  And, with the test constructs I created, it won’t take long to be able to say the same about the black pieces.

I also learned that building out all moves for an entire chess game would be quite an intense task if done manually.  So, I’d be faced with the choice between recording a lot of grunt work and implementing a sophisticated game parsing scheme, which I’d now consider out of scope.  As a result, I’ll probably try to pick some other, representative scenarios and go through those so that we can wrap the series.

What I accomplish in this clip:

  • Built out acceptance tests covering all initial white moves and positions.
  • Created test constructs that will make covering the black pieces quick work.

Here are some lessons to take away:

  • No matter where it occurs, try to avoid redundancy when you’re programming.
  • If, during the course of your work, you ever find yourself bored or on “auto-pilot,” that’s a smell.  You’re probably working harder instead of smarter.  When you find yourself bored, ask yourself how you might automate or abstract away what you’re doing.
  • When you’re writing acceptance tests, it’s important to keep them readable by humans.
  • A seldom-considered benefit to pairing or having someone review your coding is that you’ll be less inclined to do a lot of laborious, obtuse work.
  • Asserting things in control flow scopes can be a problem — if you’re at iteration 6 out of 8 in a while loop when things fail, it’s pretty hard to tell that when you’re running the whole test suite.
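To make that last point concrete, here is a small sketch of the problem in Python (the series itself is in C#, and these names are illustrative, not from the actual code base): a bare assertion inside a loop tells you nothing about which iteration failed, while an assertion message that carries the loop variable pinpoints it immediately.

```python
# Illustrative sketch of the "assertion inside a loop" problem.
# legal_first_moves is a stand-in for real move generation.

def legal_first_moves(file):
    """Pretend move generator: a new pawn may advance one or two ranks."""
    return [(file, 3), (file, 4)]

def check_all_pawns_opaque():
    # Bad: if this fails on file 6 of 8 during a full-suite run,
    # the failure gives no hint about which iteration broke.
    for file in range(1, 9):
        assert (file, 3) in legal_first_moves(file)

def check_all_pawns_diagnosable():
    # Better: put the loop variable into the assertion message so a
    # failure immediately identifies the offending iteration.
    for file in range(1, 9):
        moves = legal_first_moves(file)
        assert (file, 3) in moves, f"pawn on file {file}: expected one-square advance in {moves}"
```

The same idea applies in any test framework: parameterized tests (one case per iteration) are usually an even cleaner fix than assertion messages.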

Chess TDD 60: Wrapping Initial Development

There is a bit of symmetry to this episode that may interest only me.  It is the 600th post to be published on the blog, and it is the 60th post in the ChessTDD series.  I wouldn’t have thought the series accounted for 10% of my posts, but, there it is.  Believe it or not, this post is about wrapping initial development on the project.  In other words, I have no more functionality cards to implement.  From here on in, it’s going to be constructing test scenarios and addressing any shortcomings that they reveal.  (Not ideal, but it’s hard to get user feedback in a one-person show with no prod environment.)

I also, after some time away, have a bit more clarity on what I want to do with this going forward, so you’ll hear some mention of it as I narrate the videos.  I’m looking to wrap up the YouTube series itself and then to use it as the centerpiece and starting point of a video product that I have in mind.  Stay tuned for updates down the line.

What I accomplish in this clip:

  • Get re-situated after a hiatus and clean up/reorganize old cards.
  • A few odds and ends, and laying the groundwork for the broader acceptance testing.

Here are some lessons to take away:

  • An interesting definition of done when it comes to software work goes beyond completeness and even shipping.  You can say that something is done when it has demonstrably added value somehow (it has sold or helped produce revenue or something).
  • Writing unit tests is a great way to turn hypotheses that you have about the code base into a productive regression test suite.  It’s also a great way to confirm or refute your understanding of the code.
  • It bears repeating over and over, but avoid programming by coincidence.  If you don’t understand why a change to your code had the effect that it had, stop what you’re doing and develop that understanding.  You cannot afford to have magic and mystery in your code.
  • There shouldn’t be any line of code in your code base that you can delete without a test turning red.  This isn’t about TDD or about code coverage — it’s about the more general idea that you should be able to justify and express the necessity of every line of code in the code base.  If removing code doesn’t break anything, then remove the code!


Chess TDD 59: King (Not) Moving into Check

This episode was relatively short and sweet.  Things actually went well, which somehow surprises me, particularly given the length of time between episodes.  In this episode I used previously implemented code to stop the king from being able to move into check.  I’m not positive, but off the top of my head, this might be the last move consideration to implement.  I think my remaining cards are about testing activities and design considerations.  (Though, famous last words.)

What I accomplish in this clip:

  • Disallow king from moving into check.
  • Clean a bit of cruft out of an old unit test class.

Here are some lessons to take away:

  • Tech debt can happen even in a code base well covered by tests.  It manifests itself in a variety of ways, not the least of which is making it harder than it should be to get your bearings.  That’s on display a bit in some of these episodes (though I’d argue there’s no crippling debt, you can still see the effect).
  • If you see commented out code, there’s only one thing to do with it, in my opinion: delete it without a second thought.  I mean, it’s not doing any good.  If you uncomment it and it even compiles, then… good?  Do you then leave it in, even though it’s dead?  Do you, for some reason, try to use it?  I mean, what good comes from it?  And, if you’re inclined to leave it, why do that?  Isn’t that what source control is for?
  • Things like what to name stuff and when to extract constants are not an exact science.  Bounce the question off of others and poll for opinions.  I suggest making it a practice not to be too uptight about such things, or programming collaboratively will wind up being a high-stress, neurotic endeavor.
  • When you get to the end of a complex chain of business rules applying to one part of the application, it may be the case that “the simplest thing that could work to get the tests all green” is actually rather complex.  This is (1) possibly unavoidable and/or (2) possibly a smell that you need to break the business logic apart.  In my case here, I’d say a little from each column.  The rules of chess are fairly complex, but it’s no secret that we’re carrying around some of the aforementioned tech debt here.

Chess TDD 58: (Not) Castling through Check

In this episode, I took the newly minted threat evaluator and used it to prevent castling through check.  The most interesting thing to note in this episode, however, aside from continued progress toward the final product, was how some earlier sub-optimal architectural shortcuts came back to bite us (if only temporarily).  At this point, we’re pretty close to a full implementation, but if we were building more and more functionality on top of this, it would definitely be time to pause and clean house a bit.

What I accomplish in this clip:

  • Integrated threat evaluator with castling.
  • Prevented castling through check.

Here are some lessons to take away:

  • Working sporadically on a project imposes costs beyond just having to take time to get back up to speed.  In this case, I’ve changed the way I use Trello on a lot of projects, but didn’t realize I hadn’t changed it here, so I wasted some time managing the board.  Not a big deal, but definitely an example of the hidden cost of working on a project sporadically.
  • I’m personally very context-dependent when it comes to outside-in TDD (ATDD) or so-called traditional TDD.  Your mileage may vary, but the lesson to take away is that you should be able to articulate your approach, whatever it is.  Why do you do it that way?  The answer shouldn’t just be, “mmm… dunno.”
  • I made the mistake in this video of renaming some tests while I had a failing test.  As I stated in the video, the risk of this causing a problem is nearly nil, so it may seem that I’m going on about theory for theory’s sake.  But this is really an important practice — not so that you can get the “TDD Badge of Purity,” but because there are times when making this mistake leads you to introduce more failures or to introduce a different reason for failure without realizing it.  Your green test suite is like an early detection system, the way that pain tells your body something is wrong.  If you start making changes with red tests, it’s like eating a big meal after leaving the dentist’s office loaded with Novocaine.  You might start chewing through your tongue without realizing it.
  • You saw the circular dependencies thing come home to roost in this episode with a stack overflow exception.  Having circular dependencies in the code isn’t just bad for architectural purity.  It causes tangible pain.
  • You also saw me do something iffy to address the circular dependency problem.  Architectural/design concessions beget more concessions if left unchecked.  You don’t do one iffy thing and that’s the end of it.  Sooner or later, it becomes a slippery slope.
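The stack overflow from the circular dependency can be reduced to a few lines.  Here is a minimal illustration in Python rather than the series’ C# (the class and method names are hypothetical, not the actual ones); in .NET this surfaces as a StackOverflowException, while Python raises RecursionError.

```python
# Minimal sketch of a circular dependency biting at runtime.

class ThreatEvaluator:
    def __init__(self):
        self.board = None  # wired up after construction: the smell

    def is_threatened(self, square):
        # To decide whether a square is threatened, ask the board
        # for the moves that could reach it...
        return self.board.any_move_targets(square)

class Board:
    def __init__(self, evaluator):
        self.evaluator = evaluator
        evaluator.board = self  # close the cycle

    def any_move_targets(self, square):
        # ...but move generation asks the evaluator about threats,
        # so A calls B calls A calls B, until the stack runs out.
        return self.evaluator.is_threatened(square)

board = Board(ThreatEvaluator())
try:
    board.any_move_targets("e4")
except RecursionError:
    print("stack overflow: the circular dependency bites")
```

Breaking the cycle — for instance by passing the move list into the evaluator instead of handing it the whole board — removes both the runtime failure and the architectural smell.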

Chess TDD 57: Finished Threat Evaluator

Once again, it’s been a while since the last episode in the series.  This time the culprit has been a relocation for the remainder of the winter, which meant I was dealing with moving-like logistics.  However, I’m holed up now in the south, where I’ll spend the winter working on my book, working remotely, and hopefully wrangling this Chess TDD series to a conclusion.  In this episode, I finished the threat evaluator, or at least got pretty close.  I called it a wrap around the 20-minute mark, without having written additional unit or acceptance tests to satisfy myself that it works in a variety of cases.  It now correctly understands that some moves aren’t threatening (e.g. a pawn moving straight ahead) and that pieces of the same color do not threaten one another.

Also, a quick editorial note.  There’s been some pretty intense wind here in Louisiana, and the power went out around the 18-minute mark.  I was surprised that all of the screencasting was saved without issue, but VS barfed and filled my source file with all sorts of nulls.  I had to revert my changes and redo them by hand prior to picking up where I left off.  So it’s conceivable that a word might be spelled slightly differently or something.  I’m not pulling a fast one; just dealing with adverse circumstances.

What I accomplish in this clip:

  • Finished threat evaluator (initial implementation).
  • Won a decisive battle in the age-old man vs. nature conflict.

Here are some lessons to take away:

  • Save early and often.
  • Writing a test is a good, exploratory way to get back up to speed to see where you left off with an implementation.  Write a test that needs to pass, and then, if it fails, make it pass.  If it passes, then, hey, great, because it’s a good test to have anyway.  This (writing tests to see where you left off) is not to be confused with writing a test that you expect to fail and seeing it pass.
  • Anything you can do to tighten up the automated test feedback loop (a tool like NCrunch, for instance) is critical.  The idea here is to get in the habit of using unit tests as the quickest, most efficient way to corroborate or disprove your understanding of the code.  Run experiments!
  • If, while getting a red test green, you have an idea for a more elegant refactoring, store that away until you’ve done something quick and simple to get to green.  Don’t go for the gold all at once.  Get it working, then make it elegant.