DaedTech

Stories about Software

Chess TDD 59: King (Not) Moving into Check

This episode was relatively short and sweet.  Things actually went well, which is somehow surprising to me, particularly given the length of time between episodes.  In this episode, I used previously implemented code to stop the king from being able to move into check.  I’m not positive, but off the top of my head, this might be the last move consideration to implement.  I think my remaining cards are about testing activities and design considerations.  (Famous last words, though.)

What I accomplish in this clip:

  • Disallow king from moving into check.
  • Clean a bit of cruft out of an old unit test class.

Here are some lessons to take away:

  • Tech debt can happen even in a code base well covered by tests.  It manifests itself in a variety of ways, not the least of which is making it harder than it should be to get your bearings.  That’s on display a bit now in some of these episodes (though I’d argue there’s no crippling debt, you can still see the effect).
  • If you see commented out code, there’s only one thing to do with it, in my opinion: delete it without a second thought.  I mean, it’s not doing any good.  If you uncomment it and it even compiles, then… good?  Do you then leave it in, even though it’s dead?  Do you, for some reason, try to use it?  I mean, what good comes from it?  And, if you’re inclined to leave it, why do that?  Isn’t that what source control is for?
  • Things like what to name stuff and when to extract constants are not an exact science.  Bounce the question off of others and poll for opinions.  I suggest making it a practice not to be too uptight about such things, or programming collaboratively will wind up being a high-stress, neurotic endeavor.
  • When you get to the end of a complex chain of business rules applying to one part of the application, it may be the case that “the simplest thing that could work to get the tests all green” is actually rather complex.  This is (1) possibly unavoidable and/or (2) possibly a smell that you need to break the business logic apart.  In my case here, I’d say a little from each column.  The rules of chess are fairly complex, and it’s no secret that we’re carrying around some of the aforementioned tech debt here.  (For a rough idea of what this episode’s rule looks like in code, see the sketch after this list.)
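
To give a rough sense of what that rule looks like in code (the king not being allowed to move into check), here’s a minimal, hypothetical sketch.  Names like BoardCoordinate and IThreatEvaluator are placeholders rather than the actual types from the series, and board-boundary checks are omitted; the point is just that the king’s candidate moves get filtered through the question of whether the destination square is threatened by the other color.

    using System.Collections.Generic;

    public class BoardCoordinate
    {
        public int X { get; }
        public int Y { get; }
        public BoardCoordinate(int x, int y) { X = x; Y = y; }
    }

    public interface IThreatEvaluator
    {
        // True if any piece of the given color could capture on this square.
        bool IsThreatened(BoardCoordinate square, bool byBlack);
    }

    public class King
    {
        private readonly bool _isBlack;
        public King(bool isBlack) { _isBlack = isBlack; }

        // The king's normal one-square moves, filtered by the new rule:
        // no moving onto a square the opposing side threatens.
        // (Board-boundary and occupancy checks omitted for brevity.)
        public IEnumerable<BoardCoordinate> GetMovesThatAvoidCheck(
            BoardCoordinate current, IThreatEvaluator threats)
        {
            for (int dx = -1; dx <= 1; dx++)
            {
                for (int dy = -1; dy <= 1; dy++)
                {
                    if (dx == 0 && dy == 0)
                        continue;

                    var target = new BoardCoordinate(current.X + dx, current.Y + dy);

                    if (!threats.IsThreatened(target, byBlack: !_isBlack))
                        yield return target;
                }
            }
        }
    }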

Chess TDD 58: (Not) Castling through Check

In this episode, I took the newly minted threat evaluator and used it to prevent castling through check.  The most interesting thing to note in this episode, however, aside from continued progress toward the final product, was how some earlier sub-optimal architectural shortcuts came back to bite us (if only temporarily).  At this point, we’re pretty close to a full implementation, but if we were building more and more functionality on top of this, it would definitely be time to pause and clean house a bit.

What I accomplish in this clip:

  • Integrated threat evaluator with castling.
  • Prevented castling through check.

Here are some lessons to take away:

  • Working sporadically on a project incurs costs that go beyond just having to take time to get back up to speed.  In this case, I’ve changed the way I use Trello on a lot of projects, but I didn’t realize I hadn’t changed it here, so I wasted some time managing the board.  Not a big deal, but definitely an example of the hidden cost of sporadically working on a project.
  • I’m personally very context-dependent when it comes to outside-in TDD (ATDD) or so-called traditional TDD.  Your mileage may vary, but the lesson to take away is that you should be able to articulate your approach, whatever it is.  Why do you do it that way?  The answer shouldn’t just be, “mmm… dunno.”
  • I made the mistake in this video of renaming some tests while I had a failing test.  As I stated in the video, the risk of this causing a problem is nearly nil, so it may seem that I’m going on about theory for theory’s sake.  But this is really an important practice — not so that you can get the “TDD Badge of Purity,” but because there are times when making this mistake leads you to introduce more failures or to introduce a different reason for failure without realizing it.  Your green test suite is like an early detection system, the way that pain tells your body something is wrong.  If you start making changes with red tests, it’s like eating a big meal after leaving the dentist’s office loaded with Novocaine.  You might start chewing through your tongue without realizing it.
  • You saw the circular dependency thing come home to roost in this episode with a stack overflow exception.  Having circular dependencies in the code isn’t just bad for architectural purity.  It causes tangible pain.  (The sketch after this list shows the failure mode in miniature.)
  • You also saw me do something iffy to address the circular dependency problem.  Architectural/design concessions beget more concessions if left unchecked.  You don’t do one iffy thing and that’s the end of it.  Sooner or later, it becomes a slippery slope.
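
To see concretely why the circular dependency hurts, here’s a contrived sketch (these are not the actual classes from the episode): two objects that each delegate to the other, with no terminating condition, recurse until the runtime dies with a StackOverflowException.

    public class Board
    {
        private readonly ThreatEvaluator _threats;

        public Board()
        {
            _threats = new ThreatEvaluator(this);
        }

        // The board defers to the evaluator...
        public bool IsSquareSafe(int file, int rank)
        {
            return !_threats.IsThreatened(file, rank);
        }
    }

    public class ThreatEvaluator
    {
        private readonly Board _board;

        public ThreatEvaluator(Board board)
        {
            _board = board;
        }

        // ...and the evaluator defers right back to the board.  With no base
        // case, the mutual calls never bottom out, and the stack overflows.
        public bool IsThreatened(int file, int rank)
        {
            return !_board.IsSquareSafe(file, rank);
        }
    }

One common way to break such a cycle is to make the dependency one-directional, for instance by passing the board state the evaluator needs as a method argument rather than having it hold a reference back to the board.  That would be a cleaner alternative to the admittedly iffy workaround mentioned in the last bullet.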

Chess TDD 57: Finished Threat Evaluator

Once again, it’s been a while since the last episode in the series.  This time the culprit has been a relocation for the remainder of the winter, which meant I was dealing with moving-like logistics.  However, I’m holed up now in the south, where I’ll spend the winter working on my book, working remotely, and hopefully wrangling this Chess TDD series to a conclusion.  In this episode, I finished the threat evaluator, or at least got pretty close.  I called it a wrap around the 20 minute mark, without having written additional unit or acceptance tests to satisfy myself that it works in a variety of cases.  It now correctly understands that some moves aren’t threatening (e.g. a pawn moving straight ahead) and that pieces of the same color do not threaten one another.

Also, a quick editorial note.  There’s been some pretty intense wind here in Louisiana, and the power went out around the 18 minute mark.  I was surprised that all of the screencasting was saved without issue, but that VS barfed and filled my source file with all sorts of nulls.  I had to revert the changes and re-do them by hand prior to picking up where I left off.  So it’s conceivable that a word might be spelled slightly differently or something.  I’m not pulling a fast one; just dealing with adverse circumstances.
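
For a rough idea of the two rules mentioned above, here’s a hedged sketch.  The names and shapes are invented for illustration and aren’t the actual API in the code base.

    public class Piece
    {
        public bool IsBlack { get; set; }
        public bool IsPawn { get; set; }
    }

    public class ThreatEvaluator
    {
        // Does 'attacker' threaten the piece sitting fileDelta/rankDelta away?
        public bool Threatens(Piece attacker, Piece target, int fileDelta, int rankDelta)
        {
            // Pieces never threaten pieces of their own color.
            if (attacker.IsBlack == target.IsBlack)
                return false;

            // A pawn moving straight ahead (no file change) is a move, not a
            // capture, so it doesn't threaten the square in front of it.
            if (attacker.IsPawn && fileDelta == 0)
                return false;

            // Everything else falls through to ordinary movement-based logic.
            return CanReach(attacker, fileDelta, rankDelta);
        }

        private bool CanReach(Piece attacker, int fileDelta, int rankDelta)
        {
            // Placeholder for the real movement rules.
            return true;
        }
    }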

What I accomplish in this clip:

  • Finished threat evaluator (initial implementation).
  • Won a decisive battle in the age-old man vs. nature conflict.

Here are some lessons to take away:

  • Save early and often.
  • Writing a test is a good, exploratory way to get back up to speed and see where you left off with an implementation.  Write a test that needs to pass, and then, if it fails, make it pass.  If it passes, then, hey, great, because it’s a good test to have anyway.  This (writing tests to see where you left off) is not to be confused with writing a test that you expect to fail and seeing it pass.  (There’s a small example of this after the list.)
  • Anything you can do to tighten up the automated test feedback loop (NCrunch, for instance) is critical.  The idea here is to get in the habit of using unit tests as the quickest, most efficient way to corroborate or disprove your understanding of the code.  Run experiments!
  • If, while getting a red test green, you have an idea for a more elegant refactoring, store that away until you’ve done something quick and simple to get to green.  Don’t go for the gold all at once.  Get it working, then make it elegant.
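
In the spirit of that “write a test to see where you left off” point, and reusing the hypothetical ThreatEvaluator sketched above (with NUnit purely as an example framework), a re-orientation test might look like this: assert something you believe the code already does, and let the result tell you whether you have a keeper of a regression test or your next red test.

    using NUnit.Framework;

    [TestFixture]
    public class ThreatEvaluator_Should
    {
        [Test]
        public void Treat_a_forward_pawn_move_as_nonthreatening()
        {
            var evaluator = new ThreatEvaluator();
            var whitePawn = new Piece { IsBlack = false, IsPawn = true };
            var blackPiece = new Piece { IsBlack = true, IsPawn = false };

            // A pawn sitting directly in front of an enemy piece (no file change).
            bool threatens = evaluator.Threatens(whitePawn, blackPiece, fileDelta: 0, rankDelta: 1);

            // Green: a regression test worth keeping.  Red: the next thing to build.
            Assert.That(threatens, Is.False);
        }
    }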

Chess TDD 56: Threatened Pieces

It’s been a while since my last post in this series, and for that I kind of apologize.  Normally, I’m apologizing because I’ve had too much to do, and the gap renders things a little disjoint.  But this time, I took a break for the holidays and left off at a natural stopping point, so the disjoint-ness is kind of moot.  I apologize only that it’s been a while, but I don’t feel culpable.  Anyway, in this episode, I start to tackle the concept of check by tackling the slightly more abstract concept of “threatened pieces.”

What I accomplish in this clip:

  • Moved on to working on the concept of check.
  • Laid the foundation for a general evaluation of whether a square on the board is threatened.

Here are some lessons to take away:

  • One of the best ways to keep things moving efficiently when coding is to find ways to slice off bits of new functionality in new classes.  Getting back to TDD around a new class makes it very easy to implement functionality.  The trick is in figuring out logical seams to do this, and that comes with time and practice.
  • Classes without mutable state are a lot easier to reason about, to test, and to work with.
  • Passing booleans into methods is a smell, because usually it means that you’re passing in a control flow concern to tell the method how to behave.  In this episode, I have a boolean argument that is actually a conceptual piece of data, so it’s not problematic vis-à-vis the single responsibility principle, but it is, nevertheless, a smell to keep an eye on and, perhaps, to move away from later.  (The contrived example after this list shows the distinction.)
  • During TDD, it is fine (and even expected) to do obtuse things to get the early tests to pass, but only if each thing that you’re doing advances you toward the eventual solution and is sequentially less obtuse.  That part is critical.
  • A good trailing indicator that you can use for whether or not you’re biting off too much implementation with each test is what happens when you finish your changes to the production code.  Do all tests immediately go green, or do you have unexpected failures that cause you to need to tweak and tinker with the code that you’ve written?  Immediate green is a sign that you’re in the Goldilocks zone, while tinkering is a sign that you’re biting off more than you can chew.
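
To make the boolean-parameter distinction from the list above concrete, here’s a contrived example (not the episode’s actual method).  The first boolean is a control flag telling the method which behavior to run, which usually means two methods are trapped in one; the second is a piece of domain data that happens to be a boolean.

    public class MoveReporter
    {
        // Smell: the caller steers the method's control flow with a flag.
        public string Describe(string move, bool asHtml)
        {
            return asHtml ? "<b>" + move + "</b>" : move;
        }
    }

    public class SquareChecker
    {
        // Less worrisome: the boolean describes the data being asked about
        // (which color's threats we care about), not how the method should behave.
        public bool IsThreatened(int file, int rank, bool byBlackPieces)
        {
            // Real threat logic omitted; trivially stubbed for the sketch.
            return false;
        }
    }

The usual refactoring for the first case is to split it into two methods (say, DescribeAsText and DescribeAsHtml) so that callers stop passing behavior selectors around.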

Chess TDD 55: Got the Hang of Castling

In this episode, it seems I finally got the hang of castling.  It’s been a relatively long journey with it, but I can now successfully detect castling situations that involve any combination of rook/king prior movement, as well as interceding pieces.  The only thing left is the relative edge case of the castling through check scenario, but I’ll consider that to be part of the implementation of the check concept.
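
For a rough picture of what that detection boils down to (hypothetical names; the real implementation in the series is shaped differently), castling on a given side remains available only if neither the king nor that side’s rook has moved and no pieces sit between them.  The check-related condition gets layered on separately, as noted above.

    using System.Linq;

    public class CastlingRule
    {
        // True if castling on this side is still available, ignoring the
        // castling-through-check condition (handled as part of "check" later).
        public bool IsCastlingAvailable(
            bool kingHasMoved,
            bool rookHasMoved,
            bool[] squaresBetweenAreOccupied)
        {
            // Any prior movement of either piece permanently rules castling out.
            if (kingHasMoved || rookHasMoved)
                return false;

            // Every square between the king and the rook must be empty.
            return squaresBetweenAreOccupied.All(occupied => !occupied);
        }
    }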

What I accomplish in this clip:

  • Re-hydrated the acceptance test that I’d left off with last time and got it passing.
  • Refactored toward a less naive implementation of blocking, and then got it passing for all cases on both sides.

Here are some lessons to take away:

  • If you’ve written a failing acceptance test and are struggling to see how to get it to pass, see if you can express the same scenario with a more granular unit test.  This can focus you and help prevent your brain from spinning.  (A small example of this follows the list.)
  • If you have the feeling that you’ve implemented something before (or a teammate has), it is definitely worth pausing to investigate.  You don’t want multiple oddball solutions to the same problem, which can be as damaging as copy and paste programming.
  • You’ll never get away from making mistakes of all sorts.  Take note how in this episode, I probably got a bit too ambitious with solving the whole problem at once, which led to a lot of staring at the screen.  I probably should have made use of more failing tests to get there more gradually.
  • When you feel that you have coder’s block and aren’t sure what to do next, that’s the best time to implement something naive and ugly.  There’s no need to be afraid, since you’ll refactor shortly, but just getting anything out there can un-stick you.
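
To illustrate the first bullet above, reusing the hypothetical CastlingRule sketch from the top of this post (and NUnit purely as an example framework): if a broad acceptance scenario like “white cannot castle queenside after the rook has moved” is proving hard to drive to green, a narrower unit test pins down a single condition in isolation and gives you a smaller thing to make pass.

    using NUnit.Framework;

    [TestFixture]
    public class CastlingRule_Should
    {
        // A granular restatement of one slice of the acceptance scenario:
        // prior rook movement alone is enough to rule castling out.
        [Test]
        public void Disallow_castling_when_the_rook_has_moved()
        {
            var rule = new CastlingRule();

            bool available = rule.IsCastlingAvailable(
                kingHasMoved: false,
                rookHasMoved: true,
                squaresBetweenAreOccupied: new[] { false, false, false });

            Assert.That(available, Is.False);
        }
    }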