DaedTech

Stories about Software


ChessTDD 28: Preparing for Idiomatic SpecFlow

This week I’m starting the process of engaging in more yak-shaving, but I think it’s important yak-shaving. If we’re going to be doing this SpecFlow thing in the series, let’s get it done right. On a post I made last week, Darren Cauthon took the time to put together this gist. I read through it and my eyes bugged out. That kind of visualization is solid gold for this problem space. How powerful is it to be able to show other developers (and hypothetical business stakeholders) these ASCII chess boards and say, “look, this movement works?!” So, I’m asking you to indulge me in the journey to get there.

Here’s what I accomplish in this clip:

  • Created AsciiBoardBuilder class to take strings like “BQ” and “WP” and turn them into pieces added to a Board instance.
  • Got half-ish way through implementing that class.
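Since the class is only partway done in the clip, here’s a rough sketch of where something like AsciiBoardBuilder could end up.  To be clear, this is my guess at the shape of it, not the actual code from the video — the piece classes, BoardCoordinate, and the Board.AddPiece signature are assumptions about the series’ API:

```csharp
// Hypothetical sketch only -- the real class is built test-first in the video.
public class AsciiBoardBuilder
{
    // Parses a token like "BQ" (black queen) or "WP" (white pawn)
    // and places the corresponding piece at the given coordinate.
    public void AddPiece(Board board, string token, int column, int row)
    {
        if (token == null || token.Length != 2)
            throw new ArgumentException("Expected a two-character token like \"WP\".");

        bool isWhite = token[0] == 'W';
        Piece piece;
        switch (token[1])
        {
            case 'P': piece = new Pawn(isWhite); break;
            case 'N': piece = new Knight(isWhite); break;
            case 'B': piece = new Bishop(isWhite); break;
            case 'R': piece = new Rook(isWhite); break;
            case 'Q': piece = new Queen(isWhite); break;
            case 'K': piece = new King(isWhite); break;
            default: throw new ArgumentException("Unrecognized piece token: " + token);
        }
        board.AddPiece(piece, new BoardCoordinate(column, row));
    }
}
```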

Here are some lessons to take away:

  • I created something called “AsciiBoardBuilder.”  There was a time when I would have immediately thought to have it implement an IBoardBuilder interface, but I’ve gravitated away from this over the years as it tends to lead to unnecessary abstraction at times.  We can always do this later if we need some other means of constructing boards aside from just using the Board class itself.
  • In case you’ve forgotten, one trick I use is to create the production class in the same file as the test class at first, when convenient.  NCrunch runs a little quicker this way and you don’t have to flip back and forth between two files.  I leave it in there until it no longer makes sense to, usually when some other production class needs to use my new class.
  • I did something that’s a little more complex than your average TDD scenario, wherein I was modifying two different production classes as I went.  This isn’t particularly common and you might not want to do it if you’re new to this or you run into trouble trying it.  But if it works for you, there’s no reason not to do this.
  • If you go look up a solution, such as how to implement a bit of Linq-fu, it’s probably worth writing an extra test or two that you expect to go green just to confirm that you really understand the behavior of the solution that you found.
  • What I did with factoring out GeneratedBoard may not be to your liking.  This has the upside of eliminating duplication but the downside of introducing a bit of indirection.  For me, the improvement in how succinct the test cases are is worth it, but you may not agree.  This isn’t always an exact science — taste does factor in.
  • I’m pretty rigid about strongly favoring one assert per test method.  Not everyone feels this way, but I felt it worth refactoring to assert only one thing: that the piece is a white bishop.  (As opposed to asserting that it’s white and then that it’s a bishop).  I suggest that unless there’s really no option but to have multiple asserts, you factor toward one.
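To make the one-assert refactoring concrete, here’s the shape of the change.  The test and method names here are illustrative, not the actual code from the clip:

```csharp
[TestMethod]
public void Returns_White_Bishop_For_Token_WB()
{
    var piece = Target.GeneratePiece("WB");  // hypothetical method under test

    // Instead of two asserts (Assert.IsTrue(piece.IsWhite) followed by
    // Assert.IsInstanceOfType(piece, typeof(Bishop))), a single assert
    // states the one thing this test cares about:
    Assert.IsTrue(piece is Bishop && piece.IsWhite);
}
```

The upside is that a failure here is unambiguous: the thing returned was not a white bishop, full stop.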


Finding Topics as a New Blogger

One of the serious difficulties about the life of a traveling consultant is avoiding weight gain. While this might sound like such a first world problem as to be 0.5 world problem, it’s actually a serious struggle. Living in a hotel, every meal is a restaurant meal, and restaurants tend not to optimize for low caloric intake. Recently, though, I’ve turned the tide and started to make a pretty successful foray into enemy territory; I’ve lost about 8 pounds over the last few weeks.

As someone who already exercises regularly and is conscious of my caloric intake, I pulled a different sort of lever to make this happen. I eliminated decisions about eating by making them ahead of time. To wit, I set a rule for myself that until I became about 15 pounds lighter, I would not eat any desserts, drink any alcohol or eat any snacks. This is surprisingly easy to do compared to making decisions like that every time I’m out with friends or feel hungry or am offered a cannoli or something. Standing in front of the pantry, it’s a lot easier to say simply, “oh, I don’t have snacks anymore” than “well, I’ll have a few handfuls of popcorn to tide myself over, and one cookie is probably okay…” The former is 0 decisions while the latter is endless decisions in the form of “should I eat this or that?” and “should I stop eating this now?” Life gets easier when you pre-make decisions.



Getting Started as a Blogger: A Contract with Your Readers

This post has been a draft without words for a long time, evoking images of a jar of pickles that’s been in the fridge for 15 months.  Someone says to me something like “man, I really want to start blogging but __________” and I say, “Don’t let that stop you — tell you what, I’ll write some tips and tricks.”  The first time this happened was me buying the pickles and putting them in the fridge.  Each subsequent time, I stick my head in and say, “yep, still have pickles and I’m totally going to eat them one of these nights.”  Well, today is pickle-eating day.

This post is aimed at someone who either has no blog or whose blog consists of a few posts described by the metaphor in the last paragraph.  You may or may not have a blog, but you’re interested not so much in having a blog as in being a blogger.  This is an important but subtle distinction, since it flips the matter from achievement to identity — you can move a “make a blog” or “make a post” card into another column in Trello, but not so much with “be a blogger.”

Also, caveat emptor.  What I’m telling you here essentially amounts to, “here’s what I’ve learned through some reading and collaborating and a whole lot of trial and error.”  I have a pretty decent reader base and sometimes I even get paid to write (and, recently, to help others get better at writing), but I am not Lifehacker or some kind of blogging empire.  I’m a guy that’s been treating the world to two or three rants per week for a number of years.




ChessTDD 27: Parameterized Acceptance Tests and Scenarios

At this point, I’m not going to bother with excuses any longer and I’ll just apologize for the gaps in between these posts.  But, I will continue to stick with it, even if it takes a year; I rarely get to program in C# anymore, so it’s fun when I do.  In this episode, I got a bit more sophisticated in the use of SpecFlow and actually got the acceptance tests into what appears to be reasonable shape moving forward.  I’m drawing my knowledge of SpecFlow from this Pluralsight course.  It’s a big help.

Here’s what I accomplish in this clip:

  • Implemented a new SpecFlow feature.
  • Discovered that the different SpecFlow features can refer to the same C# class.
  • Implemented parameterized SpecFlow components and then scenarios.
  • Got rid of the “hello world-ish” feature that I had created to get up to speed in favor of one that’s a lot more useful.
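For reference, a parameterized SpecFlow scenario of the sort described above looks roughly like the following Gherkin.  This is illustrative only, not the exact feature text from the video:

```gherkin
Feature: Pawn Movement

Scenario Outline: Pawn can advance from its starting square
    Given a board with a white pawn at <start>
    When I ask for the moves from <start>
    Then the available moves should include <destination>

    Examples:
      | start | destination |
      | b2    | b3          |
      | b2    | b4          |
```

SpecFlow runs the scenario once per row in the Examples table, substituting the placeholders, which is what makes parameterization such a win for coverage without copy-paste.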

Here are some lessons to take away:

  • When using new tools, you’re going to flounder a bit before the aha! moments.  That’s okay and it happens to everyone.
  • As your understanding of tooling and your environment evolves, be sure to evolve with it.  Don’t remain rigid.
  • Fight the urge to copy and paste.  It’s always a fight for me, too, but when I don’t have a better solution in the moment for getting a test green than duplicating code, I force myself to feel the pain and re-type it.  This helps me remember that I’m failing and that I need to make corrections.
  • When I got my bearings in SpecFlow and realized that I had things from the example that I didn’t actually need, I deleted them forthwith.  You can always add stuff back later, but don’t leave things you don’t need lying around.
  • Notice how much emphasis I placed throughout the clip on getting back to green as frequently as possible.  I could have parameterized everything wholesale in a broad refactoring, but I broke it up, checking for green as frequently as possible.
  • Sometimes sanity checks are necessary, especially when you don’t really know what you’re doing.  I was not clear on why 8 was succeeding, since I was querying for a piece off the board.  Just to make sure that I wasn’t messing up the setup and it was never even checking for 8, I added that exception in temporarily.  Use those types of checks to make sure you’re not tilting at windmills.  As I’ve said, getting red when you expect red is important.
  • No matter what you’re doing, always look to be refactoring and consolidating to keep your code tight.


ChessTDD 26: At Last, Acceptance Tests

Let’s get the excuses out of the way: busy, more work coming in, vacation, yada-yada.  Sorry for the delay since last time.  This episode is a short one just to get back on track.  I haven’t been writing much code lately in general, and I was already stumbling blindly through SpecFlow adoption, so this was kind of just to get my sea legs back.  I wanted to get more than one acceptance test going and start to get a feel for how to divide the acceptance testing effort up among “features” going forward.

Here’s what I accomplish in this clip:

  • Cleaned up the dead code oops from last time.
  • Defined an actual feature file for SpecFlow, with four real acceptance tests.

Here are some lessons to take away:

  • I don’t really know SpecFlow very well, so the floundering around that you see after pasting in the same text seemed to fix the problem is my way of avoiding programming by coincidence.
  • No matter what you’re doing (unit tests, production code, or, in this case, acceptance tests) you should always strive to keep your code as clean and conformant to the SRP as you can.
  • When you don’t know a tool, go extra slowly with things like rename operations and whatnot.  I’m referring to me changing the name of the SpecFlow files.  It’s easy to get off the rails when you don’t know what’s going on, so make sure you’re building and your tests are green frequently.


ChessTDD 25: Yak-Shaving with a visit to trusty old ExtendedAssert

I started out with the intent of cleaning up a mistake in the new board setup functionality and moving on to more acceptance tests.  However, I got sidetracked a bit and wound up adding some functionality to my ExtendedAssert utility and briefly contemplating the interaction of some things in my object graph (once a week is better than I’d been doing, but 20 minutes per week isn’t a lot of time to keep it all fresh in my head).

Here’s what I accomplish in this clip:

  • Got at least a start at having tests around PiecePositioner.
  • Fixed the issue where SetupStandardBoard was making both pawn rows white.
  • Added functionality to my ExtendedAssert utility.
  • Added a way for consumers of the Board class to tell what size the board is after creating it.

Here are some lessons to take away:

  • I make a lot of dumb mistakes.  It’s TDD and the trusted test suite that let me flounder for mere seconds looking for them, rather than for frustrated minutes or hours.
  • If you can avail yourself of NCrunch, I highly recommend it.  It’s incredibly handy to be able to visualize the path through the code that your unit test is driving.
  • A valuable part of TDD is stopping to reason about what your public methods will do when you feed them nonsense.
  • Duplication is icky anywhere you find it, including customized assert utilities.
  • Writing test infrastructure code (e.g. custom asserts) gets deeply weird in a hurry, since you have to either write unit tests for the unit test helping code or else flip the script and use your production code to ‘test’ what you’re doing in the test code.  I favor the latter approach.  Test and production code are like balancing your checkbook, so it seems reasonable to me to use either one as validation of the other.  YMMV.
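For the curious, a custom assert of the general sort that lives in my ExtendedAssert utility might look something like this.  This is a sketch of the concept, not the actual ExtendedAssert code:

```csharp
public static class ExtendedAssert
{
    // Fails the test unless invoking the action throws an exception of type TException.
    public static void Throws<TException>(Action action) where TException : Exception
    {
        try
        {
            action();
        }
        catch (TException)
        {
            return;  // expected exception occurred -- the assert passes
        }
        Assert.Fail("Expected exception of type " + typeof(TException).Name + " was not thrown.");
    }
}
```

Notice the weirdness described above: nothing tests this method except the production code you point it at.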


I’ll Take a Crack at Defining Tech Debt

Not too long ago, on one of my Chess TDD posts, someone commented asking me to define technical debt because I had tossed the term out while narrating a video in that series. I’ll speak to it a bit because it’s an extremely elegant metaphor, particularly in the context of clarifying things to business stakeholders. In 1992, Ward Cunningham coined the term in a report that he wrote:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

Everyone with an adult comprehension of finance can understand the concept of fiscal debt and interest. Whether it was a bad experience with one’s first credit card or a car or home loan, people understand that someone loaning you a big chunk of money you don’t have comes at a cost… and not just the cost of paying back the loan. The greater the sum and the longer you take to pay it back, the more it costs you. If you let it get out of hand, you can fall into a situation where you’ll never be capable of paying it back and are just struggling to keep up with the interest.

Likewise, just about every developer can understand the idea of a piece of software getting out of hand. The experience of hacking things together, prototyping and creating skunk-works implementations is generally a fun and heady one, but the idea of living for months or years with the resulting code is rather horrifying. Just about every developer I know has some variant of a story where they put something “quick and dirty” together to prove a concept and were later ordered, to their horror, to ship it by some project or line manager somewhere.

The two parallel concepts make for a rather ingenious metaphor, particularly when you consider that the kinds of people for whom we employ metaphors are typically financially savvy (“business people”). They don’t understand “we’re delivering features more slowly because of the massive amount of nasty global state in the system” but they do understand “we made some quality sacrifices up front on the project in order to go faster, and now we’re paying the ‘interest’ on those, so we either need to suck it up and fix it or keep going slowly.”

Since the original coining of this term, there’s been a good bit of variance in what people mean by it. Some people mean the generalized concept of “things that aren’t good in the code” while others look at specifics like refactoring-oriented requirements/stories that are about improving the health of the code base. There’s even a small group of people that take the metaphor really, really far and start trying to introduce complex financial concepts like credit swaps and securities portfolios to the conversation (it always seems to me that these folks have an enviable combination of lively imaginations and surplus spare time).

I can’t offer you any kind of official definition, but I can offer my own take. I think of technical debt as any existing liability in the code base (or in tooling around the code base such as build tools, CI setup, etc) that hampers development. It could be code duplication that increases the volume of code to maintain and the likelihood of bugs. It could be a nasty, tangled method that people are hesitant to touch and thus results in oddball workarounds. It could be a massive God class that creates endless merge conflicts. Whatever it is, you can recognize it by virtue of the fact that its continued existence hurts the development effort.

I don’t like to get too carried away trying to make technical debt line up perfectly with financial debt. Financial debt is (usually) centered around some form of collateral and/or credit, and there’s really no comparable construct in a code base. You incur financial debt to buy a house right now and offer reciprocity in the form of interest payments and ownership of the house as collateral for default. The tradeoff is incredibly transparent and measured, with fixed parameters and a set end time. You know up front that to get a house for $200,000, you’ll be paying $1000 per month for 30 years and wind up ‘wasting’ $175,000 in interest. (Numbers made up out of thin air and probably not remotely realistic).
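For reference, the transparency described above falls out of the standard fixed-rate amortization formula (my addition, for illustration):

```latex
M = P \cdot \frac{r(1+r)^n}{(1+r)^n - 1}
```

where M is the monthly payment, P the principal, r the monthly interest rate, and n the number of payments. At, say, a 4.5% annual rate, a $200,000 loan over 360 months works out to roughly $1,013 per month and about $165,000 in total interest, so the made-up numbers above are at least in the right ballpark.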


Tech debt just doesn’t work this way. There’s nothing firm about it; never do you say “if I introduce this global variable instead of reconsidering my object graph, it will cost me 6 hours of development time per week until May of 2017, when I’ll spend 22 hours rethinking the object graph.” Instead, you get lazy, introduce the global variable, and then, six months later, find yourself thinking, “man, this global state is killing me! Instead of spending all of my time adding new features, I’m spending 20 hours per week hunting down subtle defects.” While fiscal debt is tracked with tidy, future-oriented considerations like payoff quotes and asset values, technical debt is all about making visible the amount of time you’re losing to prior bad decisions or duct-tape fixes (and sometimes to drive home the point that making the wrong decision right now will result in even more slowness sometime in the future).

So in my world, what is tech debt? Quite simply, it’s the amount of wasted time that liabilities in your code base are causing you or the amount of wasted time that you anticipate they will cause you. And, by the way, it’s entirely unavoidable. Every line of code that you write is a potential liability because every line of code that you write is something that will, potentially, need to be read, understood, and modified. The only way to eliminate tech debt is to solve people’s problems without writing code, and, assuming you’re not in the wrong line of work, you’re going to write code, so the best you can do is make an ongoing, concerted, and tireless effort to minimize it.

(And please don’t take away from this, “Erik thinks writing code is bad.” Each line of code is an incremental liability, sure, but as a software developer, writing no code is a much larger, more immediate liability. It’s about getting the job done as efficiently and with as little liability as you can.)


ChessTDD 24: Cleaning Up for Acceptance Testing

It’s been a little while since I posted to this series, largely because of the holiday time.  I’ve wrapped up some things that I’d been working on and am hoping here in the next few weeks to have some time to knock out several episodes of this, so look for an elevated cadence (fingers crossed).  To get back into the swing of things in this episode, I’m going to pull back a bit from the acceptance test writing and clean up some residual cruft so that when I resume writing the acceptance tests in earnest, I’m reacquainted with the code base and working with cleaner code.

Here’s what I accomplish in this clip:

  • Refactored awkward test setup to push board population to production code.
  • Got rid of casting in acceptance tests (and thus prepared for a better implementation as we start back up).

Here are some lessons to take away:

  • Refactoring is an interesting exercise in the TDD world when you’re moving common functionality from tests to production.  Others’ mileage may vary, but I consider this to be a refactoring, so even moving this from test code to production code, I try to stay green as I go.
  • I made a mistake in moving on during a refactoring when my dots went dark green.  Turns out they were green for the project I was in even while another project and thus the solution were not compiling.  It’s good to be mindful of gotchas like this so that you’re not refactoring, thinking that everything is fine, when in reality you’re not building/passing.  This is exactly the problem with non-TDD development — you’re throwing a lot of code around without verification that what you’re doing isn’t creating problems.
  • It’s not worth agonizing over the perfect implementation.  If what you’re doing is better than what existed before, you’re adding value.
  • If you’re working experimentally and trying things out and your tests stay green for a while when you’re doing this, make sure you can get to red.  Take a second and break something as a sanity check.  I can’t tell you how frustrating it is to be working for a while and assume that everything’s good, only to learn that you’d somehow inadvertently made the tests always pass for some trivial reason.


Chess TDD 23: Yak-Shaving with SpecFlow

I’m writing out this paragraph describing what I’m planning to do before recording myself doing anything. This post has been a while in coming because I’ve waffled between whether I should just hand write acceptance/end-to-end tests (what I generally do) or whether I should force you to watch me fumble with a new tool. I’ve opted for the latter, though, if it gets counter-productive, I may scrap that route and go back to what I was doing. But, hopefully this goes well.

Having fleshed out a good bit of the domain model for my Chess move calculator implementation, the plan is to start throwing some actual chess scenarios at it in the form of acceptance tests. The purpose of doing this now is two-fold: (1) drive out additional issues that I may not be hitting with my comparably straightforward unit test cases and (2) start proving in English that this thing works. So, here goes.

Here’s what I accomplish in this clip:

  • Installed SpecFlow plugin to Visual Studio
  • Set SpecFlow up for the ChessTDD project.
  • Wrote a simple acceptance test.
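For those who haven’t seen SpecFlow before, it maps each line of a feature file to a C# method via attributes on a binding class.  The step text, class names, and Board API below are illustrative guesses, not the actual code from the video:

```csharp
[Binding]
public class PawnMovementSteps
{
    private Board _board;
    private IEnumerable<BoardCoordinate> _moves;

    [Given(@"a board with a white pawn at \(2, 2\)")]
    public void GivenABoardWithAWhitePawn()
    {
        _board = new Board();
        _board.AddPiece(new Pawn(true), new BoardCoordinate(2, 2));  // hypothetical API
    }

    [When(@"I ask for the pawn's moves")]
    public void WhenIAskForThePawnsMoves()
    {
        _moves = _board.GetMovesFrom(new BoardCoordinate(2, 2));  // hypothetical API
    }

    [Then(@"the result should contain \(2, 3\)")]
    public void ThenTheResultShouldContain()
    {
        Assert.IsTrue(_moves.Contains(new BoardCoordinate(2, 3)));
    }
}
```

The regex in each attribute matches the step text in the feature file, which is how the plain-English scenario drives actual C# execution.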

Here are some lessons to take away:

  • There is a camp that favors ATDD (acceptance test driven development) or “outside-in TDD” and another that prefers “traditional” or “inside-out TDD.”  I’m not really in either camp, but you might find that you have a preference.  Both approaches are worth understanding.
  • What I’m doing right now, for the moment, is not test driven development.  I’m retrofitting acceptance tests to bolster my coverage and see if the more micro-work I’ve done stands up to real world usage.
  • If you want to see how someone that doesn’t know SpecFlow gets it set up, this is the video for you.
  • If you’ve been tinkering with a test for a while and the test was green the whole time, how do you know you’re done?  With your assert in place as you think it should look, modify it quickly to something that should fail and confirm that it does.  The idea here is to make sure that your modifications to the test have actually had an effect.


Chess TDD 22: Friendly Pieces

This episode saw me make the first distinction between friendly and enemy pieces in determining available moves.  It had also been a while since I’d worked on the series, so I was a little uncomfortable with how well I was reasoning at compile time about what was happening.  The Board class and some of my implementations are getting a little awkward and I’d like to spiffy them up.  However, I think that I’m going to start focusing on writing acceptance tests right now to bolster the correctness of what’s going on.  This will allow me to separate fixing any flaws in my reasoning from making the code more readable, which really are two separate things.

Here’s what I accomplish in this clip:

  • Stopped knight from being able to land on friendly pieces
  • Implemented the concept of capture
  • Stopped allowing jump of any pieces for the purpose of capture

Here are some lessons to take away:

  • Sometimes you think it’s going to be a lot harder than it is to get everything green.  I was expecting a lot of red when I added the restriction that the move target couldn’t contain a piece, but none happened.  Better to know quickly via experiment than spend a lot of time squinting at your code.  It’s important to reason out why you were mistaken, but getting verification first and working backward will generally save time.
  • You can get a little too clever with what you’re doing.  I was trying to get to green by adding sort of a silly hard-coding, and it came back to bite me.  Such is the nuance of “do the simplest thing to go green.”  No matter how many times you do this, there will always be times you fumble or do dumb things.
  • I got a bit sidetracked trying to figure out how to push that base class constructor into the children, but came up empty.  I’m not going to lie — if I weren’t recording for an audience, I would probably have scoured the internet for a way to do that.  If you’re not in a time crunch or you’re willing to do that on your own dime, these can be great learning exercises.
  • As I work with these Linq expressions, you’ll note that I’m not especially concerned about whether I’m iterating over a collection too many times or performance in general.  Not that it’s directly applicable, per se, but I’ve always loved this post from Jeff Atwood.  There are only two things that make me care about performance in the slightest: unhappy customers and unhappy me with bogged down unit tests.  Until one of those things starts happening, I consider performance orders of magnitude less important than readability and just about any other concern you can think of.  I can easily make readable code faster, if necessary.  It’s really hard to make a pile of unreadable crap written “for performance purposes” into something that makes sense.
  • We’re getting fairly close to a full implementation of non-specialty moves (that is, everything except en passant, last-row replacement, and castling), so perhaps it’s time to take a real chess game and model the actual, possible moves to create an acceptance test suite.  Stay tuned.
