DaedTech

Stories about Software


Habits that Pay Off for Programmers

Editorial Note: I originally wrote this post for the LogEntries blog.

I would like to clarify something immediately with this post.  Its title does not contain the number 7, nor does it talk about effectiveness.  That was intentional.  I have no interest in trying to piggy-back on Stephen Covey’s book title to earn clicks, which would make this post a dime a dozen.

In fact, a Google search of “good habits for programmers” yields just such an appropriation, and it also yields exactly the sorts of articles and posts that you might expect.  They have a number in the title, and they talk about what makes programmers good at programming.

But I’d like to focus on a slightly different angle today.  Rather than talk about what makes people better at programming, I’d like to talk about what makes programmers more marketable.  When I say, “habits that pay off,” I mean it literally.

Don’t get me wrong.  Becoming better at programming will certainly correlate with making more money as a programmer.  But this improvement can eventually suffer from diminishing marginal returns.  I’m talking today about practices that will yield the most bang for the buck when it comes time to ask for a raise or seek a new gig.



Secrets of Maintainable Codebases

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, have a look at the tech debt quantification features of the new NDepend version.

You should write maintainable code.  I assume people have told you this, at some point.  The admonishment is as obligatory as it is vague.  So, I’m sure, when you heard this, you didn’t react effusively with, “oh, good idea — thanks!”

If you take to the internet, you won’t need to venture far to find essays, lists, and Stack Exchange questions on the subject.  As you can see, software developers frequently offer opinions on this particular topic.  And I’m no exception; I have little doubt that you could find posts about this on my own blog.

So today, I’d like to take a different tack in talking about maintainable code.  Rather than discuss the code per se, I want to discuss the codebase as a whole.  What are the secrets to maintainable codebases?  What properties do they have, and what can you do to create these properties?

In my travels as a consultant, I see so many codebases that it sometimes seems I’m watching a flip book show of code.  On top of that, I frequently find myself explaining concepts like the cost of code ownership, and regarding code as, for lack of a better term, inventory.  From the perspective of those paying the bills, maintainable code doesn’t mean “code developers like to work with” but rather “code that minimizes spend for future changes.”

Yes, that money includes developer labor.  But it also includes concerns like deployment effort, defect cycle time, universality of skills required, and plenty more.  Maintainable codebases mean easy, fast, risk-free, and cheap change.  Here are some characteristics I look for in the field when assessing this property.  Some of them may seem a bit off the beaten path.



Rethinking Assert with Shouldly

I was doing a bit of work with Tweetdeck open when I noticed this tweet.

I’ve been using Assert.IsTrue() and its friends for years, so you might think I would take offense.  But instead, this struck me as an interesting and provocative statement.  I scanned through the conversation this started and it got me to thinking.

Over the years, I’ve evolved my unit tests heavily in the name of readability.  I’ve come to favor mocking frameworks on the basis of having fluent APIs and readable setup.  On a pointer from Steve Smith, I’ve adopted his philosophy and approach to naming unit test classes and tests.  The arrange and act portions of my tests have become highly readable and optimized for comprehension.

But then, there’s Assert.AreEqual.  Same as it ever was.
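To make that contrast concrete, here is a minimal sketch of the same assertion written both ways.  The Board class and its GetMoveCountFrom method are invented for illustration; only Assert.AreEqual (MSTest) and ShouldBe (Shouldly) come from the actual libraries.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Shouldly;

// Hypothetical subject under test, stubbed inline so the example compiles on its own.
public class Board
{
    public int GetMoveCountFrom(string square) => 2;  // stub: an unmoved pawn has two forward moves
}

[TestClass]
public class Board_Tests
{
    [TestMethod]
    public void GetMoveCountFrom_Returns_Two_For_An_Unmoved_Pawn()
    {
        var board = new Board();
        var moveCount = board.GetMoveCountFrom("a2");

        // Classic MSTest style: expected comes first, actual second, and the two are easy to transpose.
        Assert.AreEqual(2, moveCount);

        // Shouldly style: reads left to right, and a failure message echoes the expression itself.
        moveCount.ShouldBe(2);
    }
}
```

The second form reads like the sentence you would say out loud, which is the whole appeal of the fluent approach.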




Chess TDD 62: Finishing Chess TDD

You might not have expected to read this, and I honestly wasn’t really expecting to write it, but here we are.  I’m going to call it and announce that I’m finishing the Chess TDD series.  It’s been a lot of fun and gone on for a long time, though I’m not actually done with the codebase (more on that shortly).

My original intention, after finishing the initial implementation with acceptance and unit tests, was to walk through some actual games, by way of “field testing,” so to speak.  I thought this would simulate QA to some extent — at least as well as you can with a one-person operation.  And, with this episode, I’ve shown a tiny taste of what that could look like.  And, I’ve realized, I could go on this way, but that would start to get pretty boring.  And, I’ve also realized that it would be irresponsible.

What I mean is that plugging laboriously through every piece on the board after every move would be showing you a “work harder, not smarter” approach that I don’t advocate.  I’d said that I would save ingesting chess games and using algebraic notation for an upcoming product, and that is true — I plan to do that.  But what I shouldn’t be doing in the interim is saving the smart work for the product and treating you to brainless, boring work in the meantime.

So with that in mind, I brought the work I was doing to a graceful close, wrapping up the feature where initial board positioning was shown to work, and using red-green-refactor to do it.

You’ll notice in the video that the Trello board is not empty by a long shot.  There’s a long list of stuff I’d like to tweak and make better as well as peripheral features that I’d like to add.  But, to use this as a metaphor for business, I have a product that (as best I can tell) successfully tells you all moves available to any piece, and that was the original charter.  It’s shippable, and, better yet, it’s covered with a robust test suite that will make it easy to build on.

What should you look for in the product?  Here are some ideas that I have, off the top (and from Trello).

  • A way to overlay algebraic chess notation for acceptance tests.
  • Replace type checking with a polymorphic scheme (see the sketch after this list).
  • Improve the object graph with better responsibilities (e.g. removing en passant logic from Board).
  • Apply static analysis tooling to address shortcomings in the code.
  • Make sure that piece movement also works properly (currently it probably wouldn’t for castling/en passant).
  • Develop a scheme for ingesting chess games and verifying that our possibilities/play match.
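On the type checking point, the general shape of that refactoring looks something like the sketch below.  This is an illustration only; the names Piece, Pawn, and BoardCoordinate are assumptions about the codebase rather than quotes from it.  The idea is to push movement logic into the pieces themselves so that nothing ever has to switch on concrete piece types.

```csharp
using System.Collections.Generic;

// Illustrative sketch only; Piece, Pawn, and BoardCoordinate are assumed names,
// not verbatim excerpts from the series' codebase.
public struct BoardCoordinate
{
    public int X { get; }
    public int Y { get; }
    public BoardCoordinate(int x, int y) { X = x; Y = y; }
}

public abstract class Piece
{
    // Each piece answers for itself, so callers never need "if (piece is Pawn)" checks.
    public abstract IEnumerable<BoardCoordinate> GetMovesFrom(BoardCoordinate start);
}

public class Pawn : Piece
{
    public override IEnumerable<BoardCoordinate> GetMovesFrom(BoardCoordinate start)
    {
        // Simplified: a white pawn moving up the board, ignoring captures and blocking pieces.
        yield return new BoardCoordinate(start.X, start.Y + 1);
        if (start.Y == 2)
            yield return new BoardCoordinate(start.X, start.Y + 2);
    }
}
```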

In short, what I have in mind is bringing this application along with the same kinds of work I’d advise for the teams that I train, coach, and assess.  That’s how to really make this codebase shine.

I have a couple of things to get off my plate before I productize this, but it’s not going to fall off my radar.  Stay tuned!  And, until then, here is the last of the Chess TDD posts in the format you’re accustomed to.

What I accomplish in this clip:

  • Finish the testing of the initial rows on the board.

Here are some lessons to take away:

  • The new C# language features (as of version 6) are pretty great for making your code more compact and functional in style (see the sketch after this list).
  • Always, always, always make sure that you’re refactoring only when all tests are green.  I’ve experienced the pain of not heeding this advice, and it’s maddening.  This is a “measure twice, cut once” kind of scenario.
  • Clean, consistent abstractions are important for readability.  If you think of them, spend a little extra time to make sure they’re in place.
  • If something goes wrong with the continuous test runner or your test suite in general, pause and fix it when you notice it.  Don’t accept on faith that “everything should be passing.”  Like refactoring when red because “it’s just a trivial change,” this can result in worlds of downstream pain.
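For anyone who hasn’t tried them yet, here is a quick, generic sketch of the sort of C# 6 features I mean: expression-bodied members, the null-conditional operator, string interpolation, and nameof.  The MoveSummary class is invented for illustration and isn’t code from the Chess TDD codebase.

```csharp
using System.Collections.Generic;
using System.Linq;

// Generic illustration of C# 6 features; not code from the series.
public class MoveSummary
{
    private readonly List<string> _moves;

    public MoveSummary(IEnumerable<string> moves)
    {
        // Null-conditional (?.) plus null-coalescing (??) collapse the usual null guard.
        _moves = moves?.ToList() ?? new List<string>();
    }

    // Expression-bodied members keep trivial properties and methods to a single line.
    public int Count => _moves.Count;

    // String interpolation and nameof replace string.Format calls and magic strings.
    public string Describe() => $"{nameof(MoveSummary)} holds {Count} move(s).";
}
```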


Chess TDD 61: Testing an Actual Game

Editorial Note: I was featured on another podcast this week, this one hosted by Pete Shearer.  Click here to give it a listen.  It mostly centers around the expert beginner concept and what it means in the software world.  I had fun recording it, so hopefully you’ll have fun listening.

This post is one where, in earnest, I start testing an actual game.  I don’t get as far as I might like, but the concept is there.  By the end of the episode, I have acceptance tests covering all initial white moves and positions, so that’s a good start.  And, with the test constructs I created, it won’t take long to be able to say the same about the black pieces.

I also learned that building out all moves for an entire chess game would be quite an intense task if done manually.  So, I’d be faced with the choice between recording a lot of grunt work and implementing a sophisticated game parsing scheme, which I’d now consider out of scope.  As a result, I’ll probably try to pick some other, representative scenarios and go through those so that we can wrap the series.

What I accomplish in this clip:

  • Get re-situated after a hiatus and clean up/reorganize old cards.
  • Take care of a few odds and ends and lay the groundwork for the broader acceptance testing.

Here are some lessons to take away:

  • No matter where they occur, try to avoid doing redundant things when you’re programming.
  • If, during the course of your work, you ever find yourself bored or on “auto-pilot,” that’s a smell.  You’re probably working harder instead of smarter.  When you find yourself bored, ask yourself how you might automate or abstract what you’re doing.
  • When you’re writing acceptance tests, it’s important to keep them readable by humans.
  • A seldom-considered benefit to pairing or having someone review your code is that you’ll be less inclined to do a lot of laborious, obtuse work.
  • Asserting things in control flow scopes can be a problem — if you’re at iteration 6 out of 8 in a while loop when things fail, it’s pretty hard to tell that when you’re running the whole test suite.  (See the sketch below.)
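Here’s a contrived illustration of that last point, with invented names throughout.  When the assertion sits inside the loop, a failure reports little more than “expected true, got false”; collecting results and asserting once, with a descriptive message, tells you exactly which iteration went wrong.

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class Loop_Assertion_Examples
{
    // Invented stand-in for the system under test.
    private static bool IsValidSquareIndex(int index) => index >= 0 && index < 64;

    [TestMethod]
    public void Asserting_Inside_The_Loop_Hides_Which_Iteration_Failed()
    {
        for (int i = 0; i < 8; i++)
            Assert.IsTrue(IsValidSquareIndex(i));  // on failure, the report doesn't say which i broke
    }

    [TestMethod]
    public void Asserting_On_Collected_Results_Names_The_Failures()
    {
        var failures = Enumerable.Range(0, 8).Where(i => !IsValidSquareIndex(i)).ToList();

        // One assertion at the end, with a message that lists the offending iterations.
        Assert.AreEqual(0, failures.Count, $"Invalid square indexes: {string.Join(", ", failures)}");
    }
}
```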