Stories about Software


Why I Don’t Inherit from Collection Types

In one of the recent posts in the TDD chess series, I was asked a question that made me stop and muse a little. It had to do with whether to define a custom collection type or simply use an out-of-the-box one, and my thinking went, “well, I’d use an out-of-the-box one unless there were some compelling reason to build my own.” This got me to thinking about something that had gone the way of the Dodo in my coding: building collection types by inheriting from existing collections or else implementing collection interfaces. I used to do that, and now I don’t anymore. At some point, it simply stopped happening. But why?

I never read some blog post or book that convinced me this was stupid, nor do I have any recollection of getting burned by this practice. There was no great moment of falling out — just a long-ago cessation that wasn’t a conscious decision, sort of like the point in your 20s when it stops being a given that you’re going to be out until all hours on a Saturday night. I pondered this here and there for a week or so, and then I figured it out.

TDD happened. Er, well, I started practicing TDD as a matter of course, and, in the time since I started doing that, I never inherited from a collection type. Interesting. So, what’s the relationship?

Well, simply put, the relationship is that inheriting from a collection type just never seems to be the simplest way to get some test passing, and it’s never a compelling refactoring (if I were defining some kind of API library for developers and a custom collection type were a reasonable end product, I would write one, but not for my own development’s sake). Inheriting from a collection type is, in fact, a rather speculative piece of coding. You’re defining some type that you want to use and saying, “you know, it’d probably be handy if I could also do X, Y, and Z to it later.” But I’ve written my opinion about this type of activity.

Doing this also greatly increases the surface area of your class. If I have a type that encapsulates some collection of things and I want to give clients the ability to access these things by a property such as their “Id,” then I define a GetById(int) method (or whatever). But if I then say, “well, you know what, maybe they want to access these things by position or iterate over them or something, so I’ll just inherit from List of Whatever and give them all that goodness.” But yikes! Now I’m responsible for maintaining a concept of ordering, handling the case of out-of-bounds indexing, and all sorts of other unpleasantness. So, I can either punt and delegate to the methods on the type that I’m wrapping, or else I can get rid of the encapsulation altogether and just cobble additional functionality onto List.

But that’s an icky choice. I’m either coughing up all sorts of extra surface area and delegating it to something I’m wrapping (ugly design), or I’m weirdly violating the SRP by extending some collection type to do domain sorts of things. And I’m doing this counter to what my TDD approach would bubble to the surface anyway. I’d never write speculative code at all. Until some client of mine needs something besides GetById(int), GetById(int) is all it’s getting.
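
To make the GetById point concrete: the post is about C#, but here’s a minimal Python sketch of the composition approach (Piece and Roster are hypothetical names, not from any real code in the post). The wrapper exposes only the one method a test has demanded, instead of the dozens of members that inheriting from a list type would drag along.

```python
# Hypothetical illustration of composition over inheriting from a collection.
# (The post discusses C#; this is a Python sketch with made-up names.)

class Piece:
    def __init__(self, id, name):
        self.id = id
        self.name = name


class Roster:
    """Encapsulates a collection, exposing only what clients actually use."""

    def __init__(self, pieces):
        self._pieces = list(pieces)  # composition: the list stays private

    def get_by_id(self, id):
        """The one behavior a test has demanded -- nothing speculative."""
        return next((p for p in self._pieces if p.id == id), None)

# By contrast, `class Roster(list)` would also hand clients append, sort,
# slicing, and index access -- surface area you now have to maintain and test.
```

Until a test demands iteration or index access, nothing else gets added; if one ever does, delegating one method at a time keeps the underlying list an implementation detail.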

This has led me to wonder if, perhaps, there’s some relationship between the rise of TDD and decoupled design and the adage to favor composition over inheritance. Deep inheritance hierarchies across namespaces, assemblies, and even authors create the kind of indirection that makes intention muddy and testing (especially TDD) a chore. I want a class that encapsulates details and provides behavior, because I’m hitting that public API with tests to define behavior. I don’t want some class that’s a List for the most part but with other stuff tacked on — I don’t want to unit test the List class from the library at all.

It’s interesting speculation, but at the end of the day, the reason I don’t inherit from collection types is a derivative of the fact that I think doing so is awkward and I’m leerier and leerier of inheritance as time goes on. I really don’t do it because I never find it helpful to do so, tautological as that may seem. Perhaps if you’re like me and you just stop thinking of it as a tool in your tool chest, you won’t miss it.


The Craftsmen Have it Right: Defining Software Quackery

I’ve fallen a bit behind on listening to some of my go-to programming podcasts of late (I will explain why shortly), but some of my friends were talking at dinner last week about a recent episode of .NET Rocks, featuring Alan Stevens. It presented the question, “Are you a craftsman?” and adopted a contrarian, or self-described “Devil’s Advocate,” take on the movement. As far as episodes of that podcast go, this one was sort of guaranteed to be a lightning rod, and the volume of comments seemed to bear that out, with Bob Martin himself weighing in on the subject.

I listened to this episode a week ago or so, and I’m not staring at the transcript, but here are some of the points I remember Alan making:

  • The craftsmanship movement in some places has turned into a monoculture — a way for the in-crowd to congratulate itself for getting it.
  • The wholesale comparison to guild culture and artisans is generally pretentious.
  • The movement should be inclusive and about building people up, rather than disparaging people.
  • There are lots of people out there not doing the “craftsmanship stuff” and still shipping working code, so who are we to judge?
  • Comparing software development to high-end, artistic craftsmanship, such as that of 30K guitars, devalues the latter**

(**I’ll just mention briefly that I consider this point to be absurd since this is shifting the goalposts completely from the “craft guild” comparison upon which the name is based — medieval craft guilds were responsible for making walls, shoes, and candles — not guitars for billionaire rock stars)

The comments for this episode contain a fair amount of disappointment and outright anger, but neither of those things was what I felt while listening, personally. Stevens is fairly engaging, and he had a number of pithy quotes from industry titans and authors at his disposal, citing, if memory serves, “The Pragmatic Programmer” and “The Mythical Man-Month,” as well as being fairly facile with the “Software Craftsmanship Manifesto.” This is clearly a well-read and knowledgeable industry professional as well as a practiced speaker, and he said a lot of things that I found reasonable, taken individually. But I couldn’t shake a vague sense of the surreal, like the dream sequences in the movie Inception, where Dali-like non-physics lurks at the periphery of your vision. It all seemed to make sense at first blush, but it was all wrong somehow.

I figured it out, and I’ll address that with a series of vignette sections here and hopefully tie them together somewhere this side of intolerable rambling.

On Quackery

One of the reasons I’ve been a bit negligent on my developer podcast listening is that I’ve become absolutely mesmerized by a podcast called “QuackCast,” in which an infectious disease doctor addresses alternative medicines. This man is ruthlessly empirical and relentlessly rational, and even though the subject matter has virtually no overlap with anything I do, I find it fascinating. He provides ongoing rebuttal to all sorts of news about and advocacy for what he calls “SCAM” (supplements, complementary, and alternative medicine), hoping to shine a light on how magic-based many of these approaches are.

Now, it isn’t my aim here to get into some kind of quasi-political fracas about whether prayer helps with surgery or whether aromatherapy cures cancer or whatever — if you have your beliefs, you’re welcome to them. But I will briefly describe one thing that’s so absolutely bat%&$# nuts that I don’t think it’s controversial for me to call it that: homeopathy. Now, if you’re like I was before hearing the podcast and then doing my own research, you probably don’t know exactly what homeopathy is. You probably think that it’s “home remedies” or “holistic treatment” or something else vague. It’s not. It’s much, much more ridiculous. It’s about 150 stops further down the crazy-train, firmly in the city of Quacksville.

You can read more here, but it’s a ‘theory’ of illness and cure that originated prior to the discovery of germs and the development of germ theory. Since that time, it has persisted, remarkably, largely unchanged. The basic axioms are that “like cures like” and “the more you dilute something, the stronger it becomes.” So, for example, if you’re an end-stage alcoholic suffering from cirrhosis of the liver, the best medicine is tequila, but only if you cut it with water so many times that there’s no tequila left in your tequila. So, two wrongs (and one utter violation of the laws of physics as we know them) make not a right, but a nothing (i.e., giving water to someone with cirrhosis). And the fact that homeopathy is predicated upon diluting any actual effects out of existence is probably the reason it has persisted while other historical, magical nonsense like alchemy, bloodletting, lobotomies, and exorcisms has gone extinct — homeopathy is relatively harmless because it’s all placebo, all the time (unless a practitioner makes a mistake and creates a nostrum that actually has some effect, which would be a problem). At the time this was proposed, it was as reasonable as any other contemporary approach to medicine, but in this day and age, it’s utter quackery.

Let’s come back to this later.

I’m OK, You’re OK

In software development, we start out as babes in the woods with an “I’m not OK, you’re OK” mentality. Everyone else knows what they’re doing and we’re lost. This is not healthy — we should be nurtured and not judged until we feel that we’re OK. Likewise, what separates good mentors from Expert Beginners is the adoption of an “I’m OK, you’re OK” attitude versus an “I’m OK, you’re not OK.” The latter is also not good. I write code and I’m OK. You write code and you’re OK. Maybe you’re more experienced than me, or maybe I’m more experienced than you. Either way, that’s OK. I’m OK and you’re OK.

At the end of the day, we’re all writing code that works, or at least trying to. And that’s OK. It’s OK if it doesn’t always work. We’re all trying. Maybe I test my code before checking it in. I’m OK. Maybe I at least compile it. And that’s OK. Maybe you don’t. And that’s OK. You’re OK. I’m OK and you’re OK. As long as we’re all trying, or, if not always trying, at least showing up, we’re all OK. There’s no sense casting aspersions or being critical.

This is the mature, healthy way to regard one another in the industry. Isn’t it? As long as we show up to work for the most part and sometimes write stuff that does stuff, who is anyone to judge? We’re all OK, and that’s OK.

Can Anyone Recommend an Apothecary?

If I’ve learned anything from reading all of the Game of Thrones books, it’s that the Middle Ages were awesome. Dragons and Whitewalkers and all that stuff. But what gets less play in those history textbooks is the rise of the merchant class during that time period. This is due in large part to the emergence of merchant and craft guilds. Craft guilds, specifically, were a fascinating and novel construct that greased the skids for the increased specialization of labor that would later explode in the Industrial Revolution (I think that happens in book 6 of Game of Thrones, but we’ll see when it comes out).

Craft guilds became professional homes for artisans of the time: stonemasons, cobblers, candle makers, bakers, and apothecaries — who knows, perhaps even homeopaths, though they would probably have been drawn and quartered for their suggested treatment of social diseases. The guilds had an arrangement with the towns in which they were situated. The only people in town that could practice the craft were guild members. In exchange, a basic level of quality was guaranteed. You could think of this as a largely benevolent cartel, though I kind of prefer to think of it as a labor union, but with more dragons.

Members of the guild entered as apprentices, where they learned at the feet of masters (and their families generally paid for this education). After completing a long apprenticeship, the aspiring guild member was allowed to practice the craft for a wage as a journeyman, working with (for) other masters to ensure that knowledge cross-pollination occurred. With enough savings and proof of his “mastery” (which I think may be the origin of the word “masterpiece”), the journeyman could become a master and open his own business. The guild policed its own ranks for minimum quality, forcing its members to redo, pro bono, anything they produced that was not up to snuff. The guild also acted as a unit against anyone selling knock-offs out of a trench coat on the street corner. (The DVD makers’ guild was known in particular for this.) The self-policing guild offered a minimum standard for quality to the townsfolk and, in return, it was the only gig in town.

In practice, this meant a stamping out of impostors, charlatans, cheaters, and cranks. It meant that standards existed and that the general populace could depend, to some degree, on a predictable exchange of value. It also meant that there were accepted tools of the trade, processes and hoops through which to jump, internal political games to be played, and a decent amount of monoculture. And it probably meant that the world progressed at the pace set by high achievers, if not geniuses. After all, the market economy had not yet been invented to reward wunderkinds, outsized influencers, the lucky, and the radical innovators. Improvement happened, to be sure, but it happened at a pace approved by a council of masters who were ultimately men, and men with egos.

So, this is probably the perfect metaphor for software development. Right? It seems like it’d be good to strive for a minimum level of quality — a set of standards, if you will. It seems like it’d be good if there were some kind of accreditation process, perhaps more accurate than a CS degree and less myopic than a certification, to demonstrate that someone was competent at software development. It seems like it’d be good for aspiring/new software developers to have a lot of one-on-one learning time with experienced developers who were really good at what they were doing. It seems like it’d be good for those initiates then to travel around some, broadening their experience and spreading their ideas. But, wait a second… we live in a very non-insular world, work in a global market economy, and, libertarians that we are, we’d sooner die than unionize. And writing software isn’t “craftsmanship” the way that making candles, pies, or shoes is.

Crap! We were doing so well there for a while, and there’s nothing more irritating than having to throw out your babies every time they dirty up their bath water. This seems to happen with all of my metaphors if I dissect them enough.

Software Quackery

Like medicine, artisanship, and even later-20th-century pop psychology, software development has sort of lurched and hiccuped along in its (relatively short) history. Things that were widely done in the past have fallen out of favor. And while a lot of our industry practices tend to be somewhat cyclical (preference for thin versus thick clients, for instance), some ideas do stick around. We make progress. We can stand, to some degree, on the shoulders of those who came before us and say that code at the GOTO level of abstraction does not scale and that shortening feedback loops and catching mistakes early are examples of preferable practices to their alternatives. We learn from our mistakes as individuals and as a collective. The bar inches higher. Or sometimes it stays stubbornly stagnant for a while and then makes a jump, the way the field of medicine did with practices like hand-washing prior to surgery (except for homeopathic surgery to remove gangrenous limbs, in which case hand-washing is a strict no-no).

Movements emerge in fields, and they don’t emerge in a vacuum. They emerge in contrast to a prevailing trend or following a new, avant-garde idea. They are born out of words like “never again” as veterans of some practice look over the scars it has inflicted upon them. They become zeitgeists of the period before later being described as historic, or perhaps even quaint. In our field, I can think of some obvious examples. The Agile Manifesto and its subsequent movement springs to mind. As do the Software Craftsmanship, well, manifesto, and movement. Those are biggies, but there are plenty of others (everything on the desktop, no, wait, let’s move it to the browser, no, wait, let’s put it on people’s phones!)

So wherefore art thou, Software Craftsmanship? Well, I might suggest that the Software Craftsmanship movement exists not in a vacuum and not to make those participating feel good about themselves after endless conferences, talks, and meetups filled with navel gazing. It’s not about condescension or creating “one true way.” It’s not about claiming that software is some kind of rarefied art form, nor is it about aesthetics or the “perfect code.” It’s not about saying that there’s one true MV* pattern to rule them all or that your unit tests need exactly one assert statement. Rather, at its core, I believe Software Craftsmanship is a simple refusal to tolerate Software Quackery any longer.

Once upon a time, back when PHP was new, we, as an industry, edited code on production servers, just as the founder of homeopathy thought that eating bark would cure malaria back in the 18th century. However, a lot of time, study, experimentation, and experience later, we came to the conclusion, as an industry, that hand-editing code in production is a bad idea, just as medical science later came to understand why malaria occurred and how to prevent it. So now, it’s not pretentious, nor is it beyond the pale, for us, as an industry, to look upon software consultants editing code in production as quacks, any more than it is to look upon medical practitioners handing out bark-water cocktails to treat malaria as quacks. We’re OK, but you’re not OK if you’re doing that, and it’s OK for us to point out that you’re not OK.

It’s tempting to adopt a live and let live mentality and to be contrarian and say, “hey, we’re all trying to fight the disease, amirite, so let’s not be high and mighty,” but the fact of the matter is that treating malaria with bark and water is grossly negligent in this day and age, and it’s quackery of the highest order. So the Software Craftsmanship movement takes a page from the craft guilds of yore, and says, “you know, we should establish some minimum collective standards of competence, have some notion of internal quality policing, encourage aspiring members to learn at the hands of proven veterans, and develop the ability to say, ‘I’m sorry, but that’s just not good enough anymore.'” Yes, fine, if you ride the metaphor hard enough, it breaks down, but that’s really not the point. The point is that the movement is attempting to raise the bar from a world of barbaric medicine via unguents, spells, dances, and hitting people with rocks, to a world of medicine via the scientific method.

Back to the Podcast

As I mentioned earlier, I felt neither angry nor disappointed listening to Alan Stevens. I liked the conversation and at least parts of what he said. I picked up a few interesting quotes from authors and thinkers that I hadn’t heard previously, and his tone was relatively humble and disarmingly self-deprecating. And there was certainly a kind of populist, “don’t look down on the dark matter developers” vibe. So what was responsible for my surreal feeling? What was ‘wrong’ that I figured out? I could sum it up in a simple phrase when it hit me: it felt like Stevens was pandering to what he thought was a low-brow audience.

What I mean is, here was a man, clearly knowledgeable about the industry, armed with impressive quotes and enough cachet to be asked to appear on .NET Rocks, telling the listening population a very odd thing. Sure, he has a highfalutin Harvard doctorate like all those other doctors, but unlike them, he’s here to tell you that you don’t need any kind of fancy degree or medicine or machine to treat your own “carcinoma” or, as real people say, butt cancer — all you need is your own know-how, some Kleenex, and a bucket of water. Or, in software terms, it’s okay if you ship crap, so long as you have a good attitude. I mean, we can’t all be expected to do a good job, can we? He even came out and said (paraphrased, though I remember the message very clearly) something along the lines of “sometimes mediocrity is good.” So, forget all of these fancy best practices and be proud of yourself if that 80,000-line classic ASP file you hand-edit in production manages not to kill anyone.

The underlying message of the show wasn’t any substantive indictment of the Software Craftsmanship movement, but an endorsement of Software Quackery. I mean, sure, there are good practices and bad practices, but let’s not get all high and mighty just because some doctors don’t wash their hands before operating on you — I mean, you’d rather have all of the colors of the competence rainbow than some surgeon hand-washing monoculture, right? The reference to Dreyfus really brought it full-weird-circle for me, though, because championing Software Quackery is generally the province of Expert Beginners — not Experts. So, I’m sorry, but I just don’t buy it. We can and should have some kind of minimum standard that isn’t a goody bag for just showing up. Please, join me in a polite refusal to tolerate Software Quackery — it just doesn’t cut it anymore.


TDD Chess Game Part 2

Alright, welcome to the inaugural video post from my blog. Due to several requests over Twitter, comments, etc., I decided to use my Pluralsight recording setup to record myself actually coding for this series instead of just posting a lot of snippets of code. It was actually a good bit of fun and sort of surreal to do. I just coded for the series as I normally would, except that I turned the screen capture on while I was coding. A few days later, I watched the video and recorded narration for it as I watched, which is why you’ll hear me sometimes struggling to recall exactly what I did.

The Pluralsight recordings are obviously a lot more polished — I tend to script those out to varying degrees and re-record until I have pretty good takes. This was a different animal; I just watched myself code and kind of narrated what I was doing, pauses, stupid mistakes, and all. My goal is that it will feel like we’re pair programming, with me driving and explaining as I go: informal, conversational, etc.

Here’s what I accomplish in this particular video, not necessarily in order:

  • Eliminated magic numbers in Board class.
  • Got rid of x coordinate/y coordinate arguments in favor of an immutable type called BoardCoordinate.
  • Cleaned up a unit test with two asserts.
  • Got rid of the ‘cheating’ approach of returning a tuple of int, int.
  • Made the GetPossibleMoves method return an enumeration of moves instead of a single move.
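
To make a couple of those bullet points concrete, here’s a rough sketch of the two central refactorings: an immutable coordinate type replacing loose x/y primitives, and GetPossibleMoves returning a collection of moves rather than a single int/int tuple. The real code is C#; this Python version uses guessed method bodies (a pawn-like one- or two-square advance) purely for illustration.

```python
# Rough Python sketch of the refactorings above; the actual project is C#,
# and the move logic here is invented for the sake of the example.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen=True makes instances immutable
class BoardCoordinate:
    x: int
    y: int


def get_possible_moves(origin):
    """Return an enumeration of moves instead of a single (int, int) tuple."""
    return [BoardCoordinate(origin.x, origin.y + 1),
            BoardCoordinate(origin.x, origin.y + 2)]
```

A frozen dataclass also generates value-based equality and hashing, which is exactly what makes a small coordinate type pleasant to pass around and assert against in tests.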

And, here are some lessons to take away from this, both instructional from me and by watching me make mistakes:

  • Passing the same primitive/value types around your code everywhere (e.g. xCoordinate, yCoordinate) is a code smell called “primitive obsession” and it often indicates that you have something that should be a type in your domain. Don’t let this infect your code.
  • You can’t initialize properties in a non-default constructor (I looked up the whys and wherefores here after not remembering exactly why while recording audio and video).
  • Having lots of value type parameter and return values instead of domain concepts leads to constant small confusions that add up to lots of wasted time. Eliminate these as early in your design as possible to minimize this.
  • Sometimes you’ll create a failing test and then try something to make it pass that doesn’t work. This indicates that you’re not clear on what’s going on with the code, and it’s good that you’re following TDD so you catch your confusion as early as possible.
  • If you write some code to get a red test to pass, and it doesn’t work, and then you discover the problem was with your test rather than the production code, don’t leave the changes you made to the production code in there, even if the test is green. That code wasn’t necessary, and you should never have so much as a single line of code in your code base that you didn’t put in for reasons you can clearly explain. “Meh, it’s green, so whatever” is unacceptable. At every moment you should know exactly why your tests are red if they’re red, green if they’re green, or not compiling if the code doesn’t compile. If you’ve written code that you don’t understand, research it or delete it.
  • No matter how long you’ve been doing this, you’re still going to do dumb things. Accept it, and optimize your process to minimize the amount of wasted time your mistakes cause (TDD is an excellent way to do this).

So, here’s the video. Enjoy!

A couple of housekeeping notes. First, you should watch the video in full screen, ideally at 1080p (click the little “gear” icon between “CC” and “YouTube” at the bottom of the video screen). 720p will work but may be a touch blurry; at lower resolutions, you won’t see what’s going on. Second, if there’s interest, I can keep the source for this on GitHub as I work on it. The videos will lag behind the source, though (for instance, I’ve already done the coding for part 3 in the series — just not the audio or the post, yet). Drop me a comment or tweet at me or something if you’d like to see the code as it emerges also — adding it to GitHub isn’t exactly difficult, but I won’t bother unless there’s interest.


Meetings and Introverts: Strangers in Strange Lands

I’ll admit as I type the first sentence of this post that I don’t know whether this will conclude a two-part “mini-series” or whether I’ll feel compelled to write further posts. But I wanted to write the follow up that I hinted at to the post I wrote about introversion for programmers (well, specifically me). Tl;dr refresher of that post is that social situations are exhausting for me because of their inherent unpredictability as compared to something like the feedback loop of a program that I’m writing (or even the easily curated, asynchronous interaction of a social media vehicle like Twitter). The subjects I left for this post were “Erik as a problem solver and pattern matcher,” consensus meetings, and two exceptional social situations that I don’t find tiring. The challenge will be spinning these things into a coherent narrative, but I’ll take a crack at it.

Whenever I look in my Outlook calendar and see something like “Meeting to Discuss Issue Escalation Strategy,” I am struck with a surprisingly profound feeling that life is filled with senseless waste, the way one might look in dismay at his sunglasses floating down a river into which he accidentally dropped them. I see an hour or two of my life drifting away with no immediately obvious reclamation strategy. My hypothesis is that this is the sort of standard introvert take on what I’ll call “consensus meetings,” rather than what many programmers seem to think of as a programmer take on them. As Paul Graham points out in one of my all-time favorite posts, “when you’re operating on [a programmer’s] schedule, meetings are a disaster.” But I’m not really a maker these days anymore; for the time being, I’m a manager. And I still find these meetings to be a disaster.

Extroverts draw energy from social situations and become invigorated, while introverts spend energy and become exhausted. And when I’m talking about social situations, I mean drinks and bowling with a group of friends. Introverts like me enjoy these nights but find them tiring. Now, apply this same sort of thinking to adversarial situations that are veritable clinics in bike-shedding. A bunch of introverts and extroverts gather together and set about deciding what the organizational flow chart of issue escalation should look like. Should it start at Tier 1 support and be escalated to the manager of that group, then over to internal operations and on up to Bill in DevOps? Or should it go through Susan in Accounting, because she used to work in DevOps and really has her finger on the pulse? Maybe we do that for 2 months and then go to Bill because that’s not really sustainable in the long term, but it’s good for now. And we should probably have everyone send out a quick email any time it affects the Initrode account. And… ugh, I can’t type anymore.

So here sit a bunch of extroverts and me. The extroverts love this. People in general love having opinions and telling people their opinions. (I’m not above this — you’ve been reading my rants here for years, so there’s clearly no high ground for me to claim.) But it’s the extroverts who draw energy from this exchange and work themselves into a lather to leave their marks on the eventual end product via this back and forth. The longer this conversation drags on, the more they want to interject with their opinions, wisdom, and knowledge. The more trivial and detailed the discussion becomes, the more they get their adrenaline up and enjoy the thrill of the hunt.

I, on the other hand, check out almost immediately. From an organizational realpolitik perspective, these meetings are typically full of sound and fury, signifying nothing. The initial meeting organizer turns on the firehose and then quickly loses control of it as the entire affair devolves into a cartoonish torrent of ideas being sprayed around the room while the hose snakes, thrashes, and contorts with no guiding hand. Nobody is really capturing all of this, so the extroverts leave the meeting flush with the satisfaction of having shouted their opinions at each other, but most likely having neglected to agree on anything. But my inclination to check out goes deeper than the fact that nothing is particularly likely to be accomplished; it’s that neither the forum nor the ideas and opinions are interesting or important to me.


I earnestly apologize if this sounds arrogant or stand-offish, but it’s the honest truth. And this is where the part about me being an introverted problem solver and pattern-matcher comes in. The meeting I want to have is one where I come prepared with statistical analysis and data about the most efficient flows of information in triage scenarios. I want performance records and response times for Bill and Susan, including in her former role in the case of the latter. I want to have synthesized that data looking for patterns that coincide with issue types and resolution rates, and I want to make a recommendation on the basis of all of that. To me, those are table stakes to the meeting. Whoever has the best such analysis should clearly have his or her plan implemented.

But that’s not what happens in extrovert meetings. As soon as the meeting organizer loses control of the firehose, we’ve entered the realm of utter unpredictability. I start to present my case study and the patterns I’ve noticed, and then someone interrupts to ask if I captured that data using the new or the old ticketing system. And, by the way, what PowerPoint template am I using, because it’s really snazzy? And, anyway, the thing about Susan is that she’s really not as much of a people person, but Doug kind of is. Now the extroverts are firmly in command. All prior analysis goes out the window, and, as people start jabbering over one another, reasoned analysis and facts are quite irrevocably replaced with opinions, speculation, gossip, and non sequitur in general. The conversation floats gently downstream and washes up on a distant shore when everyone decides that it’s time for lunch. All of the analysis… unconsidered and largely ignored.

And that, the extroverts taking over and leaving me to space out, is the best-case scenario. In the worst-case scenario, they start peppering me with a series of off-topic gotcha questions to which I have to reply, “I don’t know off the top of my head, but I can look into it later.” This puts me at a huge disadvantage because extroverts, buoyed by the rush of the occasion, have no qualms about guessing, fudging, hand-waving, or otherwise manufacturing ‘analysis’ out of thin air. When things take this kind of turn, and someone else “wins the meeting,” it’s even more exhausting.

Regardless of which kind of meeting it is, though, the result is usually the same. After lunch, once everyone has had a chance to forget the particulars of the discussion, it’s time to email the real decision maker, or chat one-on-one with that person, and re-present the analysis for consideration. Usually at that point, the analysis wins the day or at least heavily informs the decision. The meeting robbed me of an hour of my life and accomplished nothing, just as I knew it would when I looked sadly at my Outlook calendar that morning.

There are two kinds of meeting that have no chance to fit this pattern, however (I’m omitting from consideration meetings that are actually policed reasonably by a moderator to keep things on-agenda, since those are far more rare than they should be): meetings where I’m passively listening to a single presenter, and meetings where I’m actively presenting to the group. It’s not especially surprising that I don’t find the former kind exhausting, since it’s somewhat akin to watching a movie, but the latter is, apparently, somewhat interesting. Presenting is not exhausting to me the way that a night out at a party is exhausting. There are sometimes pre-talk jitters depending on the venue, but the talk itself is entirely predictable to me. I control exactly what’s going to be said and shown, and the speed at which I’ll progress. There is a mild element of unpredictability during the Q&A, but as the MC for the talk, you’re usually pretty well in control of that, too. So, that is why I find typical corporate meetings more exhausting than presenting in front of groups.

A strange thing, that. But I think in this light it’s somewhat understandable. Having reasoned analysis, cogent arguments, and a plan is the way to bring as much predictability (and, in my opinion, potential for being productive) to the table as possible. For me, it’s also the way most likely to keep the day’s meetings from sucking the life and productivity right out of you.



TDD and Modeling a Chess Game

For the first post in this series, I’m responding to an email I received asking for help writing an application that allows its users to play a game of chess, with specific emphasis on a feature in which players could see which moves were available for a given piece. The emailer cited a couple of old posts of mine in which I touched on abstractions and the use of TDD. He has just earned a degree in CS and is looking for his first pure programming job, and it warms my heart to hear that he’s interested in “doing it right” in the sense of taking a craftsmanship-style approach (teaching himself TDD, reading about design patterns, etc.).

Modeling a game of chess is actually an awesome way to demonstrate some subtle but important points, so I’m thrilled about this question. I’m going to take the opportunity it presents to cover those points first, so please bear with me.

TDD does not mean no up-front planning!

I’ve heard this canard with some frequency, and it’s really not true at all (at least when done “right” as I envision it). A client or boss doesn’t approach you and say, “hey can you build us a website that manages our 12 different kinds of inventory,” and you then respond by firing up your IDE, writing a unit test and seeing where the chips fall from there. What you do instead is that you start to plan — white boards, diagrams, conversations, requirements definition, etc.

The most important outcome of this sort of planning to me is that you’ll start to decompose the system into logical boundaries, which also means decomposing it into smaller, more manageable problems. This might include defining various bounded contexts, application layers, plugins or modules, etc. If you iterate enough this way (e.g. modules to namespaces, namespaces to classes), you’ll eventually find yourself with a broad design in place and with enough specificity that you can start writing tests that describe meaningful behavior. It’s hard to be too specific here, but suffice it to say that using TDD or any other approach to coding only makes sense when you’ve done enough planning to have a good, sane starting point for a system.

Slap your hand if you’re thinking about 2-D arrays right now.

Okay, so let’s get started with this actual planning. We’ll probably need some kind of concept of chess pieces, but early on we can represent these with something simple, like a character or a chess notation string. So, perhaps we can represent the board by having an 8×8 grid and pieces on it with names like “B_qb” for “black queen’s bishop.” So, we can declare a string[8][8], initialize these strings as appropriate and start writing our tests, right?

Well, wrong, I’d say. You don’t want to spend your project thinking about array indices and string parsing — you want to think about chess, chess boards, and playing chess. Strings and arrays are unimportant implementation details that should be mercifully hidden from your view, encapsulated snugly in some kind of class. The unit tests you’re gearing up to write are a good litmus test for this. Done right, at the end of the project, you should be able to switch from an array to a list to a vector to some kind of insane tree structure, and, as long as the implementation is still viable, all of your unit tests should be green the entire time. If changing from an array to a list renders some of your tests red, then there’s something wrong with either your production code or your unit tests, but, in either case, there’s definitely something wrong with your design.

Make your dependencies flow one way

So, forget about strings and arrays and let’s define some concepts, like “chess board” and “chess piece.” Now these are going to start having behaviors we can sink our TDD teeth into. For instance, we can probably imagine that “chess board” will be instantiated with an integer that defaults to 8 and corresponds to the number of spaces in any given direction. It will also probably have methods that keep track of pieces, such as place(piece, location) or move(piece, location).

How about “chess piece”? Seems like a good case for inheritance, since all of the pieces need to understand how to move in two-dimensional space, but they all move differently. Of course, that’s probably a little premature. But what we can say is that the piece should probably have some kind of method like isLegalMove(board, location). Of course, now board knows about piece and vice-versa, which is kind of icky. When two classes know about one another, you’ve got such a high degree of coupling that you might as well smash them into one class, and suddenly testing and modeling get hard.

One way around this is to decide on a direction for dependencies to flow and let that guide your design. If class A knows about B, then you seek out a design where B knows nothing about A. So, in our case, should piece know about board or vice-versa? Well, I’d submit that board is essentially a generic collection of piece, so that’s a reasonable dependency direction. Of course, if we’re doing that, it means that the piece can’t know about a board (in the same way that you’d have a list as a means of housing integers without the integer class knowing about the list class).

Model the real world (but not necessarily for the reasons you’d think)

This puts us in a bit of an awkward position. If the piece doesn’t know about the board, how can we figure out whether it’s moving off the edge of the board or whether other pieces are in the way? There’s no way around that, right? And wasn’t the whole point of OOP to model the real world? And in the real world the pieces touch the board and vice versa, so doesn’t that mean they should know about each other?

Well, maybe we’re getting a little ahead of ourselves here. Let’s think about an actual game of chess between two humans. Sure, there’s the board and the pieces, but it only ends there if you’re picturing a game of chess on your smart phone. In real life, you’re picking pieces up and moving them. If you move a piece forward three spaces and there’s a piece right in front of it, you’ll knock that piece over. Similarly, if you execute “move ahead 25 spaces,” you’ll drop the piece in your opponent’s lap. So really, it isn’t the board preventing you from making illegal moves, per se. You have some kind of move-legality calculator in your head that takes into account what kind of piece you’re talking about and where it’s located on the board, and, based on those inputs, you know whether a move is legal.

So, my advice here is a little more tempered than what a lot of grizzled OOP veterans might offer. Model the real world not because it’s the OOP thing to do or any dogmatic consideration like that. Model the real world because it often jolts you out of a code-centric/procedural way of thinking and because it can make your conceptual model of the problem easier to reason about. An example of the latter is the idea that we’re talking about pieces and boards instead of the 2-D arrays and strings I mentioned a few sections back.

Down to Business

I’m doing this as an exercise where I’m just freewheeling and designing as if I were implementing this problem right now (and just taking a little extra time to note my reasoning). So you’ll have to trust that I’m not sneaking in hours of whiteboarding. I mention this only because the above thinking (deciding on a chess piece, a board, and some concept of a legality evaluator class) is going to be the sum total of my up-front thinking before starting to let the actual process of TDD guide me a bit. It’s critical to recognize that this isn’t zero planning, and I might actually be doing a lot less up-front reasoning than an OOP/TDD novice would, simply because I’m well practiced at charting out workable designs from the get-go and keeping them flexible and clean as I go. So, without further ado, let’s start coding. (In the future, I might start a YouTube channel and use my Pluralsight setup to record such coding sessions, if any of you are interested.)

First of all, I’m going to create a board class and a piece class. The piece class is going to be a placeholder and the board is just going to hold pieces, for the most part, and toss exceptions for invalid accesses. I think the first thing that I need to be able to do is place a piece on the board. So, I’m going to write this test:
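Something along these lines, in MSTest with the nested-class structure described just below (the Board and Pawn classes and the AddPiece(piece, x, y) signature are illustrative choices for this sketch, not a canonical API):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class BoardTest
{
    [TestClass]
    public class AddPiece
    {
        [TestMethod]
        public void Does_Not_Throw_For_A_Pawn_On_A_Valid_Square()
        {
            var board = new Board();

            // Passes as long as no exception escapes.
            board.AddPiece(new Pawn(), 1, 1);
        }
    }
}
```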

First of all, I’ll mention that I’m following the naming/structuring convention that I picked up from Phil Haack some time back, and that’s why I have the nested classes. Besides that, this is pretty vanilla fare. The pawn class is literally a do-nothing placeholder at this point, and now I just have to make this test not throw an exception, which is pretty easy. I just define the method:
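A sketch of that do-nothing starting point (names illustrative, as above):

```csharp
// Literally nothing but a placeholder for now.
public class Pawn { }

public class Board
{
    // Just enough to compile and let the test pass; no behavior yet.
    public void AddPiece(Pawn piece, int x, int y) { }
}
```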

Woohoo! Passing test. But that’s a pretty lame test, so let’s do something else. What good is testing a state-mutating method if you don’t have a state-querying method? (Meaning, if I’m going to write to something, it’s pointless unless someone or something can also read from it.) Let’s define another test method:
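A sketch of such a test, assuming a GetPiece(x, y) query method as the mirror of AddPiece (again, my illustrative names):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class GetPiece
{
    [TestMethod]
    public void Returns_Piece_That_Was_Added_At_That_Location()
    {
        var board = new Board();
        var pawn = new Pawn();
        board.AddPiece(pawn, 1, 1);

        // Reading back what we wrote is what makes AddPiece meaningful.
        Assert.AreEqual(pawn, board.GetPiece(1, 1));
    }
}
```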

Alright, now we probably have to make a non (less) stupid implementation. Let’s make this new test pass.
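A minimal sketch that makes both tests pass, hiding an array inside Board (the 8x8 dimensions and member names are my assumptions):

```csharp
// Placeholder piece, unchanged from before.
public class Pawn { }

public class Board
{
    // The array is a hidden implementation detail; nothing outside
    // this class knows or cares that it exists.
    private readonly Pawn[,] _pieces = new Pawn[8, 8];

    public void AddPiece(Pawn piece, int x, int y)
    {
        _pieces[x, y] = piece;
    }

    public Pawn GetPiece(int x, int y)
    {
        return _pieces[x, y];
    }
}
```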

Well, there we go. Now that everything is green, let’s refactor the test class a bit:
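One way to sketch that refactoring: pull the common instantiation into a [TestInitialize] method on the outer class, which the nested test classes inherit (member names are my illustrative choices):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class BoardTest
{
    private Board Target { get; set; }
    private Pawn Pawn { get; set; }

    [TestInitialize]
    public void BeforeEachTest()
    {
        // Shared setup -- no more duplicated instantiation per test.
        Target = new Board();
        Pawn = new Pawn();
    }

    [TestClass]
    public class AddPiece : BoardTest
    {
        [TestMethod]
        public void Does_Not_Throw_For_A_Pawn_On_A_Valid_Square()
        {
            Target.AddPiece(Pawn, 1, 1);
        }
    }

    [TestClass]
    public class GetPiece : BoardTest
    {
        [TestMethod]
        public void Returns_Piece_That_Was_Added_At_That_Location()
        {
            Target.AddPiece(Pawn, 1, 1);

            Assert.AreEqual(Pawn, Target.GetPiece(1, 1));
        }
    }
}
```

The nested classes derive from the outer class, so MSTest runs the inherited [TestInitialize] before each test in them.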

This may seem a little over the top, but I believe in ruthlessly eliminating duplication wherever it occurs, and we were duplicating that instantiation logic in the test class. Eliminating noise in the test methods also makes your intentions clearer, which communicates more effectively what it is you want your code to do.

With that out of the way, let’s define something useful on Pawn as a starting point. Currently, it’s simply a class declaration followed by “{}”. Here’s the test that I’m going to write:
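A sketch of that test, assuming a GetMovesFrom(x, y) method on Pawn that returns a Tuple<int, int> destination square (the method name and return type are my illustrative choices):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PawnTest
{
    [TestClass]
    public class GetMovesFrom
    {
        [TestMethod]
        public void Returns_Square_Directly_Ahead()
        {
            var pawn = new Pawn();

            var move = pawn.GetMovesFrom(1, 1);

            // A pawn's basic move: same file, one rank ahead.
            Assert.AreEqual(1, move.Item1);
            Assert.AreEqual(2, move.Item2);
        }
    }
}
```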

Alright, now let’s make this thing pass:
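A sketch of the quick one-liner (same illustrative names as in the test):

```csharp
using System;

public class Pawn
{
    // One square straight ahead -- the pawn's most basic move.
    public Tuple<int, int> GetMovesFrom(int x, int y)
    {
        return new Tuple<int, int>(x, y + 1);
    }
}
```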

You might object slightly to what I’ve done here, jumping the gun a bit past the simplest thing that could possibly work. Well, I’ll remind you that simplest does not mean stupidest. There’s no need to return a constant here, since the movement of a pawn (move ahead one square) isn’t exactly rocket science. “Do the simplest thing to make the tests pass” is the discipline, and in either case, we’re adding a quick one-liner.

Now, there are all kinds of problems with what we’ve got so far. We’re going to need to differentiate white pieces from black pieces later. GetMoves (plural) returns a single tuple, and, while Tuple is a cool language construct, that clearly needs to go in favor of some kind of value object, and quickly. The board takes an array of pawns instead of a more general piece concept. The most recent unit test has two asserts in it, which isn’t very good. But in spite of all of those shortcomings (which are going to be my to-do list for the next post), we now have a board to which you can add pieces and a piece that understands its most basic (and common) movement.

The take-away that I want you to have from this post is two-fold:

  1. Do some up-front planning, particularly as it relates to real-world conceptualizing and dependency flow.
  2. It’s a journey, and you’re not going to get there all at once. Your first few red-green-refactor cycles will leave you with something incomplete to the point of embarrassment. But that’s okay, because TDD is about making incremental progress while your growing test suite guards against regressions. We can only improve.