DaedTech

Stories about Software


Why I Don’t Inherit from Collection Types

In one of the recent posts in the TDD chess series, I was asked a question that made me stop and muse a little. It had to do with whether to define a custom collection type or simply use an out-of-the-box one, and my thinking went, “well, I’d use an out-of-the-box one unless there were some compelling reason to build my own.” This got me thinking about something that had gone the way of the Dodo in my coding, which was to build collection types by inheriting from existing collections or else implementing collection interfaces. I used to do that, and now I don’t anymore. At some point, it simply stopped happening. But why?

I never read some blog post or book that convinced me this was stupid, nor do I have any recollection of getting burned by this practice. There was no great moment of falling out — just a long-ago cessation that wasn’t a conscious decision, sort of like the point in your 20’s when it stops being a given that you’re going to be out until all hours on a Saturday night. I pondered this here and there for a week or so, and then I figured it out.

TDD happened. Er, well, I started practicing TDD as a matter of course, and, in the time since I started doing that, I never inherited from a collection type. Interesting. So, what’s the relationship?

Well, simply put, the relationship is that inheriting from a collection type just never seems to be the simplest way to get some test passing, and it’s never a compelling refactoring (if I were defining some kind of API library for developers and a custom collection type were a reasonable end-product, I would write one, but not for my own development’s sake). Inheriting from a collection type is, in fact, a rather speculative piece of coding. You’re defining some type that you want to use and saying, “you know, it’d probably be handy if I could also do X, Y, and Z to it later.” But I’ve written my opinion about this type of activity.

Doing this also greatly increases the surface area of your class. If I have a type that encapsulates some collection of things and I want to give clients the ability to access those things by a property such as their “Id,” then I define a GetById(int) method (or whatever). But if I then say, “well, you know what, maybe they want to access these things by position or iterate over them or something, so I’ll just inherit from List of Whatever and give them all that goodness,” then yikes! Now I’m responsible for maintaining a concept of ordering, handling the case of out-of-bounds indexing, and all sorts of other unpleasantness. So I can either punt and delegate to the methods on the type that I’m wrapping, or else I can get rid of the encapsulation altogether and just cobble additional functionality onto List.

But that’s an icky choice. I’m either coughing up all sorts of extra surface area and delegating it to something I’m wrapping (ugly design), or I’m weirdly violating the SRP by extending some collection type to do domain sorts of things. And I’m doing this counter to what my TDD approach would bubble to the surface anyway. I’d never write speculative code at all. Until some client of mine needs something besides GetById(int), GetById(int) is all it’s getting.
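To make the composition alternative concrete, here’s a minimal sketch (the class and member names are mine, invented for illustration, not code from the chess series) of wrapping a list and exposing only the behavior clients have actually asked for:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Composition instead of inheritance: the List<T> is an
// implementation detail, and the public surface area is exactly
// what tests have driven into existence so far -- nothing more.
public class CustomerRepository
{
    private readonly List<Customer> _customers = new List<Customer>();

    public void Add(Customer customer)
    {
        _customers.Add(customer);
    }

    public Customer GetById(int id)
    {
        // Returns null when no match exists; clients haven't asked
        // for indexing, ordering, or enumeration, so none is exposed.
        return _customers.FirstOrDefault(c => c.Id == id);
    }
}
```

If a client later needs iteration, that demand can drive a deliberately chosen method (say, exposing an IEnumerable), rather than inheriting dozens of List members speculatively.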

This has led me to wonder if, perhaps, there’s some relationship between the rise of TDD and decoupled design and the adage to favor composition over inheritance. Deep inheritance hierarchies across namespaces, assemblies, and even authors create the kind of indirection that makes intention muddy and testing (especially TDD) a chore. I want a class that encapsulates details and provides behavior because I’m hitting that public API with tests to define behavior. I don’t want some class that’s a List for the most part but with other stuff tacked on — I don’t want to unit test the List class from the library at all.

It’s interesting speculation, but at the end of the day, the reason I don’t inherit from collection types is a derivative of the fact that I think doing so is awkward and I’m leerier and leerier of inheritance as time goes on. I really don’t do it because I never find it helpful to do so, tautological as that may seem. Perhaps if you’re like me and you just stop thinking of it as a tool in your tool chest, you won’t miss it.


The Craftsmen Have it Right: Defining Software Quackery

I’ve fallen a bit behind on listening to some of my go-to programming podcasts of late (I will explain why shortly), but some of my friends were talking at dinner last week about a recent episode of .NET Rocks, featuring Alan Stevens. It presented the question, “Are you a craftsman?” and adopted a contrarian, or self-described “Devil’s Advocate” take on the movement. As far as episodes of that podcast go, this one was sort of guaranteed to be a lightning rod and the volume of comments seemed to bear that out, with Bob Martin himself weighing in on the subject.

I listened to this episode a week ago or so, and I’m not staring at the transcript, but here are some of the points I remember Alan making:

  • The craftsmanship movement in some places has turned into a monoculture — a way for the in-crowd to congratulate itself for getting it.
  • The wholesale comparison to guild culture and artisans is generally pretentious.
  • The movement should be inclusive and about building people up, rather than disparaging people.
  • There are lots of people out there not doing the “craftsmanship stuff” and still shipping working code, so who are we to judge?
  • Comparing software development to high-end, artistic craftsmanship, such as that of 30K guitars, devalues the latter**

(**I’ll just mention briefly that I consider this point to be absurd since this is shifting the goalposts completely from the “craft guild” comparison upon which the name is based — medieval craft guilds were responsible for making walls, shoes, and candles — not guitars for billionaire rock stars)

The comments for this episode contain a fair amount of disappointment and outright anger, but neither of those things was what I felt while listening, personally. Stevens is fairly engaging, and he had a number of pithy quotes from industry titans and authors at his disposal, citing, if memory serves, “The Pragmatic Programmer” and “The Mythical Man Month” as well as being fairly conversant with the “Software Craftsmanship Manifesto.” This is clearly a well-read and knowledgeable industry professional as well as a practiced speaker, and he said a lot of things that I found reasonable, taken individually. But I couldn’t shake a vague sense of the surreal, like the dream sequences in the movie Inception where Dali-like non-physics lurks at the periphery of your vision. It all seemed to make sense at first blush, but it was all wrong somehow.

I figured it out, and I’ll address that with a series of vignette sections here and hopefully tie them together somewhere this side of intolerable rambling.

On Quackery

One of the reasons I’ve been a bit negligent on my developer podcast listening is that I’ve become absolutely mesmerized by a podcast called “Quack Cast,” in which an infectious diseases doctor addresses alternative medicines. This man is ruthlessly empirical and relentlessly rational, and even though the subject matter has virtually no overlap with anything I do, I find it fascinating. He provides ongoing rebuttal to all sorts of news about and advocacy for what he calls “SCAM” (supplements, complementary, and alternative medicine), hoping to shine a light on how magic-based many of these approaches are.

Now, it isn’t my aim here to get into some kind of quasi-political fracas about whether prayer helps with surgery or whether aromatherapy cures cancer or whatever — if you have your beliefs, you’re welcome to them. But I will briefly describe one thing that’s so absolutely bat%&$# nuts that I don’t think it’s controversial for me to call it that: homeopathy. Now, if you’re like I was before hearing the podcast and then doing my own research, you probably don’t know exactly what homeopathy is. You probably think that it’s “home remedies” or “holistic treatment” or something else vague. It’s not. It’s much, much more ridiculous. It’s about 150 stops further down the crazy-train, firmly in the city of Quacksville.

You can read more here, but it’s a ‘theory’ of illness and cure that originated prior to the discovery of germs and the development of germ theory. Since that time, it has persisted largely and remarkably unchanged. The basic axioms are that “like cures like” and “the more you dilute something, the stronger it becomes.” So, for example, if you’re an end-stage alcoholic suffering from cirrhosis of the liver, the best medicine is tequila, but only if you cut it with water so many times that there’s no tequila left in your tequila. So, two wrongs (and one utter violation of the laws of physics as we know them) make not a right, but a nothing (i.e. giving water to someone with cirrhosis). And the fact that homeopathy is predicated upon diluting any actual effects out of existence is probably the reason it’s persisted while other historical, magical nonsense like alchemy, bloodletting, lobotomies, and exorcisms have gone extinct — homeopathy is relatively harmless because it’s all placebo, all the time (unless a practitioner makes a mistake and creates a nostrum that actually has some effect, which would be a problem). At the time this was proposed, it was as reasonable as any other contemporary approach to medicine, but in this day and age, it’s utter quackery.

Let’s come back to this later.

I’m OK, You’re OK

In software development, we start out as babes in the woods with an “I’m not OK, you’re OK” mentality. Everyone else knows what they’re doing and we’re lost. This is not healthy — we should be nurtured and not judged until we feel that we’re OK. Likewise, what separates good mentors from Expert Beginners is the adoption of an “I’m OK, you’re OK” attitude versus an “I’m OK, you’re not OK.” The latter is also not good. I write code and I’m OK. You write code and you’re OK. Maybe you’re more experienced than me, or maybe I’m more experienced than you. Either way, that’s OK. I’m OK and you’re OK.

At the end of the day, we’re all writing code that works, or at least trying to. And that’s OK. It’s OK if it doesn’t always work. We’re all trying. Maybe I test my code before checking it in. I’m OK. Maybe I at least compile it. And that’s OK. Maybe you don’t. And that’s OK. You’re OK. I’m OK and you’re OK. As long as we’re all trying, or, if not always trying, at least showing up, we’re all OK. There’s no sense casting aspersions or being critical.

This is the mature, healthy way to regard one another in the industry. Isn’t it? As long as we show up to work for the most part and sometimes write stuff that does stuff, who is anyone to judge? We’re all OK, and that’s OK.

Can Anyone Recommend an Apothecary?

If I’ve learned anything from reading all of the Game of Thrones books, it’s that the Middle Ages were awesome. Dragons and Whitewalkers and all that stuff. But what gets less play in those history textbooks is the rise of the merchant class during that time period. This is due in large part to the emergence of merchant and craft guilds. Craft guilds, specifically, were a fascinating and novel construct that greased the skids for the increased specialization of labor that would later explode in the Industrial Revolution (I think that happens in book 6 of Game of Thrones, but we’ll see when it comes out).

Craft guilds became professional homes for artisans of the time: stonemasons, cobblers, candle makers, bakers, and apothecaries — who knows, perhaps even homeopaths, though they would probably have been drawn and quartered for their suggested treatment of social diseases. The guilds had an arrangement with the towns in which they were situated. The only people in town that could practice the craft were guild members. In exchange, a basic level of quality was guaranteed. You could think of this as a largely benevolent cartel, though I kind of prefer to think of it as a labor union, but with more dragons.

Members of the guild entered as apprentices, where they learned at the feet of masters (and their family generally paid for this education). After completing a long apprenticeship, the aspiring guild member was allowed to practice the craft for a wage as a journeyman, working with (for) other masters to ensure that knowledge cross-pollination occurred. With enough savings and proof of his “mastery” (which I think may be the origin of the word “masterpiece”), the journeyman could become a master and open his own business. The guild policed its own ranks for minimum quality, forcing its members to redo, pro bono, anything that they produced not up to snuff. The guild also acted as a unit against anyone selling knock-offs out of a trench-coat on the street corner. (The DVD makers’ guild was known in particular for this.) The self-policing guild offered a minimum standard for quality to the townsfolk and, in return, it was the only gig in town.

In practice, this meant a stamping out of impostors, charlatans, cheaters, and cranks. It meant that standards existed and that the general populace could depend, to some degree, on a predictable exchange of value. It also meant that there were accepted tools of the trade, processes and hoops through which to jump, internal political games to be played, and a decent amount of monoculture. And it probably meant that the world progressed at the pace set by high achievers, if not geniuses. After all, the market economy had not yet been invented to reward wunderkinds, outsized influencers, the lucky, and the radical innovators. Improvement happened, to be sure, but it happened at a pace approved by a council of masters who were, ultimately, men, and men with egos.

So, this is probably the perfect metaphor for software development. Right? It seems like it’d be good to strive for a minimum level of quality — a set of standards, if you will. It seems like it’d be good if there was some kind of accreditation process, perhaps more accurate than a CS degree and less myopic than a certification, to demonstrate that someone was competent at software development. It seems like it’d be good for aspiring/new software developers to have a lot of one-on-one learning time with experienced developers that were really good at what they were doing. It seems like it’d be good for those initiates then to travel around some, broadening their experience and spreading their ideas. But, wait a second… we live in a very non-insular world, work in a global market economy, and, libertarians that we are, we’d sooner die than unionize. And writing software isn’t “craftsmanship,” the way that making candles, pies or shoes is.

Crap! We were doing so well there for a while, and there’s nothing more irritating than having to throw out your babies every time they dirty up their bath water. This seems to happen with all of my metaphors if I dissect them enough.

Software Quackery

Like medicine, artisanship, and even later 20th century pop psychology, software development has sort of lurched and hiccuped along in its (relatively short) history. Things that were widely done in the past have fallen out of favor. And, while a lot of our industry practices tend to be somewhat cyclical (preference for thin versus thick clients, for instance), some ideas do stick around. We make progress. We can stand, to some degree, on the shoulders of those who came before us and say that code at the GOTO level of abstraction does not scale and that shortening feedback loops and catching mistakes early are examples of preferable practices to their alternatives. We learn from our mistakes as individuals and as a collective. The bar inches higher. Or sometimes it stays stubbornly stagnant for a while and then makes a jump, the way that the field of medicine did with practices like hand-washing prior to surgery (except for homeopathic surgery to remove gangrenous limbs, in which case hand-washing is a strict no-no).

Movements emerge in fields, and they don’t emerge in a vacuum. They emerge in contrast to a prevailing trend or following a new, avant-garde idea. They are born out of words like “never again” as veterans of some practice look over the scars it has inflicted upon them. They become zeitgeists of the period before later being described as historic, or perhaps even quaint. In our field, I can think of some obvious examples. The Agile Manifesto and its subsequent movement springs to mind. As do the Software Craftsmanship, well, manifesto, and movement. Those are biggies, but there are plenty of others (everything on the desktop, no, wait, let’s move it to the browser, no, wait, let’s put it on people’s phones!)

So wherefore art thou, Software Craftsmanship? Well, I might suggest that the Software Craftsmanship movement exists not in a vacuum and not to make those participating feel good about themselves after endless conferences, talks, and meetups filled with navel gazing. It’s not about condescension or creating “one true way.” It’s not about claiming that software is some kind of rarefied art form, nor is it about aesthetics or the “perfect code.” It’s not about saying that there’s one true MV* pattern to rule them all or that your unit tests need exactly one assert statement. Rather, at its core, I believe Software Craftsmanship is a simple refusal to tolerate Software Quackery any longer.

Once upon a time, back when PHP was new, we, as an industry, edited code on production servers, just as the founder of homeopathy thought that eating bark would cure malaria back in the 18th century. However, a lot of time, study, experimentation, and experience later, we came to the conclusion, as an industry, that hand-editing code in production is a bad idea, just as medical science later came to understand why malaria occurred and how to prevent it. So now, it’s not pretentious nor is it beyond the pale for us, as an industry, to look upon software consultants editing code in production as quacks, any more than it is to look upon medical practitioners handing out bark-water cocktails to treat malaria as quacks. We’re OK but you’re not OK if you’re doing that, and it’s OK for us to point out that you’re not OK.

It’s tempting to adopt a live and let live mentality and to be contrarian and say, “hey, we’re all trying to fight the disease, amirite, so let’s not be high and mighty,” but the fact of the matter is that treating malaria with bark and water is grossly negligent in this day and age, and it’s quackery of the highest order. So the Software Craftsmanship movement takes a page from the craft guilds of yore, and says, “you know, we should establish some minimum collective standards of competence, have some notion of internal quality policing, encourage aspiring members to learn at the hands of proven veterans, and develop the ability to say, ‘I’m sorry, but that’s just not good enough anymore.'” Yes, fine, if you ride the metaphor hard enough, it breaks down, but that’s really not the point. The point is that the movement is attempting to raise the bar from a world of barbaric medicine via unguents, spells, dances, and hitting people with rocks, to a world of medicine via the scientific method.

Back to the Podcast

As I mentioned earlier, I felt neither angry nor disappointed listening to Alan Stevens. I liked the conversation and at least parts of what he said. I picked up a few interesting quotes from authors and thinkers that I hadn’t heard previously, and his tone was relatively humble and disarmingly self-deprecating. And there was certainly a kind of populist, “don’t look down on the dark matter developers” vibe. So what was responsible for my surreal feeling? What was ‘wrong’ that I figured out? I could sum it up in a simple phrase, when it hit me: it felt like Stevens was pandering to what he thought was a low-brow audience.

What I mean is, here was a man, clearly knowledgeable about the industry, armed with impressive quotes and enough cachet to be asked to appear on .NET Rocks, telling the listening population a very odd thing. Sure, he has a highfalutin Harvard doctorate like all those other doctors, but unlike them, he’s here to tell you that you don’t need any kind of fancy degree or medicine or machine to treat your own “carcinoma” or, as real people say, butt cancer — all you need is your own know-how, some kleenex, and a bucket of water. Or, in software terms, it’s okay if you ship crap, so long as you have a good attitude. I mean, we can’t all be expected to do a good job, can we? He even came out and said (paraphrased, though I remember the message very clearly) something along the lines of “sometimes mediocrity is good.” So, forget all of these fancy best practices and be proud of yourself if that 80,000 line classic ASP file you hand-edit in production manages not to kill anyone.

The underlying message of the show wasn’t any substantive indictment of the Software Craftsmanship movement, but an endorsement of Software Quackery. I mean, sure, there are good practices and bad practices, but let’s not get all high and mighty just because some doctors don’t wash their hands before operating on you — I mean, you’d rather have all of the colors of the competence rainbow than some surgeon hand-washing monoculture, right? The reference to Dreyfus really brought it full-weird-circle for me, though, because championing Software Quackery is generally the province of Expert Beginners — not Experts. So, I’m sorry, but I just don’t buy it. We can and should have some kind of minimum standard that isn’t a goody bag for just showing up. Please, join me in a polite refusal to tolerate Software Quackery — it just doesn’t cut it anymore.


TDD Chess Game Part 2

Alright, welcome to the inaugural video post from my blog. Due to several requests over Twitter, comments, etc., I decided to use my Pluralsight recording setup to record myself actually coding for this series instead of just posting a lot of snippets of code. It was actually a good bit of fun and sort of surreal to do. I just coded for the series as I normally would, except that I turned the screen capture on while I was coding. A few days later, I watched the video and recorded narration for it as I watched it, which is why you’ll hear me sometimes struggling to recall exactly what I did.

The Pluralsight recordings are obviously a lot more polished — I tend to script those out to varying degrees and re-record until I have pretty good takes. This was a different animal; I just watched myself code and kind of narrated what I was doing, pauses, stupid mistakes, and all. My goal is that it will feel like we’re pair programming with me driving and explaining as I go: informal, conversational, etc.

Here’s what I accomplish in this particular video, not necessarily in order:

  • Eliminated magic numbers in Board class.
  • Got rid of x coordinate/y coordinate arguments in favor of an immutable type called BoardCoordinate.
  • Cleaned up a unit test with two asserts.
  • Got rid of the ‘cheating’ approach of returning a tuple of int, int.
  • Made the GetPossibleMoves method return an enumeration of moves instead of a single move.
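For reference, an immutable coordinate type along these lines might look something like the following. This is my own sketch of the idea, not necessarily the exact code from the video:

```csharp
// Immutable value type replacing loose xCoordinate/yCoordinate
// primitives -- the "primitive obsession" smell, where paired
// integers really represent one domain concept.
public struct BoardCoordinate
{
    public int X { get; private set; }
    public int Y { get; private set; }

    // Chaining to this() first is required in older C# versions:
    // a struct constructor must fully initialize the instance
    // before auto-implemented properties can be assigned.
    public BoardCoordinate(int x, int y)
        : this()
    {
        X = x;
        Y = y;
    }
}
```

Note that the `: this()` chaining wrinkle is closely related to the property-initialization surprise mentioned in the lessons below.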

And, here are some lessons to take away from this, both instructional from me and by watching me make mistakes:

  • Passing the same primitive/value types around your code everywhere (e.g. xCoordinate, yCoordinate) is a code smell called “primitive obsession” and it often indicates that you have something that should be a type in your domain. Don’t let this infect your code.
  • You can’t initialize properties in a non-default constructor (I looked up the whys and wherefores here after not remembering exactly why while recording audio and video).
  • Having lots of value type parameter and return values instead of domain concepts leads to constant small confusions that add up to lots of wasted time. Eliminate these as early in your design as possible to minimize this.
  • Sometimes you’ll create a failing test and then try something to make it pass that doesn’t work. This indicates that you’re not clear on what’s going on with the code, and it’s good that you’re following TDD so you catch your confusion as early as possible.
  • If you write some code to get a red test to pass, and it doesn’t work, and then you discover the problem was with your test rather than the production code, don’t leave the changes you made to the production code in there, even if the test is green. That code wasn’t necessary, and you should never have so much as a single line of code in your code base that you didn’t put in for reasons you can clearly explain. “Meh, it’s green, so whatever” is unacceptable. At every moment you should know exactly why your tests are red if they’re red, green if they’re green, or not compiling if the code doesn’t compile. If you’ve written code that you don’t understand, research it or delete it.
  • No matter how long you’ve been doing this, you’re still going to do dumb things. Accept it, and optimize your process to minimize the amount of wasted time your mistakes cause (TDD is an excellent way to do this).

So, here’s the video. Enjoy!

A couple of housekeeping notes. First, you should watch the video in full screen, at 1080p ideally (click the little “gear” icon between “CC” and “YouTube” at the bottom of the video screen). 720p will work but may be a touch blurry. At lower resolutions, you won’t see what’s going on. Second, if there’s interest, I can keep the source for this on GitHub as I work on it. The videos will lag behind the source, though (for instance, I’ve already done the coding for part 3 in the series — just not the audio or the post, yet). Drop me a comment or tweet at me or something if you’d like to see the code as it emerges also — adding it to GitHub isn’t exactly difficult, but I won’t bother unless there’s interest.


Meetings and Introverts: Strangers in Strange Lands

I’ll admit as I type the first sentence of this post that I don’t know whether this will conclude a two-part “mini-series” or whether I’ll feel compelled to write further posts. But I wanted to write the follow up that I hinted at to the post I wrote about introversion for programmers (well, specifically me). Tl;dr refresher of that post is that social situations are exhausting for me because of their inherent unpredictability as compared to something like the feedback loop of a program that I’m writing (or even the easily curated, asynchronous interaction of a social media vehicle like Twitter). The subjects I left for this post were “Erik as a problem solver and pattern matcher,” consensus meetings, and two exceptional social situations that I don’t find tiring. The challenge will be spinning these things into a coherent narrative, but I’ll take a crack at it.

Whenever I look in my Outlook calendar and see something like “Meeting to Discuss Issue Escalation Strategy,” I am struck with a surprisingly profound feeling that life is filled with senseless waste, the way one might look in dismay at his sunglasses floating down a river after accidentally dropping them in. I see an hour or two of my life drifting away with no immediately obvious reclamation strategy. My hypothesis is that this is the sort of standard introvert take on what I’ll call “consensus meetings” rather than what many programmers seem to think of as a programmer take on them. As Paul Graham points out in one of my all time favorite posts, “when you’re operating on [a programmer’s] schedule, meetings are a disaster.” But I’m not really a maker these days anymore; for the time being, I’m a manager. And I still find these meetings to be a disaster.

Extroverts draw energy from social situations and become invigorated, while introverts spend energy and become exhausted. And, when I’m talking about social situations, I mean drinks and bowling with a group of friends. Introverts like me enjoy these nights but find them tiring. Now, apply this same sort of thinking to adversarial situations that are veritable clinics in bike-shedding. A bunch of introverts and extroverts gather together and set about deciding what the organizational flow chart of issue escalation should look like. Should it start at Tier 1 support and be escalated to the manager of that group, then over to internal operations and on up to Bill in DevOps? Or should it go through Susan in Accounting because she used to work in DevOps and really has her finger on the pulse? Maybe we do that for two months and then go to Bill because that’s not really sustainable in the long term, but it’s good for now. And we should probably have everyone send out a quick email any time it affects the Initrode account. And… ugh, I can’t type anymore.

So here sit a bunch of extroverts and me. The extroverts love this. People in general love having opinions and telling people their opinions. (I’m not above this — you’ve been reading my rants here for years, so there’s clearly no high ground for me to claim.) But it’s the extroverts that draw energy from this exchange and work themselves into a lather to leave their marks on the eventual end-product via this back and forth. The more this conversation drags on, the more they want to interject with their opinions, wisdom, and knowledge. The more trivial and detailed the discussion becomes, the more they get their adrenaline up and enjoy the thrill of the hunt.

I, on the other hand, check out almost immediately. From an organizational realpolitik perspective, these meetings are typically full of sound and fury, signifying nothing. The initial meeting organizer turns on the firehose and then quickly loses control of it as the entire affair devolves into a cartoonish torrent of ideas being sprayed around the room as the hose snakes, thrashes, and contorts with no guiding hand. Nobody is really capturing all of this, so the extroverts leave the meeting flush with the satisfaction of shouting their opinions at each other, but most likely having neglected to agree on anything. But my inclination to check out goes deeper than the fact that nothing is particularly likely to be accomplished; it’s that neither the forum nor the ideas and opinions are interesting or important to me.


I earnestly apologize if this sounds arrogant or stand-offish, but it’s the honest truth. And this is where the part about me being an introverted problem solver and pattern-matcher comes in. The meeting I want to have is one where I come prepared with statistical analysis and data about the most efficient flows of information in triage scenarios. I want performance records and response times for Bill and Susan, including in her former role in the case of the latter. I want to have synthesized that data looking for patterns that coincide with issue types and resolution rates, and I want to make a recommendation on the basis of all of that. To me, those are table stakes to the meeting. Whoever has the best such analysis should clearly have his or her plan implemented.

But that’s not what happens in extrovert meetings. As soon as the meeting organizer loses control of the firehose, we’ve entered the realm of utter unpredictability. I start to present my case study and the patterns I’ve noticed, and then someone interrupts to ask if I captured that data using the new or old ticketing system. And, by the way, what PowerPoint template am I using, because it’s really snazzy? And, anyway, the thing about Susan is that she’s really not as much of a people person, but Doug kind of is. Now the extroverts are firmly in command. All prior analysis goes out the window, and, as people start jabbering over one another, reasoned analysis and facts are quite irrevocably replaced with opinions, speculation, gossip, and non sequitur in general. The conversation floats gently downstream and washes up on a distant shore when everyone decides that it’s time for lunch. All of the analysis… unconsidered and largely ignored.

And that, the extroverts taking over and leaving me to space out, is the best case scenario. In the worst case scenario, they start peppering me with a series of off-topic, gotcha questions to which I have to reply, “I don’t know off the top of my head, but I can look into it later.” This puts me at a huge disadvantage because extroverts, buoyed by the rush of the occasion, have no qualms about guessing, fudging, hand-waving, or otherwise manufacturing ‘analysis’ out of thin air. When things take this kind of turn, and someone else “wins the meeting,” it’s even more exhausting.

Regardless of which kind of meeting it is, though, the result is usually the same. After lunch, once everyone has had a chance to forget the particulars of the discussion, it’s time to email the real decision maker or chat one on one with that person, and re-present the analysis for consideration. Usually at that time, the analysis wins the day or at least heavily informs the decision. The meeting robbed me of an hour of my life to accomplish nothing, as I knew it would when I looked sadly at my Outlook calendar that morning.

There are two kinds of meeting that have no chance to fit this pattern, however (I’m omitting from consideration meetings that are actually policed reasonably by a moderator to keep things on-agenda, since these are far more rare than they should be). These are meetings where I’m passively listening to a single presenter, or actively presenting to the group. It’s not especially interesting that I’d find the former kind of meeting not to be exhausting since it’s somewhat akin to watching a movie, but the latter is, apparently, somewhat interesting. Presenting is not exhausting to me the way that a night out at a party is exhausting. There are sometimes pre-speech/talk jitters depending on the venue, but the talk is entirely predictable to me. I control exactly what’s going to be said and shown, and the speed at which I’ll progress. There is a mild element of unpredictability during the Q&A, but as the MC for the talk, you’re usually pretty well in control of that, too. So, that is the reason I find typical corporate meetings more exhausting than presenting in front of groups.

A strange thing, that. But I think in this light it’s somewhat understandable. Having reasoned analysis, cogent arguments, and a plan is the way to bring as much predictability (and, in my opinion, potential for being productive) to the table as possible. For me, it’s also the way most likely to keep the day’s meetings from sucking the life and productivity right out of you.


TDD and Modeling a Chess Game

For the first post in this series, I’m responding to an email I received asking for help writing an application that allows its users to play a game of chess, with specific emphasis on a feature in which players could see which moves were available for a given piece. The emailer cited a couple of old posts of mine in which I touched on abstractions and the use of TDD. He is at the point of having earned a degree in CS and looking for his first pure programming job, and it warms my heart to hear that he’s interested in “doing it right” in the sense of taking a craftsmanship-style approach (teaching himself TDD, reading about design patterns, etc.).

Modeling a game of chess is actually an awesome request to demonstrate some subtle but important points, so I’m thrilled about this question. I’m going to take the opportunity presented by this question to cover these points first, so please bear with me.

TDD does not mean no up-front planning!

I’ve heard this canard with some frequency, and it’s really not true at all (at least when done “right” as I envision it). A client or boss doesn’t approach you and say, “hey can you build us a website that manages our 12 different kinds of inventory,” and you then respond by firing up your IDE, writing a unit test and seeing where the chips fall from there. What you do instead is that you start to plan — white boards, diagrams, conversations, requirements definition, etc.

The most important outcome of this sort of planning to me is that you’ll start to decompose the system into logical boundaries, which also means decomposing it into smaller, more manageable problems. This might include defining various bounded contexts, application layers, plugins or modules, etc. If you iterate enough this way (e.g. modules to namespaces, namespaces to classes), you’ll eventually find yourself with a broad design in place and with enough specificity that you can start writing tests that describe meaningful behavior. It’s hard to be too specific here, but suffice it to say that using TDD or any other approach to coding only makes sense when you’ve done enough planning to have a good, sane starting point for a system.

Slap your hand if you’re thinking about 2-D arrays right now.

Okay, so let’s get started with this actual planning. We’ll probably need some kind of concept of chess pieces, but early on we can represent these with something simple, like a character or a chess notation string. So, perhaps we can represent the board by having an 8×8 grid and pieces on it with names like “B_qb” for “black queen’s bishop.” So, we can declare a string[8][8], initialize these strings as appropriate and start writing our tests, right?

Well, wrong, I’d say. You don’t want to spend your project thinking about array indices and string parsing — you want to think about chess, chess boards, and playing chess. Strings and arrays are unimportant implementation details that should be mercifully hidden from your view, encapsulated snugly in some kind of class. The unit tests you’re gearing up to write are a good litmus test for this. Done right, at the end of the project, you should be able to switch from an array to a list to a vector to some kind of insane tree structure, and as long as the implementation is still viable, all of your unit tests should be green the entire time. If changing from an array to a list renders some of your tests red, then there’s either something wrong with your production code or with your unit tests, but, in either case, there’s definitely something wrong with your design.

Make your dependencies flow one way

So, forget about strings and arrays and let’s define some concepts, like “chess board” and “chess piece.” Now these are going to start having behaviors we can sink our TDD teeth into. For instance, we can probably imagine that “chess board” will be instantiated with an integer that defaults to 8 and corresponds to the number of spaces in any given direction. It will also probably have methods that keep track of pieces, such as place(piece, location) or move(piece, location).

How about “chess piece?” Seems like a good case for inheritance since all of them need to understand how to move in two dimensional spaces, but they all move differently. Of course, that’s probably a little premature. But, what we can say is that the piece should probably have some kind of methods like isLegalMove(board, location). Of course, now board knows about piece and vice-versa, which is kind of icky. When two classes know about one another, you’ve got such a high degree of coupling that you might as well smash them into one class, and suddenly testing and modeling gets hard.

One way around this is to decide on a direction for dependencies to flow and let that guide your design. If class A knows about B, then you seek out a design where B knows nothing about A. So, in our case, should piece know about board or vice-versa? Well, I’d submit that board is essentially a generic collection of piece, so that’s a reasonable dependency direction. Of course, if we’re doing that, it means that the piece can’t know about a board (in the same way that you’d have a list as a means of housing integers without the integer class knowing about the list class).
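To make that dependency direction concrete, here is a minimal sketch. The series itself is in C#, but I’m sketching in Python here, and all of the names (`Piece`, `Board`, `place`, `piece_at`) are my own inventions rather than anything from the original code. The point is only the shape: the board contains pieces the way a list contains integers, and the piece class never mentions the board.

```python
class Piece:
    """A piece knows its own movement rules but nothing about any board."""
    def __init__(self, name):
        self.name = name


class Board:
    """A board is essentially a generic collection of pieces, so the
    dependency flows one way: Board knows Piece, never the reverse."""
    def __init__(self, size=8):
        self.size = size
        self._pieces = {}  # (x, y) -> Piece

    def place(self, piece, x, y):
        self._pieces[(x, y)] = piece

    def piece_at(self, x, y):
        return self._pieces.get((x, y))
```

Nothing in `Piece` can break if `Board` changes, which is exactly the property we want from a one-way dependency.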

Model the real world (but not necessarily for the reasons you’d think)

This puts us in a bit of an awkward position. If the piece doesn’t know about the board, how can we figure out whether it’s moving off the edge of the board or whether other pieces are in the way? There’s no way around that, right? And wasn’t the whole point of OOP to model the real world? And in the real world the pieces touch the board and vice versa, so doesn’t that mean they should know about each other?

Well, maybe we’re getting a little ahead of ourselves here. Let’s think about an actual game of chess between a couple of moving humans. Sure, there’s the board and the pieces, but it only ends there if you’re picturing a game of chess on your smart phone. In real life, you’re picking pieces up and moving them. If you move a piece forward three spaces and there’s a piece right in front of it, you’ll knock that piece over. Similarly, if you execute “move ahead 25 spaces,” you’ll drop the piece in your opponent’s lap. So really, it isn’t the board preventing you from making illegal moves, per se. You have some kind of move legality calculator in your head that takes into account what kind of piece you’re talking about and where it’s located on the board, and based on those inputs, you know if a move is legal.
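That “move legality calculator in your head” can itself be sketched as an object. This is a hedged illustration in Python (not code from the series, and the class and method names are mine): it takes what the piece says it can do and what the board says is occupied as inputs, so neither piece nor board has to know about the other.

```python
class MoveLegalityCalculator:
    """Decides legality from two inputs: the piece's candidate moves and
    the board state (bounds plus occupied squares)."""
    def __init__(self, board_size=8):
        self.board_size = board_size

    def is_legal(self, candidate_moves, occupied_squares, target):
        # "Move ahead 25 spaces" drops the piece in your opponent's lap:
        # out-of-bounds targets are never legal.
        on_board = all(0 <= coord < self.board_size for coord in target)
        return (on_board
                and target in candidate_moves
                and target not in occupied_squares)
```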

So, my advice here is a little more tempered than a lot of grizzled OOP veterans might offer. Model the real world not because it’s the OOP thing to do or any dogmatic consideration like that. Model the real world because it often jolts you out of a code-centric/procedural way of thinking and because it can make your conceptual model of the problem easier to reason about. An example of the latter is the idea that we’re talking about pieces and boards instead of the 2-D arrays and strings I mentioned a few sections back.

Down to Business

I’m doing this as an exercise where I’m just freewheeling and designing as if I were implementing this problem right now (and just taking a little extra time to note my reasoning). So you’ll have to trust that I’m not sneaking in hours of whiteboarding. I mention this only because the above thinking (deciding on chess piece, board, and some concept of a legality evaluator class) is going to be the sum total of my up-front thinking before starting to let the actual process of TDD guide me a bit. It’s critical to recognize that this isn’t zero planning and that I actually might be spending a lot less up-front reasoning time than an OOP/TDD novice simply because I’m pretty well experienced at charting out workable designs from the get-go and keeping them flexible and clean as I go. So, without further ado, let’s start coding. (In the future, I might start a YouTube channel and use my Pluralsight setup to record such coding sessions if any of you are interested.)

First of all, I’m going to create a board class and a piece class. The piece class is going to be a placeholder and the board is just going to hold pieces, for the most part, and toss exceptions for invalid accesses. I think the first thing that I need to be able to do is place a piece on the board. So, I’m going to write this test:

First of all, I’ll mention that I’m following the naming/structuring convention that I picked up from Phil Haack some time back, and that’s why I have the nested classes. Besides that, this is pretty vanilla fare. The pawn class is literally a do-nothing placeholder at this point, and now I just have to make this test not throw an exception, which is pretty easy. I just define the method:
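A sketch of that “just define the method” step (Python, with guessed names): the simplest change that stops the test from throwing is a method body that does nothing at all.

```python
class Board:
    def add_piece(self, piece, x, y):
        # Simplest thing that makes the test stop throwing: do nothing.
        pass
```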

Woohoo! Passing test. But that’s a pretty lame test, so let’s do something else. What good is testing a state mutating method if you don’t have a state querying method (meaning, if I’m going to write to something, it’s pointless unless someone or something can also read from it)? Let’s define another test method:

Alright, now we probably have to make a non (less) stupid implementation. Let’s make this new test pass.
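A minimal sketch of a less stupid implementation (my reconstruction, assuming storage keyed by coordinates; the original C# details are lost):

```python
class Board:
    """Stores pieces in a dictionary keyed by (x, y) coordinates."""
    def __init__(self, size=8):
        self._size = size
        self._pieces = {}

    def add_piece(self, piece, x, y):
        self._pieces[(x, y)] = piece

    def get_piece(self, x, y):
        return self._pieces[(x, y)]
```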

Well, there we go. Now that everything is green, let’s refactor the test class a bit:
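In Python’s unittest, the equivalent refactoring pulls the duplicated instantiation logic into `setUp` (a hedged sketch; the original used C# test classes in Phil Haack’s nested-class style, for which there’s no direct Python analog):

```python
import unittest


class Pawn:
    pass


class Board:
    def __init__(self, size=8):
        self._pieces = {}

    def add_piece(self, piece, x, y):
        self._pieces[(x, y)] = piece

    def get_piece(self, x, y):
        return self._pieces[(x, y)]


class BoardTests(unittest.TestCase):
    def setUp(self):
        # The duplicated instantiation logic now lives in one place.
        self.board = Board()
        self.pawn = Pawn()

    def test_add_piece_does_not_throw(self):
        self.board.add_piece(self.pawn, 1, 1)

    def test_get_piece_returns_piece_added_at_location(self):
        self.board.add_piece(self.pawn, 1, 1)
        self.assertIs(self.pawn, self.board.get_piece(1, 1))
```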

This may seem a little over the top, but I believe in ruthlessly eliminating duplication wherever it occurs, and we were duplicating that instantiation logic in the test class. Eliminating noise also makes your intentions in the test methods clearer, which communicates more effectively to people what it is you want your code to do.

With that out of the way, let’s define something useful on Pawn as a starting point. Currently, it’s simply a class declaration followed by “{}” Here’s the test that I’m going to write:

Alright, now let’s make this thing pass:
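The passing implementation is the quick one-liner the next paragraph alludes to (my Python sketch of it): a pawn’s most basic move is one square ahead of wherever it sits.

```python
class Pawn:
    def get_moves_from(self, x, y):
        # The pawn's most basic (and common) movement: one square ahead.
        return [(x, y + 1)]
```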

You might be objecting slightly to what I’ve done here in jumping the gun from the simplest thing that could work. Well, I’ll remind you that simplest does not mean stupidest. There’s no need to return a constant here, since the movement of a pawn (move ahead one square) isn’t exactly rocket science. Do the simplest thing to make the tests pass is the discipline, and in either case, we’re adding a quick one liner.

Now, there are all kinds of problems with what we’ve got so far. We’re going to need to differentiate white pieces from black pieces later. GetMoves (plural) returns a single tuple, and, while Tuple is a cool language construct, that clearly needs to go in favor of some kind of value object, and quickly. The board takes an array of pawns instead of a more general piece concept. The most recent unit test has two asserts in it, which isn’t very good. But in spite of all of those shortcomings (which are going to be my to-do list for next post), we now have a board to which you can add pieces and a piece that understands its most basic (and common) movement.

The take-away that I want you to have from this post is two-fold:

  1. Do some up-front planning, particularly as it relates to real-world conceptualizing and dependency flow.
  2. It’s a journey, and you’re not going to get there all at once. Your first few red-green-refactors will leave you with something incomplete to the point of embarrassment. But that’s okay, because TDD is about incremental progress and the simultaneous impossibility of regressions. We can only improve.


Get Fed Up Every Now and Then

In SQL Server Management Studio, all of the objects in the database are listed in the object explorer in the format [schema].[object]. In large databases of the legacy variety, it’s not uncommon to find that the only schema for application tables/views/sprocs is dbo. In this case, navigating to the object you want requires the infuriating step of typing “dbo.” before the table name. This may not seem like a big deal, but you have to type fast for that style of navigation, and typing that period, far from home row, creates a problem. This has annoyed me for years, but today, I’d had enough.

With a few minutes of googling, I found this Q&A exchange. Clearly others feel the same way, and while I don’t consider that to be ideal, it’s certainly an improvement over what I was doing. I can hit “F7” on a folder and bring up an “object explorer” window that lets me search by object name alone because it separates schema as a different column. Win. Sort of.

I have been using SQL Server Management Studio for nearly a decade and I never knew this. But, after a few minutes with google, now I know. Sometimes it’s google, sometimes it’s a tweet, sometimes it’s asking a coworker. But, the common thread is that one day I just say “enough” and decide to do something about it. It might not be a total solution, but I decide right then and there that the situation and my life need to be improved somehow, immediately.

I think this is a pretty valuable activity. Obviously, you pick up new tips and tricks this way, but more subtly, you’re embracing a philosophy that your routines and habits can always be improved and you’re mentally setting an expiration date on mindless, sub-optimal, or obtuse activities. In a way, this is fairly agile (using lower case a in a nod to the recent blogosphere brouhaha over “taking back agile” — I just mean it’s predicated upon the idea of iterative, steady progress). You’re not paralyzing yourself by analysis to figure out the best way to do things up front all of the time, but rather leaving yourself open to the possibility (certainty) that you can improve the way you do things.


So pick something. Today. Pick something that’s a minor irritant that you’ve simply put up with for a long time and say to yourself, “This. Ends. Now.” in a dramatic, action hero voice. Spend a few minutes doing research and I’d imagine you’re going to be happy. I’ve personally never regretted anything about this time investment except for the regret that I didn’t allow myself to get fed up sooner. So, go ahead. You don’t have to take it anymore.


The Least Pleasant List

Negative feedback is something with which we all have to contend, and probably on a fairly regular basis. It comes in a whole lot of forms with varying degrees of merit or importance: official performance reviews, talks with coworkers, miscellaneous gossip, heated discussion with loved ones, and even things like comments on a blog post or publication (I love The Oatmeal’s treatment of this last one — see the section about comments). You may or may not be expecting it, but nevertheless, it tends to hit you like a slap in the face.

I usually respond to negative feedback by trying to understand what flaw of the feedback provider is causing them to be wrong about me. Perhaps they’re simply some kind of crank or idiot, or maybe it’s more elaborate than that. They might have serious psychological problems or else a diabolical motivation for hatching a conspiracy against me. Maybe they’re jealous. Yeah, totally.

But after a while, I stop thinking completely like a child and allow just the teeniest, tiniest bit of introspection. I mean, obviously, the person is still a jealous moron, but it is possible that maybe showing up late to work and snapping at everyone I talked to all morning could have been just the slightest bit off-putting to someone. I’ll generously allow for that possibility.

I am, of course, exaggerating for effect, but the point remains — I immediately respond to negative feedback by feeling defensive or even hostile. It’s easy to do and it’s easy to take feedback badly. And to make matters worse, there is definitely feedback that deserves to be taken badly such as someone simply being rude for no reason. Not all negative feedback is even reasonable. The end result is that it becomes very hard to make feedback lemonade out of the negative lemons your critics are lobbing at you.

But I urge you to try. Here’s an exercise I’m contemplating. Whenever I get negative feedback, legitimate or spurious, I’ll make a note about it. I’ll jot it down in some kind of notebook or perhaps make a spreadsheet or Trello board out of it, and I’ll let it digest for a while. Once somewhat removed from the initial feedback, I can probably filter more objectively for validity. If I can make some actionable improvement, I’ll do so. Every now and then I’ll check back to see if the things I used to get the feedback about have changed and if I seem to be making strides. Maybe such a scheme will be worthwhile for me and perhaps it might be for you too, if you’re so inclined.

It’s not easy to take a frank look at yourself and admit that you have shortcomings. But doing so is the best way to improve on and eliminate those shortcomings. It’s hard to recognize that negative feedback may have a grain of truth or even be dead on. I know because it’s hard for me. But it’s important to do so if you’re serious about achieving your goals in life.


What TDD Is and Is Not

In my travels, I’ve heard a lot of people embrace test driven development (TDD) and I’ve heard a lot of people say that it isn’t for them. For the most part, it’s been my experience that people who say it isn’t for them have either never really, actually done it or haven’t done it enough to be good at it. Lest you think I’m just being judgmental, here’s a post of mine from about 3 years ago, before I had truly taken the humbling plunge of forcing myself to follow the red-green-refactor discipline to the letter. My language here is typical of someone who buys into unit testing and even to TDD, but who hasn’t become facile enough with the process to avoid feeling the need to tinker with it and qualify the work. If you cut to the heart of what’s going on in my post there (and I say this hat in hand) it’s less “here’s my own special way of doing TDD that I think is actually superior” and more “I haven’t yet gotten good enough at TDD to use it everywhere and that’s rather frustrating.”


Not long after I wrote that post, I forced myself, on a project, to follow the discipline, to the letter, and I’ve never looked back after the initial pain of floundering during what I used to describe as the “prototyping stage” of my coding. I solved the ‘problem’ of TDD not being compatible with that prototyping by thinking my designs through more carefully up-front, solving only problems that actually exist, and essentially not needing it anymore. TDD wasn’t the problem for me when I wrote that post; the problem was that I wasn’t being efficient in my approach.

It’s been my experience that most of the criticism of TDD comes from people like me, years ago. These are people who either have never actually tried TDD or who have tried it for an amount of time too short to become proficient with it. I can rarely recall someone who became proficient with the practice one day saying, “you know what, enough of this — I don’t see the benefit.” For me this statement is hard to imagine because TDD shortens the feedback loop between implementing something and knowing if it works, and developers innately crave tight feedback loops.

And since the majority of criticism seems to come from those least familiar with the discipline, there are a lot of misconceptions and straw man arguments, as one might expect. So today, I’d like to offer some clarity by way of defining what TDD is and what it isn’t.

What TDD Isn’t

I think it’s important that I first discuss what TDD is not. There are a lot of misconceptions that surround the test driven development approach to writing software, and there’s a pretty good chance that you’ve at least been exposed to them, if not slightly misled by them. Not only does this make it hard to understand how the practice works, but it may even lead you or your team to reject it for reasons that aren’t actually valid.

  1. First, TDD is not a replacement for user acceptance testing. Someone who practices it does not believe that it’s a valid substitute for running the application and making sure that it does what the requirements state that it needs to do.
  2. Classic TDD is also not comprehensive automated testing. The by-product of it is mainly unit tests, which are tests for finely grained pieces of code, such as classes and methods. Other kinds of automated tests, such as integration tests and end to end system tests involving databases or other external constructs will be a separate prong of your overall testing strategy.
  3. One thing that I frequently hear as an indictment of TDD is that it doesn’t encourage you to think through your design because you initially do the simplest thing to make tests pass. This is not accurate at all. It simply distributes the planning over the course of your development as you go rather than forcing it all to be done up front.
  4. Test driven development is not and does not claim to be any sort of load testing, concurrency testing, or anything else that you might put under the category of “smoke” or “stress” testing. The tests generated by TDD are not meant to test the behavior of your system under adverse conditions.
  5. Another common misconception is that TDD means that you write all tests for the system before you start writing your code. Frankly, doing so would be wildly impractical, and detractors who cite this impracticality as a reason not to do TDD are arguing against a straw man.
  6. TDD is not a quality assurance strategy. When your software department is contemplating new initiatives and dividing them up according to who will own them, TDD does not belong to the testing group.
  7. Detractors of TDD often point out that it doesn’t address corner cases in application or even class and method logic. That is true, but TDD doesn’t aim to address these. They belong with the other automated tests that will be written later.
  8. And, believe it or not, TDD is not primarily a testing activity. This is probably the hardest for people to wrap their heads around when learning the practice. But if you think about the acronym – test driven development, it is primarily development. The tests driving it are simply a characteristic of the development.

What TDD Is

Having gone over a series of things that TDD is not, hopefully I’ve cleared up some misconceptions and narrowed the field a bit. So let’s take a look at what TDD actually is. I should note here that the flavor of TDD that I’m addressing is “Classic,” triangulation-oriented TDD rather than the behavior driven “London” school of TDD.

  1. As mentioned in the last section, TDD is a development approach. It’s a development approach that happens to produce unit tests as you go, which you can then save for later. I suppose you could discard them and retain some of the benefit of the approach, but that would certainly be a waste since you’re going to want automated tests for your system anyway.
  2. Another facet of the test driven development approach is that you avoid “paralysis by analysis,” a situation in which you are so overwhelmed by the complexity of the problem that you simply stare at the screen or otherwise procrastinate, unsure how to proceed. TDD ensures that you’re constantly solving manageable problems.
  3. TDD produces test cases that cover and address every line of code that you write. With this 100% test coverage, you can change your code fearlessly in response to altered requirements, architectural needs, or other unforeseen circumstances. You’ll never have to look at the system nervously, wondering if you’re breaking things. Your tests will tell you.
  4. Besides allowing you to change code easily, test driven development also guides you toward a flexible design. The reason for this is that TDD forces you to assemble your code with testing seams in it, which are entry points, such as constructor and method arguments, that allow you to take advantage of polymorphism for easier testing. A side effect of this is that your code that’s testable is also easier to configure and mix and match in production.
  5. TDD is also a way to ensure that you’re writing as little code as necessary. Since every change to production code requires a failing test, you have to think through exactly what the system needs before you ever touch it. This prevents speculative coding and throwing in things that you assume you’ll need later, such as property setters you never actually use.

So how is all of this accomplished? Well, TDD is a discipline that follows a specific and relatively simple process.

  1. As I mentioned briefly, the first step is that you expose some kind of deficiency in the system that you’d like to address. This could be a bug, a missing feature, a new requirement – anything that your codebase does not currently do that you want it to. With the deficiency picked out, you write a test that fails because of the deficiency.
  2. With the failing test in place, you then implement the simplest possible solution in your code to get the failing test to pass, ensuring that all of your other tests also still pass. This makes the system completely functional.
  3. Once the system is completely functional, you look at your quick and dirty fix and see if it needs to be refactored toward better design. If so, you perform the refactoring, ensuring that the tests still pass.
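The three steps above can be illustrated with a toy cycle. This example is mine, not from the course transcript; it’s just a minimal sketch of red-green-refactor in Python.

```python
import unittest


# Step 1 (red): the system lacks a way to total an order, so we write a
# test that fails because of that deficiency.
class OrderTests(unittest.TestCase):
    def test_total_sums_line_item_prices(self):
        order = Order([3.50, 2.25])
        self.assertEqual(5.75, order.total())


# Step 2 (green): the simplest possible solution that makes the failing
# test pass while keeping the rest of the suite passing.
class Order:
    def __init__(self, prices):
        self._prices = prices

    def total(self):
        return sum(self._prices)

# Step 3 (refactor): with everything green, look at the quick and dirty
# fix and clean up names or structure, re-running the tests after each
# change. (Nothing needs cleaning in a sketch this small.)
```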

So there you have it: a brief overview of what TDD really is. If you’re interested in more on this subject and you have a Pluralsight subscription, check out my course on continuous testing and TDD using a tool called NCrunch, which is all about speeding up your feedback loop during development. Most of this post is from the transcript of that course. If you don’t have a Pluralsight subscription and are interested in a trial, drop me a line and I’ll give you a free week of Pluralsight subscription.


The Grating Fallacy of “Idea Guys”

If you’ve been following this blog for a while, you might have seen me engage in talk about the MacLeod Hierarchy from time to time in the comments with various posters. We’re referring to a concept where the denizens of a corporate organization fall into one of three categories: Losers, Clueless, and Sociopaths. It’s explained in delightful detail in one of my favorite blog series of all time, but I’ll give you the tl;dr version here of what defines these groups, so that I can use the terms more easily in this post. I’m quoting this synopsis from Michael O. Church’s follow-up analysis of the subject, which I also highly recommend. The names are inherently pejorative, which was probably a stylistic choice when picking them; none of these is inherently bad to be, from a moral or human standpoint (though one might argue that Clueless is the most embarrassing).

    Losers, who recognize that low-level employment is a losing deal, and therefore commit the minimum effort not to get fired.
    Clueless, who work as hard as they can but fail to understand the organization’s true nature and needs, and are destined for middle management.
    Sociopaths, who capture the surplus value generated by the Losers and Clueless. Destined for upper management.

At this point, I’ll forgive you if you’re just now returning to my blog after a lot of reading. If you’re a fan of corporate realpolitik, those series are extremely engrossing. I bring up these terms here to use as labels when it comes to people who are self-styled “idea guys” (or gals, but I’m just going to use the male version for brevity for the rest of the post). I’m going to posit that there are generally two flavors of idea guy: the Loser version and the Clueless version. There is no Sociopath “idea guy” at all, but I’ll talk about that last.

What do I mean, exactly, when I say “idea guys?” Well, I’m not being as literal as saying “people who have ideas,” which would, frankly, be pretty obtuse. Everyone has ideas all day, every day. I’m referring to people who think that their essential value is wrapped up in their ideas, and specifically in ideas that they imagine to be unique. They see themselves as original, innovative and, ironically, people who “think outside the box” (ironic in the sense that someone would describe their originality using what has to be one of the top 10 most overused cliches of all time in the corporate world). Indeed, given their talent for creativity, they see an ideal division of labor in which they kick their feet up and drop pearls of their brilliance for lesser beings to gather and turn into mundane things such as execution, operations, construction, selling, and other ‘boring details’.

I Coulda Made Facebook

The Loser “idea guy” is the “coulda been a contender” archetype. If he’s a techie, he probably talks about what a bad programmer Mark Zuckerberg is and how he got lucky by being in the right place at the right time. The Loser could have just as easily written Facebook, and he has all kinds of other ideas like that too, if he could just catch a break. He might alternate between these types of lamentations and hitting up people for partnerships on ventures where he trades the killer idea for the execution part: “hey, I’ve got this idea for a phone app, and I just need someone to do the programming and all of those details.” MacLeod Losers generally strike a deal where they exchange autonomy for bare minimum effort, so execution isn’t their forte — they’d like to hit the “I’ve got an idea” lottery where they dream something up and everyone else takes care of the details of making sure their bank accounts grow by several zeroes.

We Need to Have a Facebook!

The Clueless “idea guy” is, frankly, a lot more comical (unless you’re reporting to him, in which case he’s borderline insufferable). In contrast to his Loser brethren with illusions of Facebook clones or cures for cancer, this fellow has rather small ideas that take the form of middling corporate strategy. He is a master of bikeshedding over what color the new logo for the Alpha project should be or whether the new promo materials should go out to 25 pilot customers or 30. Actually changing the color of the logo or mailing out the materials? Pff. That’s below his pay grade, newbie — he’s an idea guy who shows up 10 minutes late to meetings, makes the ‘important’ decisions, orders everyone around for a while, and promptly forgets about the whole thing.

Whereas the Loser “idea guy” views his schemes as a way to escape gentle wage slavery, the Clueless version is delighting in playing a part afforded to him by his status with the company, which is basically itself a function of waiting one’s turn. Future Clueless comes to the company, puts up with mid-level managers ordering him to do menial tasks and showing up late to his meetings, and, like someone participating in any hazing ritual, relishes the day he’ll get to do it to someone else. “Idea guy” is perfect because it involves being “too important” to follow through on anything personally, or even to extend common courtesy and respect. It’s a vehicle for enjoying a position that allows him to order people around and to remind everyone that when people in overhead positions say “jump,” the rest have to ask “how high?”

While it’s certainly possible to encounter a Clueless with ambitious ideas (perhaps a pet project of redoing the company website to look like Apple’s or something), they’re pretty uncommon, which makes sense when you think about it. After all, part of the fundamental condition of the Clueless is sustaining enough cognitive dissonance to believe that his position was obtained through merit rather than by running out the clock, and pushing a big idea to the players above him invites rejection and possibly even ridicule, so why bother? Micromanaging and demanding the removal of ducks is a much safer way to pleasantly reinforce his position of power.

I’ll Create a Bubble and Sell This Company to Facebook

So what about the Sociopath “idea guy?” As I said, I don’t think this archetype exists. It’s not that Sociopaths don’t have ideas. I might argue that they’re the only ones whose ideas carry any significance in the corporation (Losers may have excellent ideas, but those go nowhere without a Sociopath to sponsor and spin them). But they certainly aren’t “idea guys,” because Sociopaths know what matters. To re-appropriate a famous line Vince Lombardi would deliver to his Packers teams, Sociopaths know that “execution isn’t everything; it’s the only thing.”

A Sociopath doesn’t really assign any pride to ideas, opting instead to evaluate them on merit and implement them when doing so is advantageous. If his ends are better served by letting you take the credit for his ideas, the Sociopath will do that. Alternatively, if his ends are better served by ripping off someone else’s idea, he’ll do that too. No matter what its source, an idea is simply a means to an end. It’s not his baby, and it’s not to be jealously guarded, since ideas are a dime a dozen, and there will be others.

Ideas, like paper or computers or projectors, are simply tools that aid in execution or even, perhaps, inspiration for improved execution. A Sociopath is a doer in every sense — a consummate pragmatist.

Winning the Race of Achievement

If idea guys come from the ranks of the Losers and the Clueless, but it’s only the Sociopaths who capture and capitalize on the excess value created, doesn’t this mean, cynically, that the people who have great ideas are never the ones who profit? Well, no, I’d argue. Sociopaths have plenty of ideas — they just aren’t “idea guys.” Ideas are commonplace across all organizational strata and across all of life. Sociopaths are simply too pragmatic and too busy to create some kind of trumped-up persona centered on the remarkably ordinary phenomenon of having ideas; they’re not interested in dressing up as Edison or Einstein and turning their lives into some kind of extended Halloween.

Pulling back from the MacLeod categorization, it’s sufficient to say that what turns the motor of the world is commitment, work, dedication, execution, and, frankly, a bit of luck (though I’d argue for the adage that luck is largely a function of time and preparation). Sure, ideas are a part of that, but so are oxygen and conversations. They’re such ubiquitous parts of all that we do that they’re not worth focusing on or even mentioning.

In Macbeth, the title character delivers a fatalistic soliloquy that describes life as “a tale, told by an idiot, full of sound and fury, signifying nothing.” This sentiment captures the impact a self-styled “idea guy,” particularly of the Clueless variety, has on a group or meeting. He charges in, makes a lot of bold claims, proclamations, and predictions, and then toddles on out, altering the landscape not at all.

If you want to make your impact felt, I’d say there’s another quote that’s more appropriate, popularized in American culture by Teddy Roosevelt: “speak softly and carry a big stick.” Don’t declare yourself victorious at the starting line because you dreamed up some kind of idea. Assume that ideas, even great ones, are nothing but your entry fee to the race, expected of every runner. Let them be your grit and inspiration as you labor your way through the course. But realize that the actual, boring mechanics of running — the execution — are what win you the race, rather than whatever inspirational-poster saying occurs to you as you plod along.

There’s No Excuse

If you have a long, error-prone process, automate it. This is the essential underpinning of what we all do for a living, and it informs our day-to-day routines. Whether we’re making innovative new consumer applications, games, line-of-business applications, or anything else, we look at other people’s lives and say, “we can make that automatic.” “Don’t use pen and paper for your to-do list when there’s GTasks.” “Don’t play Risk with a board and dice when you can do it online (or play an Elder Scrolls game instead).” “Don’t key all that nonsense in by hand.”

And yet, far too often, we fail to automate our own lives and work. I wonder if there’s some kind of cognitive blind spot at work. When other people mindlessly perform laborious tasks, we jump in to point out how dumb what they’re doing is and how we can sell them something that will rock their world. But when we mindlessly perform laborious tasks (data entry, copy-and-paste programming, etc.), we don’t think twice, or, if we do, we assure ourselves that it’s too complicated or not worth automating or some such.

I was going through some old documentation the other day to look up what was needed to add another instance of a feature to a legacy application. What it boiled down to was something along the lines of “we have an application that sells computers, and we want to add a new kind of PCI card to the configurable options.” Remarkably, this involved hand-adding rows to all sorts of different metadata tables in a database, along with updating some stored procedures. The application, generally speaking, was smart enough to do the rest without code changes, even including GUI updates, so that was a win. But hand-adding things to tables was… annoying.

If only there were something that would let people add things to the database without opening the query explorer tool and doing it by hand… something where you could write instructions in some kind of “language,” if you will, and translate those instructions into something visual and easy to understand, so that people could add metadata more simply. Hmmm.

Wait, I’ve got it! How about this: instead of a document explaining how to add a bunch of records to the database, you write an administrative function into your GUI that automates it? What a victory! You get the metadata added and your new PCI option, and you also remove the possibility that you’ll mangle or forget one of the entries and leave the system in a bad state. This really is the only sane option; it’s not an either-or kind of situation. Hand-adding things to your database is a facepalm.
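To make that concrete, here’s a minimal sketch of what such an administrative function might look like, in Python with sqlite3 standing in for the real database. The schema, table names, and values are all invented for illustration; the point is that one transactional function replaces a multi-page runbook and either succeeds completely or leaves the database untouched:

```python
import sqlite3

def add_card_option(conn, name, price, category="PCI"):
    """Add a new configurable hardware option in one transaction,
    touching every metadata table the manual document listed.
    (Hypothetical schema, for illustration only.)"""
    with conn:  # commits on success, rolls back on any error
        cur = conn.execute(
            "INSERT INTO component_types (name, category) VALUES (?, ?)",
            (name, category))
        component_id = cur.lastrowid
        conn.execute(
            "INSERT INTO price_list (component_id, price) VALUES (?, ?)",
            (component_id, price))
        conn.execute(
            "INSERT INTO display_options (component_id, visible) VALUES (?, 1)",
            (component_id,))
    return component_id

# Demo with an in-memory database standing in for the legacy one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE component_types (id INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE price_list (component_id INTEGER, price REAL);
    CREATE TABLE display_options (component_id INTEGER, visible INTEGER);
""")
new_id = add_card_option(conn, "SuperFast PCI Card", 49.99)
```

Nothing about this is clever, and that’s the point: because all three inserts live inside one transaction, a typo in any of them rolls the whole thing back instead of leaving the system in that half-configured state the runbook warns you about.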

What’s the moral of this story? To me, it’s this: if you have a giant document detailing manual steps for programmers to follow to get something done, what you really have is a spec/user story for your next development cycle. Automate all the things, and then burn those documents at a cathartic, gleeful campfire. You can turn your onerous processes into roasted marshmallows.
