DaedTech

Stories about Software

10x Developer, Reconsidered

Unless my memory and a quick search of my blog are both mistaken, I’ve never tackled the controversial concept of a “10x developer.” For those who aren’t aware, this is the idea that some programmers are order(s) of magnitude more productive than others, with the exact figure varying. I believe that studies of varying statistical rigor have been suggestive of this possibility for decades and that Steve McConnell wrote about it in the iconic book, “Code Complete.” But wherever, exactly, it came from, the sound bite that dominates our industry is the “10x Developer” — the outlier developer archetype so productive as to be called a “rock star.” This seems to drive an endless debate cycle in the community as to whether or not it’s a myth that some of us are 10 or more times as awesome as others. You can read some salient points and a whole ton of interesting links from this jumping-off Stack Exchange question.

I’ll Take a Crack at Defining Tech Debt

Not too long ago, on one of my Chess TDD posts, someone commented asking me to define technical debt because I had tossed the term out while narrating a video in that series. I’ll speak to it a bit because it’s an extremely elegant metaphor, particularly in the context of clarifying things to business stakeholders. In 1992, Ward Cunningham coined the term in a report that he wrote:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

Everyone with an adult comprehension of finance can understand the concept of fiscal debt and interest. Whether it was a bad experience with one’s first credit card or a car or home loan, people understand that someone loaning you a big chunk of money you don’t have comes at a cost… and not just the cost of paying back the loan. The greater the sum and the longer you take to pay it back, the more it costs you. If you let it get out of hand, you can fall into a situation where you’ll never be capable of paying it back and are just struggling to keep up with the interest.

Likewise, just about every developer can understand the idea of a piece of software getting out of hand. The experience of hacking things together, prototyping and creating skunk-works implementations is generally a fun and heady one, but the idea of living for months or years with the resulting code is rather horrifying. Just about every developer I know has some variant of a story where they put something “quick and dirty” together to prove a concept and were later ordered, to their horror, to ship it by some project or line manager somewhere.

The two parallel concepts make for a rather ingenious metaphor, particularly when you consider that the kinds of people for whom we employ metaphors are typically financially savvy (“business people”). They don’t understand “we’re delivering features more slowly because of the massive amount of nasty global state in the system” but they do understand “we made some quality sacrifices up front on the project in order to go faster, and now we’re paying the ‘interest’ on those, so we either need to suck it up and fix it or keep going slowly.”

Since the original coining of this term, there’s been a good bit of variance in what people mean by it. Some people mean the generalized concept of “things that aren’t good in the code” while others look at specifics like refactoring-oriented requirements/stories that are about improving the health of the code base. There’s even a small group of people that take the metaphor really, really far and start trying to introduce complex financial concepts like credit swaps and securities portfolios to the conversation (it always seems to me that these folks have an enviable combination of lively imaginations and surplus spare time).

I can’t offer you any kind of official definition, but I can offer my own take. I think of technical debt as any existing liability in the code base (or in tooling around the code base such as build tools, CI setup, etc.) that hampers development. It could be code duplication that increases the volume of code to maintain and the likelihood of bugs. It could be a nasty, tangled method that people are hesitant to touch and that thus results in oddball workarounds. It could be a massive God class that creates endless merge conflicts. Whatever it is, you can recognize it by virtue of the fact that its continued existence hurts the development effort.

I don’t like to get too carried away trying to make technical debt line up perfectly with financial debt. Financial debt is (usually) centered around some form of collateral and/or credit, and there’s really no comparable construct in a code base. You incur financial debt to buy a house right now and offer reciprocity in the form of interest payments and ownership of the house as collateral for default. The tradeoff is incredibly transparent and measured, with fixed parameters and a set end time. You know up front that to get a house for $200,000, you’ll be paying $1,000 per month for 30 years and wind up ‘wasting’ $160,000 in interest. (Numbers made up out of thin air and probably not remotely realistic.)

GratuitousAddOn

Tech debt just doesn’t work this way. There’s nothing firm about it; never do you say “if I introduce this global variable instead of reconsidering my object graph, it will cost me 6 hours of development time per week until May of 2017, when I’ll spend 22 hours rethinking the object graph.” Instead, you get lazy, introduce the global variable, and then, six months later, find yourself thinking, “man, this global state is killing me! Instead of spending all of my time adding new features, I’m spending 20 hours per week hunting down subtle defects.” While fiscal debt is tracked with tidy, future-oriented considerations like payoff quotes and asset values, technical debt is all about making visible the amount of time you’re losing to prior bad decisions or duct-tape fixes (and sometimes to drive home the point that making the wrong decision right now will result in even more slowness sometime in the future).

So in my world, what is tech debt? Quite simply, it’s the amount of wasted time that liabilities in your code base are causing you or the amount of wasted time that you anticipate they will cause you. And, by the way, it’s entirely unavoidable. Every line of code that you write is a potential liability because every line of code that you write is something that will, potentially, need to be read, understood, and modified. The only way to eliminate tech debt entirely is to solve people’s problems without writing code, and since you’re probably in the wrong line of work if you’re not going to write code, the best you can do is make an ongoing, concerted, and tireless effort to minimize it.

(And please don’t take away from this, “Erik thinks writing code is bad.” Each line of code is an incremental liability, sure, but as a software developer, writing no code is a much larger, more immediate liability. It’s about getting the job done as efficiently and with as little liability as you can.)

You Want an Estimate? Give Me Odds.

I was asked recently in a comment what I thought about the “No Estimates Movement.” In the interests of full disclosure, what I thought was, “I kinda remember that hashtag.” Well, almost. Coincidentally, when the comment was written to my blog, I had just recently seen a friend reading this post and had read part of it myself. That was actually the point at which I thought, “I remember that hashtag.”

That sounds kind of flippant, but that’s really all that I remember about it. I think it was something like early 2014 or late 2013 when I saw that term bandied about in my feed, and I went and read a bit about it, but it didn’t really stick with me. I’ve now gone back and read a bit about it, and I think that’s because it’s not really a marketing teaser of a term, but quite literally what it advertises. “Hey, let’s not make estimates.” My thought at the time was probably just, “yeah, that sounds good, and I already try to minimize the amount of BS I spew forth, so this isn’t really a big deal for me.” Reading back some time later, I’m looking for deeper meaning and not really finding it.

Oh, there are certainly some different and interesting opinions floating around, but it really seems to be more bike-sheddy squabbling than anything else. It’s arguments like, “I’m not saying you shouldn’t size cards in your backlog — just that you shouldn’t estimate your sprint velocity” or “You don’t need to estimate if all of your story cards are broken into chunks small enough to be sized as ones.” Those seem sufficiently tactical that their wisdom probably doesn’t extend too far beyond a single team before flirting with unknowable speculation that’d be better verified with experiments than taken on faith.

The broader question, “should I provide speculative estimates about software completion timelines,” seems like one that should be answered most honestly with “not unless you’re giving me pretty good odds.” That probably seems like an odd answer, so let me elaborate. I’m a pretty knowledgeable football fan and each year I watch preseason games and form opinions about what will unfold in the regular season. I play fantasy football, and tend to do pretty well at that, actually placing in the money more often than not. That, sort of by definition, makes me better than average (at least for the leagues that I’m in). And yet, I make really terrible predictions about what’s going to happen during the season.

At the beginning of this season, for instance, I predicted that the Bears were going to win their division (may have been something of a homer pick, but there it is). The Bears. The 5-11 Bears, who were outscored by the Packers something like 84-3 in the first half of a game and who have proceeded to fire everyone in their organization. I’m a knowledgeable football fan, and I predicted that the Bears would be playing in January. I predicted this, but I didn’t bet on it. And, I wouldn’t have bet even money on it. If you’d have said to me, “predict this year’s NFC North Division winner,” I would have asked what odds you were giving on the Bears, and might have thrown down a $25 bet if you were giving 4:1 odds. I would have said, when asked to place that bet, “not unless you’re giving me pretty good odds.”

Like football, software is a field in which I also consider myself pretty knowledgeable. And, like football, if you ask me to bet on some specific outcome six months from now, you’d better offer me pretty good odds to get me to play a sucker’s game like that. It’d be fun to say that to some PMP asking you to estimate how long it would take you to make “our next gen mobile app.” “So, ballpark, what are we talking? Three months? Five?” Just look at him deadpan and say, “I’ll bite on 5 months if you give me 7:2 odds.” When he asks you what on earth you mean, just patiently explain that your estimate is 5 months, but if you actually manage to hit that number, he has to pay you 3.5 times the price you originally agreed on (or 3.5 times your salary if you’re at a product/service company, or maybe bump your equity by 3.5 times if it’s a startup).

See, here’s the thing. That’s how Vegas handles SWAGs, and Vegas does a pretty good job of profiting from the predictions racket. They don’t say, “hey, why don’t you tell us who’s going to win the Super Bowl, Erik, and we’ll just adjust our entire plan accordingly.”

So, “no estimates?” Yeah, ideally. But the thing is, people are going to ask for them anyway, and it’s not always practical to engage in a stoic refusal. You could politely refuse and describe the Cone of Uncertainty. Or you could point out that measuring sprint velocity with Scrum and extrapolating sized stories in the backlog is more of an empirically based approach. But those things and others like them tend not to hit home when you’re confronted with wheedling stakeholders looking to justify budgets or plans for trade shows. So, maybe when they ask you for this kind of estimate, tell them that you’ll give them their estimate when they tell you who is going to win next year’s Super Bowl so that you can bet your life savings on their guaranteed pick. When they blink at you dubiously, smile, and say, “exactly.”

The “Synthesize the Experts” Anti-Pattern

I have a relatively uncomplicated thought for this Wednesday and, as such, it should be a reasonably sized post. I’d like to address what I consider to be perhaps the most relatable and understandable anti-pattern that I’ve encountered in this field, and I call it the “Synthesize the Experts” anti-pattern. I’m able to empathize so intensely with its commission because (1) I’ve done it and (2) the mentality that drives it is the polar opposite of the mentality that results in the Expert Beginner. It arises from seeking to better yourself in a vacuum of technical mentorship.

Let’s say that you hire on somewhere to write software for a small business and you’re pretty much a one-person show. From a skill development perspective, the worst thing you could possibly do is make it up as you go and assume that what you’re doing makes sense. In other words, the fact that you’re the least bad at software in your company doesn’t mean that you’re good at it or that you’re really delivering value to the business; it only means that it so happens that no one around can do a better job. Regardless of your skill level, developing software in a vacuum is going to hurt you because you’re not getting input and feedback and you’re learning only at the pace of evolutionary trial and error. If you assume that reaching some sort of equilibrium in this sense is mastery, you’re only compounding the problem.

But what if you’re aware of this limitation and you throw open the floodgates for external input? In other words, you’re alone and you know that better ways to do it exist, so you seek them out to the best of your ability. You read blog posts, follow prominent developers, watch videos from conferences, take online learning courses, etc. Absent a localized mentor or even partner, you seek unidirectional wisdom from outside sources. Certainly this is better than concluding that you’ve nailed it on your own, but there’s another problem brewing here.

Specifically, the risk you’re running is that you may be trying to synthesize from too many disparate and perhaps contradictory sources. Most of us go through primary and secondary education with a constant stream of textbooks that are more or less cut-and-dried matters of fact and canon. The more Algebra and Geometry textbooks you read and complete exercises for, the better you’re going to get at Algebra and Geometry. But when you apply this same approach to lessons gleaned from the movers and shakers in the technosphere, things start to get weird, quickly.

To put a specific hypothetical to it, imagine you’ve just read an impassioned treatise from one person on why it’s a complete violation of OOP to have “property bag” classes (classes that have only accessors and mutators for fields and no substantive methods). That seems to make sense, but you’re having trouble reconciling it with a staunch proponent of layered architecture who says that you should only communicate between layers using interfaces and/or “DTO” classes, which are property bags. And, hey, don’t web services do most of their stuff with property bags… or something? Ugh…

But why ugh? If you’re trying to synthesize these opinions, you’re probably saying, “ugh” because you’ve figured out that your feeble brain can’t reconcile these concepts. They seem contradictory to you, but the experts must have them figured out and you’re just failing to incorporate them all elegantly. So what do you do? Well, perhaps you define some elaborate way to get all of your checked in classes to have data and behavior and, at runtime to communicate, they use an elaborate reflection scheme to build and emit, on the fly, property bag DTO classes for communication between layers and to satisfy any web service consumers. Maybe they do this by writing a text file to disk, invoking a compiler on it and calling into the resulting compiled product at runtime. Proud of this solution, you take to a Q&A forum to show how you’ve reconciled all of this in the most elegant way imaginable, and you’re met with virtual heckling and shaming from resident experts. Ugh… You failed again to make everyone happy, and in the process you’ve written code that seems to violate all common sense.

The problem here is the underlying assumption that you can treat any industry software developer’s opinion on architecture, even those of widely respected developers, as canonical. Statements like, “you shouldn’t have property bags” or “application layers should only communicate with property bags” aren’t the Pythagorean Theorem; they’re more comparable to accomplished artists saying, “right triangles suck and you shouldn’t use them when painting still lifes.” Imagine trying to learn to be a great artist by rounding up some of the most opinionated, passionate, and disruptive artists in the industry and trying to mash all of their “how to make great, provocative art” advice into one giant chimera of an approach. It’d be a disaster.

I’m not attempting a fatalistic/relativistic “no approach is better than any other” argument. Rather I’m saying that there is healthy disagreement among those considered experts in the industry about the finer points of how best to approach creating software. Trying to synthesize their opinionated guidance on how to approach software into one, single approach will tie you in knots because they are not operating from a place of universal agreement. So when you read what they’re saying and ask their advice, don’t read for Gospel truth. Read instead to cherry pick ideas that make sense to you, resonate with you, and advance your understanding.

So don’t try to synthesize the experts and extract a meaning that is the sum of all of the parts. You’re not a vacuum cleaner trying to suck every last particle off of the carpet; you’re a shopper at a bazaar, collecting ideas that will complement one another and look good, put together and arranged, when you get home. But whatever you do, don’t stop shopping — everything goes to Expert-Beginner land when you say, “nah, I’m good with what’s in my house.”

How to Get Your Company to Stop Killing Cats

We humans are creatures of routine, and there’s some kind of emergent property of groups of humans that makes us, collectively, creatures of rut. This is the reason that corporations tend to have a life trajectory sort of similar to stars, albeit on a much shorter timeline. At various paces, they form up and then reach sort of a moderate, burning equilibrium like our Sol. Eventually, however, they bloat into massive giants, which is generally the beginning of their death cycle, and, eventually, they collapse in on themselves, either going supernova or drifting off into oblivion as burned out husks. If you don’t believe me, check out the biggest employers of the 1950s, which included such household names as “US Steel” and “Standard Oil.” And, it’s probably a pretty safe bet that in 2050, people will say things like, “oh yeah, Microsoft, I heard of them once when playing Space-Trivial-Pursuit” or “wow, Apple was a major company? That Apple? The one that sells cheap, used hologram machines?”  (For what it’s worth, I believe GE and some other stalwarts were on that list as well, so the dying off is common, though not universal.)

Yes, in the face of “adapt or die” large companies tend to opt for “die.” Though, “opt” may be a strong word in the sense of agency. It’s more “drift like a car idling toward a cliff a mile away, but inexplicably, no one is turning.” Now, before you get any ideas that I’m about to solve the problem of “how to stop bureaucratic mobs from making ill-advised decisions,” I’m not. I’m really just setting the stage for an observation at a slightly smaller scale.

I’ve worked for and with a number of companies, which has meant that I’ve not tended to be stationary enough to develop the kind of weird, insular thinking that creates the Dead Sea Effect (wow, that’s my third link-back to that article) or, more extremely, Lord of the Flies. I’m not one of the kids wearing war paint and chasing each other with spears, but rather the deus ex machina marine at the end who walks into the weird company and says, in disbelief, “what are you guys doing?” It’s not exactly that I have some kind of gift for “thinking outside the box” but rather that I’m not burdened with hive mind assumptions. If anything, I’m kind of like a kid that blurts out socially unacceptable things because he doesn’t know any better: “geez, Uncle Jerry, you talk funny and smell like cough medicine.”

You can’t just not kill cats.

What this series of context switches has led me to observe is that organizations make actual attempts to adapt, but at an extremely slow pace and desperately clinging to bad habits. This tends to create a repelling effect in a permanent resident but elicit a sad, knowing smile from a consultant or job hopper.  If you find yourself in this position, here’s a (slightly satirical) narrative that’s a pretty universal description of what happens when you start at (and eventually exit) a place that’s in need of some help.

You get a call from a shop or a person that says, “we need help writing software, can you come in and talk to us for an afternoon?” “Sure,” you say and you schedule some time to head over. When you get there, you notice (1) they have no computers and (2) it smells terrible. “Okay,” you say, hesitantly, “show me what you’re doing!” At this point, they lead you to a room with a state of the art, industrial grade furnace and they say, “well, this is where we round up stray cats and toss them in the furnace, but for some reason, it’s not resulting in software.” Hiding your horror and lying a little, you say, “yeah, I’ve seen this before — your big mistake here is that instead of killing cats, you want to write software. Just get a computer and a compiler and, you know, write the software.”

The next week you come in and the terrible smell is gone, replaced by a lot of yowling and thumping. You walk into the team room and discover a bunch of “software engineers” breathing heavily and running around trying to bludgeon cats with computers. “What are you doing,” you ask. “You’re not writing code or using the compiler — you’re still killing cats.” “Well,” the guy who called you in replies shamefacedly, “we definitely think you’re right about needing computers, and I know this isn’t exactly what you recommended, but you can’t just, like, not kill cats.” “Yes, you can just not kill cats,” you reply. “It’s easy. Just stop it. Here, right now. Hand me that computer, let’s plug it in, and start writing code.” Thinking you’ve made progress, you head out.

The next week, you return and there’s no thundering or yowling, and everyone is quietly sitting and coding. Your work here is done. You start putting together a retrospective and maintenance plans when all of a sudden, you hear the “whoosh” of the old oven and get a whiff of something horrible. Exasperated, you march in and demand to know what’s going on. “Oh, that — every time we finish a feature, we throw a bunch of cats in the furnace so that we can start on the next feature.” Now, at the end of your rope, you restrain the urge to say, “I’m pretty sure you’re just Hannibal Lecter,” opting instead for, “I think we’ve made some good progress here so I’m going to move on now.”  “Oh, we hate to see you go when you’re doing so much good,” comes the slightly wounded reply, “but we understand.  Just head over to maintenance and give them your parking garage pass along with this cat — they’ll know what to do with it.  And yes, mister smarty pants, they do really need the cat.”

As you leave, someone grabs you in the hallway for a hushed exchange.  “Haha, leaving us already, I see,” they say in a really loud voice.  Then, in a strained whisper, “take me with you — these people are crazy.  They’re killing cats for God’s sake!”  What you’ve come to understand during your stay with the company is that, while at first glance it seems like everyone is on board with the madness, in reality, very few people think it makes sense.  It just has an inexplicable, critical mass of some kind with an event horizon from which the company simply cannot escape.

What’s the score here?  What’s next?

So once you leave the company, what happens next?  Assuming this exit is that of a departing employee, the most likely outcome is that you move on and tell this story every now and then over beers, shuddering slightly as you do.  But, it might be that you leave and take the would-be escapees with you.  Maybe you start to consult for your now-former company or maybe you enter the market as a competitor of some kind.  No matter what you do, someone probably starts to compete with them and probably starts to win.  If the company does change, it’s unlikely to happen as a result of internal introspection and initiative and much more likely to happen in desperate response to crisis or else steered by some kind of external force like a consulting firm or a business partner.  Whatever the outcome, “we just worked really hard and righted the ship on our own” is probably not the vehicle.

If that company survives its own bizarre and twisted cat mangling process, it doesn’t survive it by taking incremental baby-steps from “cat murdering” to “writing software,” placating all involved at every step of the way that “nothing really needs to change that much.”  The success prerequisite, in fact, is that you need to change that much and more and in a hail of upheaval and organizational violence.  (People like to use the term “disruption” in this context, but I don’t think that goes far enough at  capturing the wailing, gnashing of teeth and stomping of feet that are required). In order to arrive at such a point of entrenched organizational failure, people have grown really comfortable doing really unproductive things for a really long time, which means that a lot of people need to get really uncomfortable really fast if things are going to improve.

I think the organizations that survive the “form->burn->bloat->die” star cycle are the ones that basically gather a consortium of hot burning small stars and make it look to the outside world that it’s a single, smoothly running star.  This is a bit of a strained metaphor, but what I mean is that organizations need to come up with ways to survive and even start to drive lurching, staggering upheaval that characterizes innovation.  They may seem calm to the outside world, but internally, they need malcontents and revolutionaries leaving the cat killing rooms and taking smart people with them.  It’s just that instead of leaving, they re-form and tackle the problem differently and better right up until the next group of revolutionaries comes along.  Oh, and somehow they need all of this upheaval to drive real added value and not just cause chaos.  Yeah, it’s not an easy problem.  If it were, corporations would have much better long term survival rates.

I don’t know the solution, but I do know that many established organizations are like drug addicts, hooked on a cocktail of continuity and mediocrity.  They know it isn’t good for them and that it’s even slowly killing them, and they seek interventions and hide their shame from those that try to help them, but it’s just so comfortable and so easy, and just a few more months like this, and look, man, this is just the way it is, so leave me alone about it, okay!?  If you’re working for or with a company struggling to change, understand this dynamic and game plan for drastic measures.  You’re going to need to come  up with a way to change the game, radically, and somehow not piss all involved parties off in the process.  If you want to stay, that is.  If not, there’s probably good money to be made in disrupting or competing with outfits that just want to keep burning those cats.

Dependency Injection or Inversion?

The hardest thing about being a software developer, for me, is coming up with names for things. I’ve worked out a system with which I’m sort of comfortable where, when coding, I pay attention to every namespace, type, method and variable name that I create, but in a time-box (subject to later revisiting, of course). So I think about naming things a lot and I’m usually in a state of thinking, “that’s a decent name, but I feel like it could be clearer.”

And so we arrive at the titular question. Why is it sometimes called “dependency injection” and at other times, “dependency inversion”? This is a question I’ve heard asked a lot and answered sometimes too, often with responses that make me wince. The answer to the question is that I’m playing a trick on you and repeating a question that’s flawed.

Dependency Injection and Dependency Inversion are two distinct concepts. The reason that I led into the post with the story about naming is that these two names seem fine in a vacuum but, used together, they seem to create a ‘collision,’ if you will. If I were wiping the slate clean, I’d probably give “dependency inversion” a slightly different name, though I hesitate to say it since a far more accomplished mind than my own gave it the name in the first place.

My aim here isn’t to publish the Nth post exhaustively explaining the difference between these two concepts, but rather to supply you with (hopefully) a memorable mnemonic. So, here goes. Dependency Injection == “Gimme it” and Dependency Inversion == “Someone take care of this for me, somehow.” I’ll explain a bit further.

Dependency Injection is a generally localized pattern of writing code (though it may be used extensively in a code base). In any given method or class (or module, if you want) rather than you going out and finding or making the things you need, you simply order your collaborators to “gimme it.”

So instead of this:
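(A minimal sketch to illustrate the idea; AtomicClock is just a hypothetical stand-in for a concrete collaborator that the method goes out and creates for itself.)

    using System;

    public class AtomicClock
    {
        public DateTime GetLocalTime()
        {
            return DateTime.Now;
        }
    }

    public class Reporter
    {
        public string GetTimeOfDay()
        {
            // The method takes responsibility for finding/making its own collaborator.
            var clock = new AtomicClock();
            return clock.GetLocalTime().ToString();
        }
    }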

You say, “nah, gimme it,” and do this instead:
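(Again, just a sketch: IThingThatTellsTime is a hypothetical interface, and the specifics matter less than the fact that the collaborator now gets handed in rather than created.)

    using System;

    public interface IThingThatTellsTime
    {
        DateTime GetLocalTime();
    }

    public class Reporter
    {
        // "Gimme it": the caller supplies the thing that tells time.
        public string GetTimeOfDay(IThingThatTellsTime clock)
        {
            return clock.GetLocalTime().ToString();
        }
    }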

You aren’t the one responsible for figuring out that time comes from atomic clocks which, in turn, come from atoms somehow. Not your problem. You say to your collaborators, “you want the time, Buddy? I’m gonna need a ThingThatTellsTime, and then it’s all yours.” (Usually you wouldn’t write this rather pointless method, but I wanted to keep the example as simple as humanly possible.)

Dependency Inversion is a different kind of tradeoff. To visualize it, don’t think of code just yet. Think of a boss yelling at a developer. Before the ‘inversion’ this would have been straightforward. “Developer! You, Bill! Write me a program that tells time!” and Bill scurries off to do it.

But that’s so pre-Agile. Let’s do some dependency inversion and look at how it changes. Now, boss says, “Help, someone, I need a program that tells time! I’m going to put a story in the product backlog” and, at some point later, the team says, “oh, there’s something in the backlog. Don’t know how it got there, exactly, but it’s top priority, so we’ll figure out the details and get it done.” The boss and the team don’t really need to know about each other directly, per se. They both depend on the abstraction of the software development process; boss has no idea which person writes the code or how, and the team doesn’t necessarily know or care who plopped the story in the backlog. And, furthermore, the backlog abstraction doesn’t depend on knowing who the boss is or the developers are or exactly what they’re doing, but those details do depend on the backlog.

Okay, so first of all, why did I do one example in code and the other in anecdote, when I could have also done a code example? I did it this way to drive home the subtle scope difference in the concepts. Dependency injection is a discrete, code-level tactic. Dependency inversion is more of an architectural strategy and way of structuring (decoupling) code bases.

And finally, what’s my (mild) beef with the naming? Well, dependency inversion seems a little misleading. Returning to the boss ordering Bill around, one would think a strict inversion of the relationship would be the stuff of inane sitcom fodder where, “aha! The boss has become the bossed! Bill is now in charge!” Boss and Bill’s relationship is inverted, right? Well, no, not so much — boss and Bill just have an interface slapped in between them and don’t deal with one another directly anymore. That’s more of an abstraction or some kind of go-between than an inversion.

There was certainly a reason for that name, though, in terms of historical context. What was being inverted wasn’t the relationship between the dependencies themselves, but the thinking (of the time) about object oriented programming. At the time, OOP was very much biased toward having objects construct their dependencies and those dependencies construct their dependencies, and so forth. These days, however, the name lives on even as that style of OOP is more or less dead this side of some aging and brutal legacy code bases.

Unfortunately, I don’t have a better name to propose for either one of these things — only my colloquial mnemonics that are pretty silly. So, if you’re ever at a user group or conference or something and you hear someone talking about the “gimme it” pattern or the “someone take care of this for me, somehow” approach to architecture, come over and introduce yourself to me, because there will be little doubt as to who is talking.

Have a Cigar

There are a few things that I think have done subtle but massive damage to the software development industry. The “software is like a building” metaphor comes to mind. Another is modeling the software workforce after the Industrial Age factory model, where the end goal seems to be turning knowledge work into 15 minute parcels that can be cranked out and billed in measured, brainless, assembly line fashion. (In fact, I find the whole concept of software “engineering” to be deeply weird, though I must cop to having picked out the title “software engineer” for people in a group I was managing because I knew it would be among the most marketable for them in their future endeavors.) Those two subtleties have done massive damage to software quality and to software development process quality, respectively, but today I’d like to talk about one that has done damage to our careers and our autonomy and that frankly I’m sick of.

The easiest way to give the phenomenon a title would be to call it “nerd stereotyping” and the easiest way to get you to understand quickly what I mean is to ask you to consider the idea that, historically, it’s always been deemed necessary to have “tech people,” “business people,” and “analysts” and “project managers” who are designated as ‘translators’ that can interpret “tech speak” for “normal people.” It’s not a wholly different metaphor from the 1800s having horses and people, with carriage drivers who could talk to the people but also manipulate the dumb, one-dimensional beasts into using their one impressive attribute, their strength, to do something useful. Sometimes this manipulation meant the carrot and other times the stick. See what I did there? It’s metaphor synergy FTW!

If you’re wondering at this point why there are no cigars involved in the metaphor, don’t worry — I’ll get to that later.

The Big Bang Theory and Other Nerd Caricatures

Last week, I was on a long weekend fishing trip with my dad and my girlfriend (make that fiancée, as of the second edit), and one night before bed, we popped the limited access cable on and were vegetating, watching what the limited selection allowed. My dad settled on the sitcom “The Big Bang Theory.” I’ve never watched this show because there have historically been about seven sitcoms that I’ve ever found watchable, and basically none of those have aired since I became an adult. It’s just not really my sense of humor, to be honest. But I’ve always suspected this one sitcom in particular of a specific transgression — the one about which I’m talking here. I’d never before seen the show, though, so I didn’t know for sure. Well, I know now.

In the two episodes I saw, the humor could best be summarized as, “it’s funny because that guy is so smart in one way but so dumb in another! It’s ironic and hi-larious!” The Sheldon character, who seems to be your prototypical low EQ/high IQ dweeb, decided in one episode to make every decision in life based on a dice roll, like some kind of programmer version of Harvey Dent. In another episode, he was completely unable to grasp the mechanics of haggling over price, repeatedly blurting out that he really wanted the thing even as his slightly less nerdy friend tried to play it cool. I don’t know what Sheldon does for a living, but I’ll wager he’s a programmer or mathematician or actuary or something. My money isn’t on “business analyst” or “customer support specialist” or “account manager.” But, hey, I bet that’d make for a great spin-off! Nerdy guy forced to do non-nerdy job — it’s funny because you wouldn’t expect it!

My intention here isn’t to dump on this sitcom, per se, and my apologies if it’s a favorite of yours and the characters are endearing to you. I’m really picky and hard to please when it comes to on-screen comedy (for example, I’d summarize “Everybody Loves Raymond” as “it’s funny because he’s a narcissistic, incompetent mama’s boy and she’s an insufferable harpy — hi-larious!”). So, if you’d prefer another example of this that I’ve seen in the past, consider the character on the show “Bones.” I tried watching that show for a season or two, but the main character was just absurd, notwithstanding the fact that the whole show was clearly set up to string you along, waiting for her to hook up with that FBI dude. But her whole vibe was, “I am highly intelligent and logical, but even searching my vast repository of situational knowledge and anthropological nuance, I cannot seem to deduce why there is moisture in and around your tear ducts after hearing that the woman who gave birth to you expired. Everyone expires, so it’s hardly remarkable.” She has an IQ of about 245 (and is also apparently a beautiful ninja) but hasn’t yet grokked the syllogism of “people cry when they’re sad and people are sad when their parents die.” This character and Sheldon and so many others are preposterous one-dimensional caricatures of human beings, and when people in mathy-sciency fields ham it up along with them, I’m kind of reminded of this blurb from the Onion from a long time ago.

But it goes beyond just playing to the audience. As a collective, we engineers, programmers, scientists, etc., embrace and exaggerate this persona for internal cred. Because my field is programming, I’ll speak to the programmer archetype: the lone hero and iconoclast, a socially inept hacker. If Hollywood and reductionist popular culture are to be believed, it is the mediocre members of our field who are capable of social lives, normal interactions and acting like decent human beings. But the really good programmers are a mashup of Sheldon and Gregory House — lone, misanthropic, socially maladjusted weirdos whose borderline personalities and hissy fits simply have to be endured in order to bask in their prodigious, savant-like intellects and to extract social value out of them. Sheldon may be ridiculous, but he’s also probably the only one that can stop hackers or something, just as House’s felonious, unethical behavior and flaunted drug addiction are tolerated at his hospital because he’s good at his job.

Attribute Point Shaving

As humans, we like to believe in what some probably refer to as justice. I’m not really one to delve into religion on this blog, but the concept of “hell” is probably the single biggest illustrator of what I mean. It gives us the ability to answer our children’s question: “Mommy, why didn’t that evil man go to jail like in the movies?” We can simply say, “Oh, don’t worry, they’ll go to an awful place with fire and snakes and stuff after they die.” See, problem solved. Cosmic scales rebalanced. Hell is like a metaphysical answer to the real universe’s “dark energy” — it makes the balance sheet go to zero.

But we believe this sort of thing on a microscale as well, particularly when it comes to intelligence. “She’s not book smart, but she’s street smart.” “He may be book smart, but he has low EQ.” “She’s good at math, so don’t expect her to read any classic literature.” At some basic level, we tend to believe that those with one form of favorable trait have to pay the piper by sucking at something else, and those who lack a favorable trait must be good at something else. After all, if they had nothing but good traits, the only way to sort that out would be to send them directly to hell. And this RPG-like (or Madden Football-like, if you prefer), zero-sum system of points allocation for individual skills is how we perceive the world. Average people have a 5 out of 10 in all attributes. But since “math geniuses” have a 10 out of 10 in “good at math,” they must have a 0 out of 10 in “going out on dates.” The scales must balance.

This sword weirdly cuts the other way too. Maybe I’m only a 6 out of 10 at math and I really wish I were a 9 out of 10. I could try to get better, but that’s hard. What’s a lot easier to do is act like a 2 out of 10 in “going out on dates” instead of a 5 out of 10. People will then assume those 3 points I’m giving up must go toward math or some other dorky pursuit. If I want to hit a perfect 10 out of 10, I can watch Star Trek and begin most of my sentences with “so.” That’s gotta hurt me in some social category or another, and now I’m a math genius. Think this concept of personality point-shaving is BS? Ask yourself if you can remember anyone in junior high trying to get out of honors classes and into the mainstream so as not to seem geeky. Why do that? Shaving smart points for “street smart” points.

If you’re Hollywood, this is the best thing ever for portraying smart people. It’s hard to convey “extremely high IQ” in the medium of television to the masses. I mean, you can have other characters routinely talk about their intellect, but that’s a little trite. So what do you do? You can have them spout lots of trivia or show them beating grandmasters at chess or something… or you can shave points from everything else they do. You can make them woefully, comically inept at everything else, but most especially any form of social interaction. So you make them insufferable, low-EQ, dysfunctional d-bags in order to really drive home that they have high IQs.

In the lines of work that I mentioned earlier, there’s natural pressure to point shave as a measure of status. I think that this hits a feedback loop and accelerates into weird monocultures and that having low scores in things like “not getting food on yourself while you eat” and “not looking at your feet while you talk” actually starts to up your cred in this weird, insular world. Some of us maybe grew up liking Star Trek while others who didn’t pretend to, since that shaves some points off of your social abilities. In turn, in the zero-sum game of personal attributes, it makes you a better STEM practitioner.

What’s the Harm?

So we might exaggerate our social awkwardness or affect some kind of speech impediment or write weird, cryptic code to augment the perception of our skills… so what? No big deal, right? And, yeah, maybe we go to work and delight in telling a bunch of suits that we don’t understand all of their BS talk about profits and other nonsense and to just leave us alone to write code. Awesome, right? In one fell swoop, we point shave for social grace in favor of intelligence and we also stick it to the man. Pretty sweet, right?

I guess, in the moment, maybe. But macroscopically, this is a disaster. And it’s a disaster that’s spawned an entire industry of people that collect larger salaries than a lot of middle managers and even some executives but have almost no real voice in any non-software-based organization. It’s a disaster that’s left us in charge of the software that operates stock exchanges, nuclear plants and spaceships, but apparently not qualified enough to talk directly to users or manage our own schedules and budgets without detailed status reports. Instead of emerging as self-sufficient, highly paid, autonomous knowledge workers like doctors and lawyers, we’re lorded over by whip-cracking, Gantt-chart-waving middle managers as if we were assembling widgets on the factory floor and doing it too slowly. And we’ve done it almost entirely voluntarily.

So what am I advocating, exactly? Simply that you refuse to buy into the notion that you’re just a “code slinger” and that all that “business stuff” is someone else’s problem. It’s not. It’s your problem. And it’s really not that hard if you pay attention. I’m not suggesting that you trade in your IDE for Microsoft Project and Visio, but I am suggesting that you spend a bit of time learning enough about the way business is conducted to speak intelligently. Understand how to make a business case for things. Understand the lingo that analysts and project managers use well enough to filter out all of the signaling-oriented buzzwords and grasp that they are communicating some ideas. Understand enough to listen, understand and critique those ideas. In short, understand enough to do away with this layer of ‘translators’ the world thinks that we need, reclaim some autonomy, and go from “slinging code” to solving problems with technology and being rewarded with freedom and appropriate compensation for doing so.

I’ll close with one last thought, hopefully to drive my point home. How many times (this is kind of programmer-specific) have people approached you and said something like, “let’s make an app; you write the code and get it into the app store and I’ll do, like, the business stuff.” And how many times, when you hear this, is it proposed that you run the show? And how many times is it proposed that you’ll do it for pay or for 49% equity or something? They had the idea, they’ll do business things, and you’re the code-monkey, who, you know, just makes the entire product.

Consider this lyric from the Pink Floyd:

Everybody else is just green, have you seen the chart?
It’s a helluva start, it could be made into a monster
If we all pull together as a team.

And did we tell you the name of the game, boy?
We call it Riding the Gravy Train.

It’s from a song called, “Have a Cigar,” and it spoke to the corporate record industry who essentially brokered its position to “team up” with musicians to control their careers and passively profiteer (from the cynical songwriter’s perspective, anyway — I’m not interested in debating the role of the record industry in creating 70’s rock stars since it’s pretty easy to argue that there wouldn’t be a whole lot of money for anyone without the record labels). “If we all pull together as a team,” is the height of irony in the song, the same way it is in the pitches you hear where the “idea guy” tells you that he’ll be the CEO, since it was his idea, and Bill will be in charge of marketing and Sue will be the CFO, and you can handle the small detail of writing the entire application that you’re going into business to make.

Is this heads-down, workhorse role worth having the most geek cred? I don’t think so, personally. And if you also don’t, I’d encourage you to get a little outside of your comfort zone and start managing your career, your talent and your intellectual property like a business. If we all do that — if we all stop with the point shaving — I think we can change the nature of the tech game.

Be Idiomatic

I have two or three drafts now that start with the meta of me apologizing for being sparse in my posting of late. TL;DR is that I’ve started a new, out of town engagement and between ramp-up, travel and transitioning off of prior commitments, I’ve been pretty bad at being a regular blogger lately. Also, special apologies to followers of the Chess TDD series, but my in-room wifi connection has just been brutal for the desktop (using the hotel’s little plugin converter), so it’s kind of hard to get one of those posts going. The good news is that what I’m going to be doing next involves a good bit of mentoring and coaching, which lends itself particularly well to vaguely instructional posts, so hopefully the drought won’t last. (For anyone interested in details of what I’m doing, check LinkedIn or hit me up via email/Twitter.)

Anywho, onto a brief post that I’ve had in drafts for a bit, waiting on completion. The subject is, as the title indicates, being “idiomatic.” In general parlance, idiomatic refers to the characteristic of speaking a language the way a native would. To best illustrate the difference between language correctness and being idiomatic, consider the expression, “go fly a kite.” If you were to say this to someone who had learned to speak flawless English in a classroom, that person would probably ask, “what kite, and why do you want me to fly it?” If you said this to an idiomatic USA English speaker (flawless or not), they’d understand that you were using a rather bland imprecation and essentially telling them to go away and leave you alone. And so we make a distinction between technically accurate language (syntax and semantics) and colloquially communicative language. The idiomatic speaker understands, well, the idioms of the local speech.

Applied to programming languages, the metaphor holds pretty well. It’s possible to write syntactically and semantically valid code (in the sense that the code compiles and does what the programmer intends at runtime) that isn’t idiomatic at all. I could offer all manner of examples, but I’ll offer the ones probably most broadly approachable to my reader base. Non-idiomatic Java would look like this:
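(A sketch of the idea rather than any particular canonical snippet: the only “sin” here is writing Java with C# conventions, namely Allman-style braces and PascalCase method names.)

    // Compiles and prints just fine, but reads like C# to a Java developer.
    public class Greeter
    {
        public static void main(String[] args)
        {
            PrintGreeting();
        }

        private static void PrintGreeting()
        {
            System.out.println("Hello, world!");
        }
    }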

And non-idiomatic C# would look like this:
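(The mirror image, sketched the same way: C# written with Java conventions, namely same-line braces and camelCase method names.)

    // Also compiles and prints just fine, but reads like Java to a C# developer.
    using System;

    public class Greeter {
        public static void Main(string[] args) {
            printGreeting();
        }

        private static void printGreeting() {
            Console.WriteLine("Hello, world!");
        }
    }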

In both cases, the code will compile, run, and print what you want to print, but in neither case are you speaking it the way the natives would likely speak it. Do this in either case, and you’ll be lucky if you’re just laughed at.

Now you may think, because this simple example touches off a bike-shedding holy war about where to put curly braces, that I’m advocating for rigid conformance to coding standards or something. That’s not my point at all. My point more goes along the lines of “When in Rome, speak like the Romans do.” Don’t walk in and say, “hey Buddy, give me an EX-PRESS-O!” Don’t be bad at speaking the way the community does and proud of it.

Here is a much more subtle and iconic example of what I’m talking about. Do you ever see people in C# do this?
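(Roughly this, with the parameter type as a throwaway placeholder.)

    using System;

    public class Processor
    {
        public void Process(object x)
        {
            // The constant goes on the left-hand side of the comparison.
            if (null == x)
            {
                throw new ArgumentNullException("x");
            }

            // ... carry on and do something with x ...
        }
    }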

I once referred to this as “The Yoda” in an old blog post. “If null is x, null argument exception you throw.” Anyone know why people do this, without cheating? Any guesses?

If you said, “it’s an old C/C++ trick to prevent confusing assignment and comparison,” you’d be right. In C and C++ you could do things like if(x = null), and the compiler would happily assign null to x and then treat the result as the condition, which silently does the wrong thing (always false with null on the right, always true with any non-zero value) no matter what x had been previously. Intuitive, right? Well, no, not at all, and so C/C++ programmers got into the habit of yoda-ing to prevent a typo from compiling and doing something weird and unexpected.

And, some of them carry that habit right on through to other languages like C#. But the problem is, in C#, it’s pure cargo cult weirdness. If you make that typo in C#, the compiler just barfs. if(x = null) is not valid C#. So the C++ life-hack is pointless and serves only to confuse C# programmers, as evidenced by questions like this (and, for the record, I did not get the idea for my bloopers post name from Daniel’s answer, even though it predates the post — GMTA, I guess).

So, if you’re doing this, the only upside is that you don’t have to change your way of doing things in spite of the fact that you’re using a completely different programming language. But the downside is that you needlessly confuse people. I suspect this and many other instances are a weird, passive-aggressive form of signaling. People who do this might be saying, “I’m a C++ hacker at heart and you can take my pointers, but you’ll never take MY CONVENTIONS!!!” And maybe the dyed-in-the-wool Java guys write C# with curly brackets on the same line and vice-versa with C# guys in Java. It’s the geekiest form of passive (aggressive) resistance ever. Or, maybe it’s just habit.

But whatever it is, my advice is to knock it off. Embrace any language you find yourself in and pride yourself on how quickly you can become idiomatic and rid yourself of your “accents” from other languages. By really seeking out the conversion, you don’t just appear more versed more quickly; I’d argue that you actually become more versed more quickly. For instance, if you look around and see that no one in C# uses The Yoda, it might cause you to hit Google to figure out why not. And then you learn a language difference.

And there’s plenty more where that came from. Why do people use more enums in language X than in language Y? Why do people do that “throws Blah” at the end of a Java method declaration? What’s the thing with the {} after a class instantiation in C#? Why is it okay in JavaScript to assign an integer to “asdf”? If you’re new to a language and, instead of asking questions like those, you say, “these guys are stupid; I’m not doing that,” you’re balling up an opportunity to learn about language features and differences and throwing it in the trash. It’ll mean getting further out of your comfort zone faster, but in the end, you’ll be a lot better at your craft.

The Secret to Avoiding Paralysis by Analysis

A while ago Scott Hanselman wrote a post about “paralysis by analysis” which I found to be an interesting read.  His blog is a treasure trove of not only technical information but also posts that humanize the developer experience, such as one of my all-time favorites.  In this particular post, he quoted a Stack Overflow user who said:

Lately, I’ve been noticing that the more experience I gain, the longer it takes me to complete projects, or certain tasks in a project. I’m not going senile yet. It’s just that I’ve seen so many different ways in which things can go wrong. And the potential pitfalls and gotchas that I know about and remember are just getting more and more.

Trivial example: it used to be just “okay, write a file here”. Now I’m worrying about permissions, locking, concurrency, atomic operations, indirection/frameworks, different file systems, number of files in a directory, predictable temp file names, the quality of randomness in my PRNG, power shortages in the middle of any operation, an understandable API for what I’m doing, proper documentation, etc etc etc.

Scott’s take on this is the following:

This really hit me because THIS IS ME. I was wondering recently if it was age-related, but I’m just not that old to be senile. It’s too much experience combined with overthinking. I have more experience than many, but clearly not enough to keep me from suffering from Analysis Paralysis.

(emphasis his)

Paralysis by Lofty Expectations

The thing that stuck out to me most about this post was reading Scott say, “THIS IS ME.” When I read the post about being a phony and so many other posts of his, I thought to myself, “THIS IS ME.” In reading this one, however, I thought to myself, “wow, fortunately, that’s really not me, although it easily could be.” I’ll come back to that.

Scott goes on to say that he combats this tendency largely through pairing and essentially relying on others to keep him more grounded in the task at hand. He says that, ironically, he’s able to help others do the same. With multiple minds at work, they’re able to reassure one another that they might be gold plating and worrying about too much at once. It’s a sanity check of sorts. At the end of the post, he invites readers to comment about how they avoid Paralysis by Analysis.

For me to answer this, I’d like to take a dime store psychology stab at why people might feel this pressure as they move along in their careers in the first place — pressure to “[worry] about permissions, locking, concurrency, atomic operations, indirection/frameworks, different file systems, number of files in a directory, predictable temp file names, the quality of randomness in my PRNG, power shortages in the middle of any operation, an understandable API for what I’m doing, proper documentation, etc etc etc.” Why was it so simple when you started out, but now it’s so complicated?


I’d say it’s a matter not so much of diligence as of aversion to sharpshooting. What I mean is, I don’t think that people during their careers magically acquire some sort of burning need to make everything perfect if that didn’t exist from the beginning; I don’t think you grow into perfectionism. I think what actually happens is that you grow worried about the expectations of those around you. When you’re a programming neophyte, you’ll proudly announce that you successfully figured out how to write a file to disk, and you’d imagine the reaction of your peers to be, “wow, good work figuring that out on your own!” When you’re 10 years in, you’ll announce that you wrote a file to disk and fear that someone will say, “what kind of amateur with 10 years of experience doesn’t guarantee atomicity in a file-write?”

The paralysis by analysis, I think, results from the opinion that every design decision you make should be utterly unimpeachable or else you’ll be exposed as a fraud. You fret that a maintenance programmer will come along and say, “wow, that guy sure sucks,” or that a bug will emerge in some kind of odd edge case and people will think, “how could he let that happen?!” This is what I mean about aversion to sharpshooting. It may even be personal sharpshooting and internal expectations, but I don’t think the paralysis by analysis arises from a proactive desire to do a good job; it arises from a reactive fear of doing a bad job.

(Please note: I have no idea whether this is true of Scott, the original Stack Overflow poster, or anyone else individually; I’m just speculating about a general phenomenon that I have observed.)

Regaining Your Movement

So, why doesn’t this happen to me? And how might you avoid it? Well, my hope is that the answer to the first question is also the answer to the second question for you. This doesn’t happen to me for two reasons:

  1. I pride myself not on what value I’ve already added, but what value I can quickly add from here forward.
  2. I make it a point of pride that I only solve problems when they become actual problems (sort of like YAGNI, but not exactly).

Let’s consider the first point as it pertains to the SO poster’s example. Someone tells me that they need an application that, among other things, dumps a file to disk. So, I spend a few minutes calling File.Create() and, hey, look at that — a file is written! Now, if someone comes to me and says, “Erik, this is awful because whenever there are two running processes, one of them crashes,” my thought at this point isn’t, “what kind of programmer am I that I wrote this code that has this problem when someone might have been able to foresee this?!?” It’s, “oh, I guess that makes sense — I can definitely fix it pretty quickly.” Expanding to a broader and perhaps less obtuse scope, I don’t worry about the fact that I really don’t think of half of that stuff when dumping something to a file. I feel that I add value as a technologist since even if I don’t know what a random number generator has to do with writing files, I’ll figure it out pretty quickly if I have to. My ability to know what to do next is what sells.
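To make that progression concrete, here’s a minimal C# sketch of “just write the file” followed by the fix you write once two processes actually do collide. The names, the retry count, and the delay are all my own illustrative choices, not anything from the original anecdote:

```csharp
using System;
using System.IO;
using System.Threading;

public static class ReportWriter
{
    // Version 1: the few-minutes solution. Write the file and move on.
    // Perfectly adequate until concurrency is a real, observed problem.
    public static void WriteReport(string path, string contents)
    {
        File.WriteAllText(path, contents);
    }

    // Version 2: added only after two processes actually collided. Open the
    // file exclusively and retry briefly if another writer has it locked.
    public static void WriteReportSafely(string path, string contents)
    {
        for (var attempt = 0; attempt < 5; attempt++)
        {
            try
            {
                using (var stream = new FileStream(
                    path, FileMode.Create, FileAccess.Write, FileShare.None))
                using (var writer = new StreamWriter(stream))
                {
                    writer.Write(contents);
                    return;
                }
            }
            catch (IOException) when (attempt < 4)
            {
                Thread.Sleep(100); // another process has the file; wait and retry
            }
        }
    }
}
```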

For the second point, let’s consider the same situation slightly differently. I write a file to disk and I don’t think about concurrent access or what on Earth random number generation has to do with what I’m doing. Now if someone offers me the same, “Erik, this is awful because whenever there are two running processes…” I also might respond by saying, “sure, because that’s never been a problem until this moment, but hey, let’s solve it.” This is something I often try to impress upon less experienced developers, particularly about performance. And I’m not alone. I counsel them that performance isn’t an issue until it is — write code that’s clean, clear, and concise and that gets the job done. If at some point users want/need it to be faster, solve that problem then.

This isn’t YAGNI, per se, which is a general philosophy that counsels against writing abstractions and other forms of gold plating because you think that you’ll be glad you did later when they’re needed. What I’m talking about here is more on par with the philosophy that drives TDD. You can only solve one problem at a time when you get granular enough. So pick a problem and solve it while not causing regressions. Once it’s solved, move on to the next. Keep doing this until the software satisfies all current requirements. If a new one comes up later, address it the same as all previous ones — one at a time, as needed. At any given time, all problems related to the code base are either problems that you’ve already solved or problems on a todo list for prioritization and execution. There’s nothing wrong with you or the code if the software doesn’t address X; it simply has yet to be enough of a priority for you to do it. You’ll get to it later and do it well.

There’s a motivational expression that comes to mind about a journey of a thousand miles beginning with a single step (though I’m really more of a despair.com guy, myself). There’s no real advantage in standing still and thinking about how many millions of steps you’re going to need to take. Pick a comfortable pair of shoes, grab some provisions, and go. As long as you pride yourself in the ability to make sure your next step is a good one, you’ll get to where you need to be sooner or later.

By

Visualization Mnemonics for Software Principles

Whether it’s because you want to be able to participate in software engineering discussions without having to surreptitiously look things up on your phone, or whether it’s because you have an interview coming up with a firm that wants you to be some kind of expert in OOP or something, you probably have at least some desire to be knowledgeable about development terms. This is probably doubly true of you since, ipso facto, you read blogs about software.

Toward that end, I’m writing this post. My goal is to provide you with a series of somewhat vivid ways to remember software concepts so that you’ll have a fighting chance at remembering what they’re about sometime later. I’m going to do this by telling a series of stories. So, I’ll get right to it.

Law of Demeter

Last week I was on a driving trip and I stopped by a gas station to get myself a Mountain Dew for the sake of road alertness. I grabbed the soda from the cooler and plopped it down on the counter, prompting the clerk to say, “that’ll be $1.95.” At this point, naturally, I removed my pants and the guy started screaming at me about police and indecent exposure. Confused, I said, “look, I’m just trying to pay you — I’ll hand you my pants and you go rummaging around in my pockets until you find my wallet, which you’ll take out and go looking through for cash. If I’m due change, put it back into the wallet, unless it’s a coin, and then just put it in my pocket, and give me back the pants.” He pulled a shotgun out from behind the counter and told me that in his store, people obey the Law of Demeter or else.


So what does the Law of Demeter say? Well, anecdotally, it says “give collaborators exactly what they’re asking for and don’t give them something they’ll have to go picking through to get what they want.” There’s a reason we don’t hand the clerk our pants (or even our wallet) at the store and just hand them money instead; it’s inappropriate to send them hunting for the money. The Law of Demeter encourages you to think this way about your code. Don’t return Pants and force clients of your method to get what they want by invoking Pants.Pockets[1].Wallet.Money — just give them a Money. And, if you’re the clerk, don’t accept someone handing you a Pants and you going to look for the money — demand the money or show them your shotgun.
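In code, the same conversation might look something like this minimal C# sketch, with hypothetical types chosen to mirror the story:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical types, purely for illustration.
public class Wallet { public decimal Money { get; set; } }
public class Pocket { public Wallet Wallet { get; set; } }
public class Pants { public List<Pocket> Pockets { get; } = new List<Pocket>(); }

public class Customer
{
    public Pants Pants { get; set; }

    // Law-of-Demeter friendly: the customer digs through his own pockets
    // and hands over exactly what was asked for.
    public decimal GetPayment(decimal amountDue)
    {
        var wallet = Pants.Pockets[1].Wallet;
        wallet.Money -= amountDue;
        return amountDue;
    }
}

public class Clerk
{
    // Violation: the clerk goes rummaging through the customer's pants.
    public void CheckoutBadly(Customer customer, decimal price)
    {
        customer.Pants.Pockets[1].Wallet.Money -= price;
    }

    // Compliance: ask the collaborator directly for what you need.
    public void Checkout(Customer customer, decimal price)
    {
        decimal payment = customer.GetPayment(price);
        Console.WriteLine($"Received {payment:C}.");
    }
}
```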

Single Responsibility Principle

My girlfriend and I recently bought an investment property a couple of hours away. It’s a little house on a lake that was built in the 1950’s and, while cozy and pleasant, it doesn’t have all of the modern amenities that I might want, resulting in a series of home improvement projects to re-tile floors, build some things out and knock some things down. That kind of stuff.

One such project was installing a garbage disposal, which has two components: plumbing and electrical. The plumbing part is pretty straightforward in that you just need to remove the existing drain pipe and insert the disposal between the drain and the drainage pipe. The electrical is a little more interesting in that you need to run wiring from a switch to the disposal so that you can turn it on and off. Now, naturally, I didn’t want to go to all the hubbub of creating a whole different switch, so I decided just to use one that was already there. The front patio light switch had the responsibility for turning the front patio light on and off, but I added a little to its burden, asking it also to control the garbage disposal.

That’s worked pretty well. So far the only mishap occurred when I was rinsing off some dishes and dropped a spoon in the drain while, at the same time, my girlfriend turned the front light on for visitors we were expecting. Luckily, I had only a minor scrape and a mangled spoon, and that’s a small price to pay to avoid creating a whole new light switch. And really, what’s the worst that could happen?

Well, I think you know the worst thing that could happen is that someone loses a hand due to this absurd design. That’s what happens when you run afoul of the Single Responsibility Principle, which could loosely be described as saying “do one thing only and do that thing well” or “have only one reason to change.” In my house, we have two reasons to change the state of the switch: turning on the disposal and turning on the light, and this creates an obvious problem. The parallel situation exists in code as well. If you have a class that needs to be changed whenever schema updates occur and whenever GUI changes occur, then you have a class that serves two masters and the possibility for changes to one thing to affect the other. Disk space is cheap and classes/namespaces/modules are renewable resources. When in doubt, create another one.
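Here’s a minimal C# sketch of that idea, with hypothetical class names: the first class is the overloaded light switch, answering to two masters, and the split gives each reason to change its own home:

```csharp
// Hypothetical names; the point is the two "masters."
public class Customer { public string Name { get; set; } }

// One class, two reasons to change: schema updates AND GUI changes.
public class CustomerScreen
{
    public void SaveToDatabase(Customer customer) { /* SQL lives here */ }
    public void Render(Customer customer) { /* markup lives here */ }
}

// Split apart, a schema change and a layout change each touch only one class.
public class CustomerRepository
{
    public void Save(Customer customer) { /* SQL lives here */ }
}

public class CustomerView
{
    public void Render(Customer customer) { /* markup lives here */ }
}
```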

Open/Closed Principle

I don’t have a ton of time for TV these days, and that’s mainly because TV is so time consuming. It was a lot simpler when I just had a TV that got an analog signal over the air. But then, things went digital, so I had to take apart my TV and rewire it to handle digital signals. Next, we got cable and, of course, there I am, disassembling the TV again so that we can wire it up to get a cable signal. The worst part of that was that when I became furious with the cable provider and we switched to Dish, I was right back to work on the TV. Now, we have a Nintendo Wii, a DVD player, and a Roku, but who has the time to take the television apart and rewire it to handle each of these additional items? And if that weren’t bad enough, I tried hooking up an old school Sega Genesis last year, and my Dish stopped working.

… said no one, ever. And the reason no one has ever said this is that televisions that you purchase follow the Open/Closed Principle, which basically says that you should create components that are closed to modification, but open for extension. Televisions you purchased aren’t made to be disassembled by you, and certainly no one expects you to hack into the guts of the TV just to plug some device into it. That’s what the Coax/RCA/Component/HDMI/etc feeds are for. With the inputs and the sealed-under-warranty case, your television is open for extension, but closed for modification. You can extend its functionality by plugging anything you like into it, including things not even out yet, like an X-Box 12 or something. Follow this same concept for flexible code. When you write code, you strive to maximize flexibility by facilitating maintenance via extension and new code. If you program to interfaces or allow overriding of behavior via inheritance, life is a lot easier when it comes time to change functionality. So favor that over writing some juggernaut class that you have to go in and modify literally every sprint. That’s icky, and you’ll learn to hate that class and the design around it the same way you’d hate the television I just described.
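A minimal C# sketch of that television, with illustrative names of my own, might look like this:

```csharp
using System;

// The "television": closed for modification, open for extension.
public interface IInputDevice
{
    string GetSignal();
}

public class Television
{
    // Nobody cracks the case open to add a device; anything implementing
    // the input interface just plugs in.
    public void Display(IInputDevice device) =>
        Console.WriteLine($"Now showing: {device.GetSignal()}");
}

public class Roku : IInputDevice { public string GetSignal() => "a stream"; }
public class SegaGenesis : IInputDevice { public string GetSignal() => "Sonic"; }
// Next year's console implements IInputDevice too -- no rewiring required.
```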

Liskov Substitution Principle

I’m someone who usually eats a pretty unremarkable salad with dinner. You know, standard stuff: lettuce, tomatoes, croutons, scallions, carrots, and hemlock. One thing that I seem to do differently than most, however, is that I examine each individual item in the salad to see whether or not it will kill me before I put it into my mouth (a lot of other salad consumers seem to play pretty fast and loose with their lives, sheesh). I have a pretty simple algorithm for this. If the item is not hemlock, I eat it. If it is hemlock, I put it onto my plate to throw out later. I highly recommend eating your hemlock salad this way.

Or, you could bear in mind the Liskov Substitution Principle, which basically says that if you’re going to have an inheritance relationship, then derived types should be seamlessly swappable for their base type. So, if I have a salad full of Edibles, I shouldn’t have some derived type, Hemlock, that doesn’t behave the way other Edibles do. Another way to think of this is that if you have a heterogeneous collection of things in an inheritance hierarchy, you shouldn’t go through them one by one and say, “let’s see which type this is and treat it specially.” So, obey the LSP and don’t make hemlock salads for people. You’ll have cleaner code and avoid jail.
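In code, the hemlock salad might look something like this minimal C# sketch (the types are hypothetical):

```csharp
using System;
using System.Collections.Generic;

public abstract class Edible
{
    public abstract void Eat();
}

public class Lettuce : Edible
{
    public override void Eat() { /* chew contentedly */ }
}

public class Hemlock : Edible
{
    // The violation: a derived type that cannot stand in for its base.
    public override void Eat() => throw new InvalidOperationException("Poison!");
}

public static class SaladEater
{
    // The smell that follows: callers stop trusting the abstraction and
    // start inspecting each item's concrete type.
    public static void EatSalad(IEnumerable<Edible> salad)
    {
        foreach (var item in salad)
        {
            if (item is Hemlock) continue; // special-casing a subtype
            item.Eat();
        }
    }
}
```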

Interface Segregation Principle

Thank goodness for web page caching — it’s a lifesaver. Whenever I go to my favorite dictionary site, expertbeginnerdictionary.com (not a real site if you were thinking of trying it), it prompts me for a word to look up and, when I type in the word and hit enter, it sends me the dictionary over HTTP, at which time I can search the page text with Ctrl-F to find my word. It takes such a long time for my browser to load the entire English dictionary that I’d be really up a creek without page caching. The only trouble is, whenever a word changes and the cache is invalidated, my next lookup takes forever while the browser re-downloads the dictionary. If only there were a better way…

… and there is. Don’t give me the entire dictionary when I want to look up a word. Just give me that word. If I want to know what “zebra” means, I don’t care what “aardvark” means, and my zebra lookup experience shouldn’t be affected and put at risk by changes to “aardvark.” I should only be depending on the words and definitions that I actually use, rather than the entire dictionary. Likewise, if you’re defining public interfaces in your code for clients, break them into minimum composable segments and let your clients assemble them as needed, rather than forcing the kitchen sink (or dictionary) on them.  The Interface Segregation Principle says that clients of an interface shouldn’t be forced to depend on methods that they don’t use because of the excess, pointless baggage that comes along.  Give clients the minimum that they need.
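A minimal C# sketch of that segregation, using hypothetical interface names, might look like this:

```csharp
using System.Collections.Generic;

// The kitchen-sink interface: every client drags in the whole dictionary.
public interface IDictionarySite
{
    string Define(string word);
    void AddWord(string word, string definition);
    void RemoveWord(string word);
    IReadOnlyList<string> AllWords();
}

// Segregated: a client that only looks words up depends only on lookup
// and is untouched when the editing side changes.
public interface IWordLookup
{
    string Define(string word);
}

public interface IDictionaryEditor
{
    void AddWord(string word, string definition);
    void RemoveWord(string word);
}
```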

Dependency Inversion Principle

Have you ever been to an automobile factory?  It’s amazing to watch how these things are made.  They start with a car, and the car assembles its own engine, seats, steering wheel, etc.  It’s pretty amazing to watch.  And, for a real treat, you can watch these sub-parts assemble their own internals.  The engine builds its own alternator, battery, transmission, etc — a breathtaking feat of engineering.  Of course, there’s a downside to everything, and, as cool as this is, it can be frustrating that the people in the factory have no control over what kind of engine the car builds for itself.  All they can do is say, “I want a car” and the car does the rest.

I bet you can picture the code base I’m describing.  A long time ago, I went into detail about this piece of imagery, but I’ll summarize by saying that this is “command and control” programming where constructors of objects instantiate all of the object’s dependencies — FooService instantiates its own logger.  This runs afoul of the Dependency Inversion Principle, which holds that high level modules, like Car, should not depend directly on lower level modules, like Engine, but rather that both should depend on an abstraction of the Car-Engine interaction.  This allows the car and the engine to vary independently, meaning that our automobile factory workers actually could have control over which engines go in which cars.  And, as described in the linked post, a code base making heavy use of the Dependency Inversion Principle tends to be composable whereas a command and control style code base is not, favoring instead the “car, build thyself” approach.  So, to remember and understand the Dependency Inversion Principle, ask yourself who should control what parts go in your car — the people building the car, or the car itself?  Only one of those ideas is preposterous.
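A minimal C# sketch of the two styles, with names of my own choosing, might look something like this:

```csharp
// Command and control: the high-level car news up its own low-level engine,
// so nobody on the factory floor gets a say in what goes in it.
public class SelfAssemblingCar
{
    private readonly V6Engine _engine = new V6Engine();
    public void Drive() => _engine.Start();
}

// Inverted: Car and engines both depend on the IEngine abstraction,
// and whoever assembles the car decides which engine it gets.
public interface IEngine { void Start(); }
public class V6Engine : IEngine { public void Start() { } }
public class ElectricMotor : IEngine { public void Start() { } }

public class Car
{
    private readonly IEngine _engine;
    public Car(IEngine engine) { _engine = engine; }
    public void Drive() => _engine.Start();
}

// Usage: var commuter = new Car(new ElectricMotor()); // the assembler chooses
```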

Incidentally, five of the principles here compose the SOLID Principles. If you’d like a much deeper dive, check out the Pluralsight course on the topic.
