DaedTech

Stories about Software

Delegating is Not Just for Managers

I remember most the tiredness that would come and stick around through the next day. After late nights where the effort had been successful, the tiredness was kind of a companion that had accompanied me through battle. After late nights of futility, it was a taunting adversary that wouldn’t go away. But whatever came, there was always tiredness.

I have a personality quirk that probably explains whatever success I’ve enjoyed as well as my frequent tiredness. I am a relentless DIY-er and inveterate tinkerer. In my life I’ve figured out and become serviceable at things ranging from home improvement to cooking to technology. This relentless quest toward complete understanding back to first principles has given me a lot of knowledge, practice, and drive; staying up late re-assembling a garbage disposal when others might have called a handyman is the sort of behavior that’s helped me advance myself and my career. On a long timeline, I’ll figure the problem out, whatever it is, out of a stubborn refusal to be defeated and/or a desire to know and understand more.

Delegating

And so, throughout my career, I’ve labored on things long after I should have gone to bed. I’ve gotten 3 hours of sleep because I refused to go to bed before hacking some Linux driver to work with a wireless networking USB dongle that I had. I’ve stayed up late doing passion projects, tracking down bugs, and everything in between. And wheels, oh, how I’ve re-invented them. It’s not so much that I suffered from “Not Invented Here” syndrome, but that I wanted the practice, satisfaction, and knowledge that accompanied doing it myself. I did these things for the same reason that I learned to cook or fix things around the house: I could pay someone else, but why do that when I’m sure I could figure it out myself?

Are You Changing the Rules of the Road?

Happy Friday, all.  A while back, I announced some changes to the blog, including a partnership with Infragistics, who sponsors me.  Part of my arrangement with them and with a few other outfits (stay tuned for those announcements) is that I now write blog posts for them.  Between writing posts for this blog, writing posts for those blogs, and now writing a book, I’m doing a lot of writing.  So instead of writing Friday’s post late Thursday evening, I’m going to do some work on my book instead and link you to one of my Infragistics posts.

The title is, “Are You Changing the Rules of the Road?”  Please go check it out.  Because they didn’t initially have my headshot and bio, it’s posted under the account “DevToolsGuy,” but it’s clearly me, right down to one of Amanda’s signature drawings there in the post.  I may do this here and there going forward to free up a bit of my time to work on the book.  But wherever the posts reside, they’re still me, and they’re still me writing for the same audience that I always do.

 

 


What Story Does Your Code Tell?

I’ve found that as the timeline of my life becomes longer, my capacity for surprise at my situation diminishes. And so my recent combination of types of work and engagements, rather than being strange in any way to me, is simply ammo for genuineness when I offer up the cliche, “variety is the spice of life.” Of late, I’ve been reviewing a lot of code in a coaching capacity as well as creating and giving workshops on storytelling and creative writing. And given how much practice I’ve had over the last several years at multi-purposing my work, I’m quite vigilant for opportunities to merge storytelling and software advice. This post is one such opportunity, if a small one.

A little under a year ago, I offered up a post in which I suggested some visualization mnemonics to help make important software design principles more memorable. It was a relatively popular post, so I assume that people found it helpful. And the reason, I believe, that people found it helpful is that stories engage your brain far more than simple conveyance of information. When you read a white-paper explaining the Law of Demeter, the part of your brain that processes natural language activates and decodes the words. But when I tell you a story about a customer in a convenience store removing his pants to pay for a soda, your brain processes this text as if it were experiencing the event. Stories really engage the brain.

One of the most difficult aspects of writing code is finding ways to build abstractions and make your code readable so that others (or you, months later) can read the code as easily as prose. The idea is that code is read far more often than it is written or modified, so readability is important. But it isn’t just that the code should be readable — it should be understandable and, in some way, even memorable. Usually, understandability is achieved through simplicity and crisp, clear abstractions. Memorability, if achieved at all, is usually created via the Principle of Least Surprise. It’s a cheat — your code is memorable not because it captivates the reader, but because the reader knows that mapping it onto what she’s used to will probably work. (Of course, I recognize that atrocious code will be memorable in the vivid, conversational sense, but I’m talking about it being memorable in terms of its function and exact behavior.)

It’s therefore worth asking what story your code is telling. Look at this code. What story is it telling?

10x Developer, Reconsidered

Unless my memory and a quick search of my blog are both mistaken, I’ve never tackled the controversial concept of a “10x developer.” For those who aren’t aware, this is the idea that some programmers are order(s) of magnitude more productive than others, with the exact figure varying. I believe that studies of varying statistical rigor have been suggestive of this possibility for decades and that Steve McConnell wrote about it in the iconic book, “Code Complete.” But wherever, exactly, it came from, the sound bite that dominates our industry is the “10x Developer” — the outlier developer archetype so productive as to be called a “rock star.” This seems to drive an endless debate cycle in the community as to whether or not it’s a myth that some of us are 10 or more times as awesome as others. You can read some salient points and a whole ton of interesting links from this jumping-off Stack Exchange question.

I’ll Take a Crack at Defining Tech Debt

Not too long ago, on one of my Chess TDD posts, someone commented asking me to define technical debt because I had tossed the term out while narrating a video in that series. I’ll speak to it a bit because it’s an extremely elegant metaphor, particularly in the context of clarifying things to business stakeholders. In 1992, Ward Cunningham coined the term in a report that he wrote:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

Everyone with an adult comprehension of finance can understand the concept of fiscal debt and interest. Whether it was a bad experience with one’s first credit card or a car or home loan, people understand that someone loaning you a big chunk of money you don’t have comes at a cost… and not just the cost of paying back the loan. The greater the sum and the longer you take to pay it back, the more it costs you. If you let it get out of hand, you can fall into a situation where you’ll never be capable of paying it back and are just struggling to keep up with the interest.

Likewise, just about every developer can understand the idea of a piece of software getting out of hand. The experience of hacking things together, prototyping and creating skunk-works implementations is generally a fun and heady one, but the idea of living for months or years with the resulting code is rather horrifying. Just about every developer I know has some variant of a story where they put something “quick and dirty” together to prove a concept and were later ordered, to their horror, to ship it by some project or line manager somewhere.

The two parallel concepts make for a rather ingenious metaphor, particularly when you consider that the kinds of people for whom we employ metaphors are typically financially savvy (“business people”). They don’t understand “we’re delivering features more slowly because of the massive amount of nasty global state in the system” but they do understand “we made some quality sacrifices up front on the project in order to go faster, and now we’re paying the ‘interest’ on those, so we either need to suck it up and fix it or keep going slowly.”

Since the original coining of this term, there’s been a good bit of variance in what people mean by it. Some people mean the generalized concept of “things that aren’t good in the code” while others look at specifics like refactoring-oriented requirements/stories that are about improving the health of the code base. There’s even a small group of people that take the metaphor really, really far and start trying to introduce complex financial concepts like credit swaps and securities portfolios to the conversation (it always seems to me that these folks have an enviable combination of lively imaginations and surplus spare time).

I can’t offer you any kind of official definition, but I can offer my own take. I think of technical debt as any existing liability in the code base (or in tooling around the code base, such as build tools, CI setup, etc.) that hampers development. It could be code duplication that increases the volume of code to maintain and the likelihood of bugs. It could be a nasty, tangled method that people are hesitant to touch and that thus results in oddball workarounds. It could be a massive God class that creates endless merge conflicts. Whatever it is, you can recognize it by virtue of the fact that its continued existence hurts the development effort.
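To make that concrete, here is a minimal, hypothetical sketch (invented for illustration, not taken from any real code base) of one such liability: a piece of globally shared, mutable state that every new feature has to tiptoe around.

// Hypothetical example of a code-base liability: globally shared, mutable state.
class GlobalState {
    // Anything, anywhere, can read or overwrite this field at any time,
    // so every change that touches it risks a subtle defect somewhere else.
    public static String currentUserRole = "admin";
}

class ReportService {
    // Behavior silently depends on whatever last wrote to the global.
    String buildReport() {
        return "admin".equals(GlobalState.currentUserRole)
                ? "full report"
                : "restricted report";
    }
}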

I don’t like to get too carried away trying to make technical debt line up perfectly with financial debt. Financial debt is (usually) centered around some form of collateral and/or credit, and there’s really no comparable construct in a code base. You incur financial debt to buy a house right now and offer reciprocity in the form of interest payments and ownership of the house as collateral for default. The tradeoff is incredibly transparent and measured, with fixed parameters and a set end time. You know up front that to get a house for $200,000, you’ll be paying $1000 per month for 30 years and wind up ‘wasting’ $175,000 in interest. (Numbers made up out of thin air and probably not remotely realistic).

Tech debt just doesn’t work this way. There’s nothing firm about it; never do you say “if I introduce this global variable instead of reconsidering my object graph, it will cost me 6 hours of development time per week until May of 2017, when I’ll spend 22 hours rethinking the object graph.” Instead, you get lazy, introduce the global variable, and then, six months later, find yourself thinking, “man, this global state is killing me! Instead of spending all of my time adding new features, I’m spending 20 hours per week hunting down subtle defects.” While fiscal debt is tracked with tidy, future-oriented considerations like payoff quotes and asset values, technical debt is all about making visible the amount of time you’re losing to prior bad decisions or duct-tape fixes (and sometimes to drive home the point that making the wrong decision right now will result in even more slowness sometime in the future).

So in my world, what is tech debt? Quite simply, it’s the amount of wasted time that liabilities in your code base are causing you or the amount of wasted time that you anticipate they will cause you. And, by the way, it’s entirely unavoidable. Every line of code that you write is a potential liability because every line of code that you write is something that will, potentially, need to be read, understood, and modified. The only way to eliminate tech debt is to solve people’s problems without writing code, and, since you’re probably in the wrong line of work if you plan not to write code, the best you can do is make an ongoing, concerted, and tireless effort to minimize it.

(And please don’t take away from this, “Erik thinks writing code is bad.” Each line of code is an incremental liability, sure, but as a software developer, writing no code is a much larger, more immediate liability. It’s about getting the job done as efficiently and with as little liability as you can.)

You Want an Estimate? Give Me Odds.

I was asked recently in a comment what I thought about the “No Estimates Movement.” In the interests of full disclosure, what I thought was, “I kinda remember that hashtag.” Well, almost. Coincidentally, when the comment was written to my blog, I had just recently seen a friend reading this post and had read part of it myself. That was actually the point at which I thought, “I remember that hashtag.”

That sounds kind of flippant, but that’s really all that I remember about it. I think it was something like early 2014 or late 2013 when I saw that term bandied about in my feed, and I went and read a bit about it, but it didn’t really stick with me. I’ve now gone back and read a bit about it, and I think that’s because it’s not really a marketing teaser of a term, but quite literally what it advertises. “Hey, let’s not make estimates.” My thought at the time was probably just, “yeah, that sounds good, and I already try to minimize the amount of BS I spew forth, so this isn’t really a big deal for me.” Reading back some time later, I’m looking for deeper meaning and not really finding it.

Oh, there are certainly some different and interesting opinions floating around, but it really seems to be more bike-sheddy squabbling than anything else. It’s arguments like, “I’m not saying you shouldn’t size cards in your backlog — just that you shouldn’t estimate your sprint velocity” or “You don’t need to estimate if all of your story cards are broken into small enough chunks to be ones.” Those seem sufficiently tactical that their wisdom probably doesn’t extend too far beyond a single team before flirting with unknowable speculation that’d be better verified with experiments than taken as wisdom.

The broader question, “should I provide speculative estimates about software completion timelines,” seems like one that should be answered most honestly with “not unless you’re giving me pretty good odds.” That probably seems like an odd answer, so let me elaborate. I’m a pretty knowledgeable football fan and each year I watch preseason games and form opinions about what will unfold in the regular season. I play fantasy football, and tend to do pretty well at that, actually placing in the money more often than not. That, sort of by definition, makes me better than average (at least for the leagues that I’m in). And yet, I make really terrible predictions about what’s going to happen during the season.

At the beginning of this season, for instance, I predicted that the Bears were going to win their division (may have been something of a homer pick, but there it is). The Bears. The 5-11 Bears, who were outscored by the Packers something like 84-3 in the first half of a game and who have proceeded to fire everyone in their organization. I’m a knowledgeable football fan, and I predicted that the Bears would be playing in January. I predicted this, but I didn’t bet on it. And, I wouldn’t have bet even money on it. If you’d have said to me, “predict this year’s NFC North Division winner,” I would have asked what odds you were giving on the Bears, and might have thrown down a $25 bet if you were giving 4:1 odds. I would have said, when asked to place that bet, “not unless you’re giving me pretty good odds.”

Like football, software is a field in which I also consider myself pretty knowledgeable. And, like football, if you ask me to bet on some specific outcome six months from now, you’d better offer me pretty good odds to get me to play a sucker’s game like that. It’d be fun to say that to some PMP asking you to estimate how long it would take you to make “our next gen mobile app.” “So, ballpark, what are we talking? Three months? Five?” Just look at him deadpan and say, “I’ll bite on 5 months if you give me 7:2 odds.” When he asks you what on earth you mean, just patiently explain that your estimate is 5 months, but if you actually manage to hit that number, he has to pay you 3.5 times the price you originally agreed on (or 3.5 times your salary if you’re at a product/service company, or maybe bump your equity by 3.5 times if it’s a startup).

See, here’s the thing. That’s how Vegas handles SWAGs, and Vegas does a pretty good job of profiting from the predictions racket. They don’t say, “hey, why don’t you tell us who’s going to win the Super Bowl, Erik, and we’ll just adjust our entire plan accordingly.”

So, “no estimates?” Yeah, ideally. But the thing is, people are going to ask for them anyway, and it’s not always practical to engage in a stoic refusal. You could politely refuse and describe the Cone of Uncertainty. Or you could point out that measuring sprint velocity with Scrum and extrapolating sized stories in the backlog is more of an empirically based approach. But those things and others like them tend not to hit home when you’re confronted with wheedling stakeholders looking to justify budgets or plans for trade shows. So, maybe when they ask you for this kind of estimate, tell them that you’ll give them their estimate when they tell you who is going to win next year’s Super Bowl so that you can bet your life savings on their guaranteed estimate. When they blink at you dubiously, smile, and say, “exactly.”

The “Synthesize the Experts” Anti-Pattern

I have a relatively uncomplicated thought for this Wednesday and, as such, it should be a reasonably sized post. I’d like to address what I consider to be perhaps the most relatable and understandable anti-pattern that I’ve encountered in this field, and I call it the “Synthesize the Experts” anti-pattern. I’m able to empathize so intensely with its commission because (1) I’ve done it and (2) the mentality that drives it is the polar opposite of the mentality that results in the Expert Beginner. It arises from seeking to better yourself in a vacuum of technical mentorship.

Let’s say that you hire on somewhere to write software for a small business and you’re pretty much a one-person show. From a skill development perspective, the worst thing you could possibly do is make it up as you go and assume that what you’re doing makes sense. In other words, the fact that you’re the least bad at software in your company doesn’t mean that you’re good at it or that you’re really delivering value to the business; it only means that it so happens that no one around can do a better job. Regardless of your skill level, developing software in a vacuum is going to hurt you because you’re not getting input and feedback and you’re learning only at the pace of evolutionary trial and error. If you assume that reaching some sort of equilibrium in this sense is mastery, you’re only compounding the problem.

But what if you’re aware of this limitation and you throw open the floodgates for external input? In other words, you’re alone and you know that better ways to do it exist, so you seek them out to the best of your ability. You read blog posts, follow prominent developers, watch videos from conferences, take online learning courses, etc. Absent a localized mentor or even partner, you seek unidirectional wisdom from outside sources. Certainly this is better than concluding that you’ve nailed it on your own, but there’s another problem brewing here.

Specifically, the risk you’re running is that you may be trying to synthesize from too many disparate and perhaps contradictory sources. Most of us go through primary and secondary education with a constant stream of textbooks that are more or less cut-and-dried matters of fact and canon. The more Algebra and Geometry textbooks you read and complete exercises for, the better you’re going to get at Algebra and Geometry. But when you apply this same approach to lessons gleaned from the movers and shakers in the technosphere, things start to get weird, quickly.

To put a specific hypothetical to it, imagine you’ve just read an impassioned treatise from one person on why it’s a complete violation of OOP to have “property bag” classes (classes that have only accessors and mutators for fields and no substantive methods). That seems to make sense, but you’re having trouble reconciling it with a staunch proponent of layered architecture who says that you should only communicate between layers using interfaces and/or “DTO” classes, which are property bags. And, hey, don’t web services do most of their stuff with property bags… or something? Ugh…
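For reference, a “property bag” in this sense is just a class full of state and nothing else, something like this hypothetical DTO (the class and field names are invented for illustration):

// A hypothetical "property bag" / DTO: fields with accessors and mutators, no behavior.
class CustomerDto {
    private String name;
    private String email;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}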

But why ugh? If you’re trying to synthesize these opinions, you’re probably saying, “ugh” because you’ve figured out that your feeble brain can’t reconcile these concepts. They seem contradictory to you, but the experts must have them figured out and you’re just failing to incorporate them all elegantly. So what do you do? Well, perhaps you define some elaborate way for all of your checked-in classes to have both data and behavior and then, to communicate at runtime, they use a reflection scheme to build and emit, on the fly, property bag DTO classes for communication between layers and to satisfy any web service consumers. Maybe they do this by writing a text file to disk, invoking a compiler on it and calling into the resulting compiled product at runtime. Proud of this solution, you take to a Q&A forum to show how you’ve reconciled all of this in the most elegant way imaginable, and you’re met with virtual heckling and shaming from resident experts. Ugh… You failed again to make everyone happy, and in the process you’ve written code that seems to violate all common sense.

The problem here is the underlying assumption that you can treat any industry software developer’s opinion on architecture, even those of widely respected developers, as canonical. Statements like, “you shouldn’t have property bags” or “application layers should only communicate with property bags” aren’t the Pythagorean Theorem; they’re more comparable to accomplished artists saying, “right triangles suck and you shouldn’t use them when painting still lifes.” Imagine trying to learn to be a great artist by rounding up some of the most opinionated, passionate, and disruptive artists in the industry and trying to mash all of their “how to make great, provocative art” advice into one giant chimera of an approach. It’d be a disaster.

I’m not attempting a fatalistic/relativistic “no approach is better than any other” argument. Rather I’m saying that there is healthy disagreement among those considered experts in the industry about the finer points of how best to approach creating software. Trying to synthesize their opinionated guidance on how to approach software into one, single approach will tie you in knots because they are not operating from a place of universal agreement. So when you read what they’re saying and ask their advice, don’t read for Gospel truth. Read instead to cherry pick ideas that make sense to you, resonate with you, and advance your understanding.

So don’t try to synthesize the experts and extract a meaning that is the sum of all of the parts. You’re not a vacuum cleaner trying to suck every last particle off of the carpet; you’re a shopper at a bazaar, collecting ideas that will complement one another and look good, put together and arranged, when you get home. But whatever you do, don’t stop shopping — everything goes to Expert-Beginner land when you say, “nah, I’m good with what’s in my house.”

How to Get Your Company to Stop Killing Cats

We humans are creatures of routine, and there’s some kind of emergent property of groups of humans that makes us, collectively, creatures of rut. This is the reason that corporations tend to have a life trajectory sort of similar to stars, albeit on a much shorter timeline. At various paces, they form up and then reach sort of a moderate, burning equilibrium like our Sol. Eventually, however, they bloat into massive giants, which is generally the beginning of their death cycle, and, eventually, they collapse in on themselves, either going supernova or drifting off into oblivion as burned-out husks. If you don’t believe me, check out the biggest employers of the 1950s, which included such household names as “US Steel” and “Standard Oil.” And, it’s probably a pretty safe bet that in 2050, people will say things like, “oh yeah, Microsoft, I heard of them once when playing Space-Trivial-Pursuit” or “wow, Apple was a major company? That Apple? The one that sells cheap, used hologram machines?”  (For what it’s worth, I believe GE and some other stalwarts were on that list as well, so the dying off is common, though not universal.)

Yes, in the face of “adapt or die” large companies tend to opt for “die.” Though, “opt” may be a strong word in the sense of agency. It’s more “drift like a car idling toward a cliff a mile away, but inexplicably, no one is turning.” Now, before you get any ideas that I’m about to solve the problem of “how to stop bureaucratic mobs from making ill-advised decisions,” I’m not. I’m really just setting the stage for an observation at a slightly smaller scale.

I’ve worked for and with a number of companies, which has meant that I’ve not tended to be stationary enough to develop the kind of weird, insular thinking that creates the Dead Sea Effect (wow, that’s my third link-back to that article) or, more extremely, Lord of the Flies. I’m not the kids wearing war paint and chasing each other with spears, but rather the Deus Ex Machina marine at the end that walks into the weird company and says, in disbelief, “what are you guys doing?” It’s not exactly that I have some kind of gift for “thinking outside the box” but rather that I’m not burdened with hive mind assumptions. If anything, I’m kind of like a kid that blurts out socially unacceptable things because he doesn’t know any better: “geez, Uncle Jerry, you talk funny and smell like cough medicine.”

You can’t just not kill cats.

What this series of context switches has led me to observe is that organizations make actual attempts to adapt, but at an extremely slow pace and desperately clinging to bad habits. This tends to create a repelling effect for a permanent resident, but a sad, knowing smile for a consultant or job hopper.  If you find yourself in this position, here’s a (slightly satirical) narrative that’s a pretty universal description of what happens when you start at (and eventually exit) a place that’s in need of some help.

You get a call from a shop or a person that says, “we need help writing software, can you come in and talk to us for an afternoon?” “Sure,” you say and you schedule some time to head over. When you get there, you notice (1) they have no computers and (2) it smells terrible. “Okay,” you say, hesitantly, “show me what you’re doing!” At this point, they lead you to a room with a state of the art, industrial grade furnace and they say, “well, this is where we round up stray cats and toss them in the furnace, but for some reason, it’s not resulting in software.” Hiding your horror and lying a little, you say, “yeah, I’ve seen this before — your big mistake here is that instead of killing cats, you want to write software. Just get a computer and a compiler and, you know, write the software.”

The next week you come in and the terrible smell is gone, replaced by a lot of yowling and thumping. You walk into the team room and discover a bunch of “software engineers” breathing heavily and running around trying to bludgeon cats with computers. “What are you doing,” you ask. “You’re not writing code or using the compiler — you’re still killing cats.” “Well,” the guy who called you in replies shamefacedly, “we definitely think you’re right about needing computers, and I know this isn’t exactly what you recommended, but you can’t just, like, not kill cats.” “Yes, you can just not kill cats,” you reply. “It’s easy. Just stop it. Here, right now. Hand me that computer, let’s plug it in, and start writing code.” Thinking you’ve made progress, you head out.

The next week, you return and there’s no thundering or yowling, and everyone is quietly sitting and coding. Your work here is done. You start putting together a retrospective and maintenance plans when, all of a sudden, you hear the “whoosh” of the old furnace and get a whiff of something horrible. Exasperated, you march in and demand to know what’s going on. “Oh, that — every time we finish a feature, we throw a bunch of cats in the furnace so that we can start on the next feature.” Now, at the end of your rope, you restrain the urge to say, “I’m pretty sure you’re just Hannibal Lecter,” opting instead for, “I think we’ve made some good progress here so I’m going to move on now.”  “Oh, we hate to see you go when you’re doing so much good,” comes the slightly wounded reply, “but we understand.  Just head over to maintenance and give them your parking garage pass along with this cat — they’ll know what to do with it.  And yes, mister smarty pants, they do really need the cat.”

As you leave, someone grabs you in the hallway for a hushed exchange.  “Haha, leaving us already, I see,” they say in a really loud voice.  Then, in a strained whisper, “take me with you — these people are crazy.  They’re killing cats for God’s sake!”  What you’ve come to understand during your stay with the company is that, while at first glance it seems like everyone is on board with the madness, in reality, very few people think it makes sense.  It just has an inexplicable, critical mass of some kind with an event horizon from which the company simply cannot escape.

What’s the score here?  What’s next?

So once you leave the company, what happens next?  Assuming this exit is that of a departing employee, the most likely outcome is that you move on and tell this story every now and then over beers, shuddering slightly as you do.  But, it might be that you leave and take the would-be escapees with you.  Maybe you start to consult for your now-former company or maybe you enter the market as a competitor of some kind.  No matter what you do, someone probably starts to compete with them and probably starts to win.  If the company does change, it’s unlikely to happen as a result of internal introspection and initiative and much more likely to happen in desperate response to crisis or else steered by some kind of external force like a consulting firm or a business partner.  Whatever the outcome, “we just worked really hard and righted the ship on our own” is probably not the vehicle.

If that company survives its own bizarre and twisted cat-mangling process, it doesn’t survive it by taking incremental baby steps from “cat murdering” to “writing software,” placating all involved at every step of the way that “nothing really needs to change that much.”  The success prerequisite, in fact, is that you need to change that much and more, and in a hail of upheaval and organizational violence.  (People like to use the term “disruption” in this context, but I don’t think that goes far enough at capturing the wailing, gnashing of teeth, and stomping of feet that are required.) In order to arrive at such a point of entrenched organizational failure, people have grown really comfortable doing really unproductive things for a really long time, which means that a lot of people need to get really uncomfortable really fast if things are going to improve.

I think the organizations that survive the “form->burn->bloat->die” star cycle are the ones that basically gather a consortium of small, hot-burning stars and make it look to the outside world like a single, smoothly running star.  This is a bit of a strained metaphor, but what I mean is that organizations need to come up with ways to survive and even start to drive the lurching, staggering upheaval that characterizes innovation.  They may seem calm to the outside world, but internally, they need malcontents and revolutionaries leaving the cat-killing rooms and taking smart people with them.  It’s just that instead of leaving, they re-form and tackle the problem differently and better right up until the next group of revolutionaries comes along.  Oh, and somehow they need all of this upheaval to drive real added value and not just cause chaos.  Yeah, it’s not an easy problem.  If it were, corporations would have much better long-term survival rates.

I don’t know the solution, but I do know that many established organizations are like drug addicts, hooked on a cocktail of continuity and mediocrity.  They know it isn’t good for them and that it’s even slowly killing them, and they seek interventions and hide their shame from those that try to help them, but it’s just so comfortable and so easy, and just a few more months like this, and look, man, this is just the way it is, so leave me alone about it, okay!?  If you’re working for or with a company struggling to change, understand this dynamic and game plan for drastic measures.  You’re going to need to come up with a way to change the game, radically, and somehow not piss all involved parties off in the process.  If you want to stay, that is.  If not, there’s probably good money to be made in disrupting or competing with outfits that just want to keep burning those cats.

Dependency Injection or Inversion?

The hardest thing about being a software developer, for me, is coming up with names for things. I’ve worked out a system with which I’m sort of comfortable where, when coding, I pay attention to every namespace, type, method and variable name that I create, but in a time-box (subject to later revisiting, of course). So I think about naming things a lot and I’m usually in a state of thinking, “that’s a decent name, but I feel like it could be clearer.”

And so we arrive at the titular question. Why is it sometimes called “dependency injection” and at other times, “dependency inversion”? This is a question I’ve heard asked a lot and answered sometimes too, often with responses that make me wince. The answer to the question is that I’m playing a trick on you and repeating a question that’s flawed.

Dependency Injection and Dependency Inversion are two distinct concepts. The reason that I led into the post with the story about naming is that these two names seem fine in a vacuum but, used together, they seem to create a ‘collision,’ if you will. If I were wiping the slate clean, I’d probably give “dependency inversion” a slightly different name, though I hesitate to say it since a far more accomplished mind than my own gave it the name in the first place.

My aim here isn’t to publish the Nth post exhaustively explaining the difference between these two concepts, but rather to supply you with (hopefully) a memorable mnemonic. So, here goes. Dependency Injection == “Gimme it” and Dependency Inversion == “Someone take care of this for me, somehow.” I’ll explain a bit further.

Dependency Injection is a generally localized pattern of writing code (though it may be used extensively in a code base). In any given method or class (or module, if you want), rather than going out and finding or making the things you need, you simply order your collaborators to “gimme it.”

So instead of this:
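(A minimal sketch of the idea; AtomicClock here is a hypothetical stand-in for the concrete dependency the method hunts down on its own.)

import java.time.LocalDateTime;

// Hypothetical concrete dependency, standing in for "atomic clocks and atoms."
class AtomicClock {
    LocalDateTime getTime() {
        return LocalDateTime.now();
    }
}

class TimeTeller {
    // The method goes out and builds the thing it needs all by itself.
    LocalDateTime getCurrentTime() {
        AtomicClock clock = new AtomicClock();
        return clock.getTime();
    }
}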

You say, “nah, gimme it,” and do this instead:
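(Again a minimal sketch; ThingThatTellsTime is the abstraction named in the post, though its exact shape here is a guess. The collaborator is handed in instead of constructed.)

import java.time.LocalDateTime;

// The abstraction the post names; the shape of it here is an assumption.
interface ThingThatTellsTime {
    LocalDateTime getTime();
}

class TimeTeller {
    private final ThingThatTellsTime clock;

    // "Gimme it": whoever wants the time has to hand over a ThingThatTellsTime.
    TimeTeller(ThingThatTellsTime clock) {
        this.clock = clock;
    }

    LocalDateTime getCurrentTime() {
        return clock.getTime();
    }
}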

You’re not the one responsible for figuring out that time comes from atomic clocks which, in turn, come from atoms somehow. Not your problem. You say to your collaborators, “you want the time, Buddy? I’m gonna need a ThingThatTellsTime, and then it’s all yours.” (Usually you wouldn’t write this rather pointless method, but I wanted to keep the example as simple as humanly possible.)

Dependency Inversion is a different kind of tradeoff. To visualize it, don’t think of code just yet. Think of a boss yelling at a developer. Before the ‘inversion’ this would have been straightforward. “Developer! You, Bill! Write me a program that tells time!” and Bill scurries off to do it.

But that’s so pre-Agile. Let’s do some dependency inversion and look at how it changes. Now, boss says, “Help, someone, I need a program that tells time! I’m going to put a story in the product backlog” and, at some point later, the team says, “oh, there’s something in the backlog. Don’t know how it got there, exactly, but it’s top priority, so we’ll figure out the details and get it done.” The boss and the team don’t really need to know about each other directly, per se. They both depend on the abstraction of the software development process; boss has no idea which person writes the code or how, and the team doesn’t necessarily know or care who plopped the story in the backlog. And, furthermore, the backlog abstraction doesn’t depend on knowing who the boss is or the developers are or exactly what they’re doing, but those details do depend on the backlog.
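To put a rough, admittedly invented code shape to that analogy: both the boss and the team depend on a backlog abstraction, and neither depends on the other directly.

import java.util.ArrayDeque;
import java.util.Queue;

// The abstraction both sides depend on; the details depend on it, not on each other.
interface Backlog {
    void add(String story);
    String pullNext();
}

class SimpleBacklog implements Backlog {
    private final Queue<String> stories = new ArrayDeque<>();
    public void add(String story) { stories.add(story); }
    public String pullNext() { return stories.poll(); }
}

class Boss {
    private final Backlog backlog;
    Boss(Backlog backlog) { this.backlog = backlog; }
    // The boss drops a story in with no idea who will pick it up or how.
    void demand(String feature) { backlog.add(feature); }
}

class Team {
    private final Backlog backlog;
    Team(Backlog backlog) { this.backlog = backlog; }
    // The team pulls the next story with no idea who put it there.
    String workOnNext() { return backlog.pullNext(); }
}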

Okay, so first of all, why did I do one example in code and the other in anecdote, when I could have also done a code example? I did it this way to drive home the subtle scope difference in the concepts. Dependency injection is a discrete, code-level tactic. Dependency inversion is more of an architectural strategy and way of structuring (decoupling) code bases.

And finally, what’s my (mild) beef with the naming? Well, dependency inversion seems a little misleading. Returning to the boss ordering Bill around, one would think a strict inversion of the relationship would be the stuff of inane sitcom fodder where, “aha! The boss has become the bossed! Bill is now in charge!” Boss and Bill’s relationship is inverted, right? Well, no, not so much — boss and Bill just have an interface slapped in between them and don’t deal with one another directly anymore. That’s more of an abstraction or some kind of go-between than an inversion.

There was certainly a reason for that name, though, in terms of historical context. What was being inverted wasn’t the relationship between the dependencies themselves, but the thinking (of the time) about object oriented programming. At the time, OOP was very much biased toward having objects construct their dependencies and those dependencies construct their dependencies, and so forth. These days, however, the name lives on even as that style of OOP is more or less dead outside of some aging and brutal legacy code bases.

Unfortunately, I don’t have a better name to propose for either one of these things — only my colloquial mnemonics that are pretty silly. So, if you’re ever at a user group or conference or something and you hear someone talking about the “gimme it” pattern or the “someone take care of this for me, somehow” approach to architecture, come over and introduce yourself to me, because there will be little doubt as to who is talking.

Have a Cigar

There are a few things that I think have done subtle but massive damage to the software development industry. The “software is like a building” metaphor comes to mind. Another is modeling the software workforce after the Industrial Age factory model, where the end goal seems to be turning knowledge work into 15 minute parcels that can be cranked out and billed in measured, brainless, assembly line fashion. (In fact, I find the whole concept of software “engineering” to be deeply weird, though I must cop to having picked out the title “software engineer” for people in a group I was managing because I knew it would be among the most marketable for them in their future endeavors.) Those two subtleties have done massive damage to software quality and to software development process quality, respectively, but today I’d like to talk about one that has done damage to our careers and our autonomy and that frankly I’m sick of.

The easiest way to give the phenomenon a title would be to call it “nerd stereotyping,” and the easiest way to get you to understand quickly what I mean is to ask you to consider the idea that, historically, it’s always been deemed necessary to have “tech people,” “business people,” and “analysts” and “project managers” who are designated as ‘translators’ that can interpret “tech speak” for “normal people.” It’s not a wholly different metaphor from the 1800s having horses and people, with carriage drivers who could talk to the people but also manipulate the dumb, one-dimensional beasts into using their one impressive attribute, their strength, to do something useful. Sometimes this manipulation meant the carrot and other times the stick. See what I did there? It’s metaphor synergy FTW!

If you’re wondering at this point why there are no cigars involved in the metaphor, don’t worry — I’ll get to that later.

The Big Bang Theory and Other Nerd Caricatures

Last week, I was on a long weekend fishing trip with my dad and my girlfriend, er, fiancee (as of the second edit), and one night before bed, we popped the limited access cable on and were vegetating, watching what the limited selection allowed. My dad settled on the sitcom “The Big Bang Theory.” I’ve never watched this show because there have historically been about seven sitcoms that I’ve ever found watchable, and basically none of those have aired since I became an adult. It’s just not really my sense of humor, to be honest. But I’ve always suspected this one sitcom in particular of a specific transgression — the one about which I’m talking here. I’d never before seen the show, though, so I didn’t know for sure. Well, I know now.

In the two episodes I saw, the humor could best be summarized as, “it’s funny because that guy is so smart in one way but so dumb in another! It’s ironic and hi-larious!” The Sheldon character, who seems to be your prototypical low EQ/high IQ dweeb, decided in one episode to make every decision in life based on a dice roll, like some kind of programmer version of Harvey Dent. In another episode, he was completely unable to grasp the mechanics of haggling over price, repeatedly blurting out that he really wanted the thing even as his slightly less nerdy friend tried to play it cool. I don’t know what Sheldon does for a living, but I’ll wager he’s a programmer or mathematician or actuary or something. My money isn’t on “business analyst” or “customer support specialist” or “account manager.” But, hey, I bet that’d make for a great spin-off! Nerdy guy forced to do non-nerdy job — it’s funny because you wouldn’t expect it!

My intention here isn’t to dump on this sitcom, per se, and my apologies if it’s a favorite of yours and the characters are endearing to you. I’m really picky and hard to please when it comes to on-screen comedy (for example, I’d summarize “Everybody Loves Raymond” as “it’s funny because he’s a narcissistic, incompetent mama’s boy and she’s an insufferable harpy — hi-larious!”). So, if you’d prefer another example of this that I’ve seen in the past, consider the character on the show “Bones.” I tried watching that show for a season or two, but the main character was just absurd, notwithstanding the fact that the whole show was clearly set up to string you along, waiting for her to hook up with that FBI dude. But her whole vibe was, “I am highly intelligent and logical, but even searching my vast repository of situational knowledge and anthropological nuance, I cannot seem to deduce why there is moisture in and around your tear ducts after hearing that the woman who gave birth to you expired. Everyone expires, so it’s hardly remarkable.” She has an IQ of about 245 (and is also apparently a beautiful ninja) but hasn’t yet grokked the syllogism of “people cry when they’re sad and people are sad when their parents die.” This character and Sheldon and so many others are preposterous, one-dimensional caricatures of human beings, and when people in mathy-sciency fields ham it up along with them, I’m kind of reminded of this blurb from the Onion from a long time ago.

But it goes beyond just playing to the audience. As a collective, we engineers, programmers, scientists, etc., embrace and exaggerate this persona for internal cred. Because my field is programming, I’ll speak to the programmer archetype: the lone hero and iconoclast, a socially inept hacker. If Hollywood and reductionist popular culture are to be believed, it is the mediocre members of our field who are capable of social lives, normal interactions and acting like decent human beings. But the really good programmers are a mashup of Sheldon and Gregory House — lone, misanthropic, socially maladjusted weirdos whose borderline personalities and hissy fits simply have to be endured in order to bask in their prodigious, savant-like intellects and to extract social value out of them. Sheldon may be ridiculous, but he’s also probably the only one that can stop hackers or something, just as House’s felonious, unethical behavior and flaunted drug addiction are tolerated at his hospital because he’s good at his job.

Attribute Point Shaving

As humans, we like to believe in what some probably refer to as justice. I’m not really one to delve into religion on this blog, but the concept of “hell” is probably the single biggest illustrator of what I mean. It gives us the ability to answer our children’s question: “Mommy, why didn’t that evil man go to jail like in the movies?” We can simply say, “Oh, don’t worry, they’ll go to an awful place with fire and snakes and stuff after they die.” See, problem solved. Cosmic scales rebalanced. Hell is like a metaphysical answer to the real universe’s “dark energy” — it makes the balance sheet go to zero.

But we believe this sort of thing on a microscale as well, particularly when it comes to intelligence. “She’s not book smart, but she’s street smart.” “He may be book smart, but he has low EQ.” “She’s good at math, so don’t expect her to read any classic literature.” At some basic level, we tend to believe that those with one form of favorable trait have to pay the piper by sucking at something else, and those who lack a favorable trait must be good at something else. After all, if they had nothing but good traits, the only way to sort that out would be to send them directly to hell. And this RPG-like (or Madden Football-like, if you prefer), zero-sum system of points allocation for individual skills is how we perceive the world. Average people have a 5 out of 10 in all attributes. But since “math geniuses” have a 10 out of 10 in “good at math,” they must have a 0 out of 10 in “going out on dates.” The scales must balance.

This sword weirdly cuts the other way too. Maybe I’m only a 6 out of 10 at math and I really wish I were a 9 out of 10. I could try to get better, but that’s hard. What’s a lot easier to do is act like a 2 out of 10 in “going out on dates” instead of a 5 out of 10. People will then assume those 3 points I’m giving up must go toward math or some other dorky pursuit. If I want to hit a perfect 10 out of 10, I can watch Star Trek and begin most of my sentences with “so.” That’s gotta hurt me in some social category or another, and now I’m a math genius. Think this concept of personality point-shaving is BS? Ask yourself if you can remember anyone in junior high trying to get out of honors classes and into the mainstream so as not to seem geeky. Why do that? Shaving smart points for “street smart” points.

If you’re Hollywood, this is the best thing ever for portraying smart people. It’s hard to convey “extremely high IQ” in the medium of television to the masses. I mean, you can have other characters routinely talk about their intellect, but that’s a little trite. So what do you do? You can have them spout lots of trivia or show them beating grandmasters at chess or something… or you can shave points from everything else they do. You can make them woefully, comically inept at everything else, but most especially any form of social interaction. So you make them insufferable, low-EQ, dysfunctional d-bags in order to really drive home that they have high IQs.

In the lines of work that I mentioned earlier, there’s natural pressure to point shave as a measure of status. I think that this hits a feedback loop and accelerates into weird monocultures and that having low scores in things like “not getting food on yourself while you eat” and “not looking at your feet while you talk” actually starts to up your cred in this weird, insular world. Some of us maybe grew up liking Star Trek while others who didn’t pretend to, since that shaves some points off of your social abilities. In turn, in the zero-sum game of personal attributes, it makes you a better STEM practitioner.

What’s the Harm?

So we might exaggerate our social awkwardness or affect some kind of speech impediment or write weird, cryptic code to augment the perception of our skills… so what? No big deal, right? And, yeah, maybe we go to work and delight in telling a bunch of suits that we don’t understand all of their BS talk about profits and other nonsense and to just leave us alone to write code. Awesome, right? In one fell swoop, we point shave for social grace in favor of intelligence and we also stick it to the man. Pretty sweet, right?

I guess, in the moment, maybe. But macroscopically, this is a disaster. And it’s a disaster that’s spawned an entire industry of people that collect larger salaries than a lot of middle managers and even some executives but have almost no real voice in any non-software-based organization. It’s a disaster that’s left us in charge of the software that operates stock exchanges, nuclear plants and spaceships, but apparently not qualified enough to talk directly to users or manage our own schedules and budgets without detailed status reports. Instead of emerging as self-sufficient, highly paid, autonomous knowledge workers like doctors and lawyers, we’re lorded over by whip-cracking, Gantt-chart-waving middle managers as if we were assembling widgets on the factory floor and doing it too slowly. And we’ve done it almost entirely voluntarily.

So what am I advocating, exactly? Simply that you refuse to buy into the notion that you’re just a “code slinger” and that all that “business stuff” is someone else’s problem. It’s not. It’s your problem. And it’s really not that hard if you pay attention. I’m not suggesting that you trade in your IDE for Microsoft Project and Visio, but I am suggesting that you spend a bit of time learning enough about the way business is conducted to speak intelligently. Understand how to make a business case for things. Understand the lingo that analysts and project managers use well enough to filter out all of the signaling-oriented buzzwords and grasp that they are communicating some ideas. Understand enough to listen, understand and critique those ideas. In short, understand enough to do away with this layer of ‘translators’ the world thinks that we need, reclaim some autonomy, and go from “slinging code” to solving problems with technology and being rewarded with freedom and appropriate compensation for doing so.

I’ll close with one last thought, hopefully to drive my point home. How many times (this is kind of programmer-specific) have people approached you and said something like, “let’s make an app; you write the code and get it into the app store and I’ll do, like, the business stuff”? And how many times, when you hear this, is it proposed that you run the show? And how many times is it proposed that you’ll do it for pay or for 49% equity or something? They had the idea, they’ll do business things, and you’re the code-monkey, who, you know, just makes the entire product.

Consider this lyric from the Pink Floyd:

Everybody else is just green, have you seen the chart?
It’s a helluva start, it could be made into a monster
If we all pull together as a team.

And did we tell you the name of the game, boy?
We call it Riding the Gravy Train.

It’s from a song called “Have a Cigar,” and it spoke to the corporate record industry, which essentially brokered its position to “team up” with musicians to control their careers and passively profiteer (from the cynical songwriter’s perspective, anyway — I’m not interested in debating the role of the record industry in creating ’70s rock stars, since it’s pretty easy to argue that there wouldn’t be a whole lot of money for anyone without the record labels). “If we all pull together as a team” is the height of irony in the song, the same way it is in the pitches you hear where the “idea guy” tells you that he’ll be the CEO, since it was his idea, and Bill will be in charge of marketing and Sue will be the CFO, and you can handle the small detail of writing the entire application that you’re going into business to make.

Is this heads-down, workhorse role worth having the most geek cred? I don’t think so, personally. And if you also don’t, I’d encourage you to get a little outside of your comfort zone and start managing your career, your talent and your intellectual property like a business. If we all do that — if we all stop with the point shaving — I think we can change the nature of the tech game.
