DaedTech

Stories about Software

What Is a Best Practice in Software Development?

(Editorial Note: Hello, Code Project folks, and thanks for stopping by! I'm really excited about the buzz around this post, but an unintended consequence of the sudden popularity is that so many people have signed up that I've run out of Pluralsight trial cards. Please still feel free to sign up for the mailing list, but understand that it may be a week or so before I can get you a signup code for the 30-day trial. I will definitely send it out to you, though, when I get more. If you'd like to sign up for a 10-day trial, here is a link to the signup that's also under my courses on the right.)

A while ago, I released a course on Pluralsight entitled "Making the Business Case for Best Practices."  (If you want to check it out but don't have a Pluralsight account, sign up for my mailing list in the sidebar to the right and I'll send you a free 30-day subscription.)  There was an element of tongue-in-cheek to the title, which might not necessarily have been the best idea in a medium where my profitability is tied to maximizing the attractiveness of the title.  But, life is more fun if you're smiling.

Anyway, the reason it was a bit tongue in cheek is that I find the term “best practice” to be spurious in many contexts.  At best, it’s vague, subjective, and highly context dependent.  The aim of the course was, essentially, to say, “hey, if you think that your team should be adopting practice X, you’d better figure out how to make a dollars and cents case for it to management — ‘best’ practices are the ones that are profitable.”  So, I thought I’d offer up a transcript from the introductory module of the course, in which I go into more detail about this term.  The first module, in fact, is called “What is a ‘Best Practice’ Anyway?”

Best Practice: The Ideal, Real and Cynical

The first definition that I’ll offer for “best practice” is what one might describe as the “official” version, but I’ll refer to it as the “ideal version.”  Wikipedia defines it as, “method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark.”  In other words, a “best practice” is a practice that has been somehow empirically proven to be the best.  As an example, if there were three possible ways to prepare chicken: serve it raw, serve it rare, and serve it fully cooked, fully cooked would emerge as a best practice as measured by the number of incidents of death and illness.  The reason that I call this definition “ideal” is that it implies that there is clearly a single best way to do something, and real life is rarely that neat.  Take the chicken example.  Cooked is better than undercooked, but there is no shortage of ways to fully cook a chicken – you can grill it, broil it, bake it, fry it, etc.  Is one of these somehow empirically “best” or does it become a matter of preference and opinion?

What Story Does Your Code Tell?

I've found that as the timeline of my life becomes longer, my capacity for surprise at my situation diminishes. And so my recent combination of types of work and engagements, rather than being strange in any way to me, is simply ammo for genuineness when I offer up the cliche, "variety is the spice of life." Of late, I've been reviewing a lot of code in a coaching capacity as well as creating and giving workshops on storytelling and creative writing. And given how much practice I've had over the last several years at multi-purposing my work, I'm quite vigilant for opportunities to merge storytelling and software advice. This post is one such opportunity, if a small one.

A little under a year ago, I offered up a post in which I suggested some visualization mnemonics to help make important software design principles more memorable. It was a relatively popular post, so I assume that people found it helpful. And the reason, I believe, that people found it helpful is that stories engage your brain far more than simple conveyance of information. When you read a white-paper explaining the Law of Demeter, the part of your brain that processes natural language activates and decodes the words. But when I tell you a story about a customer in a convenience store removing his pants to pay for a soda, your brain processes this text as if it were experiencing the event. Stories really engage the brain.

One of the most difficult aspects of writing code is to find ways to build abstraction and make your code readable so that others (or you, months later) can read the code as easily as prose. The idea is that code is read far more often than written or modified, so readability is important. But it isn't just that the code should be readable — it should be understandable and, in some way, even memorable. Usually, understandability is achieved through simplicity and crisp, clear abstractions. Memorability, if achieved at all, is usually created via the Principle of Least Surprise. It's a cheat — your code is memorable not because it captivates the reader, but because the reader knows that mapping what she's used to will probably work. (Of course, I recognize that atrocious code will be memorable in the vivid, conversational sense, but I'm talking about it being memorable in terms of its function and exact behavior).

It’s therefore worth asking what story your code is telling. Look at this code. What story is it telling?

Cutting Down on Code Telepathy

Let’s say that you have some public facing method as part of an API:
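Something along these lines, say (OrderProcessor and the method body here are just illustrative; only CustomerOrder matters for the story):

public class OrderProcessor
{
    public void ProcessOrder(CustomerOrder order)
    {
        // validate the order and hand it off for fulfillment
    }
}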

CustomerOrder is something that you don’t control but that you do have to use. Life is good, but then let’s say that a requirement comes in saying that orders can now be post-dated, so you need to modify your API somewhat, to something like this:
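Perhaps a date parameter gets bolted on (again, just a sketch):

public class OrderProcessor
{
    public void ProcessOrder(CustomerOrder order, DateTime dateToProcess)
    {
        // process the order on (or hold it until) the supplied date
    }
}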

Great, but that was really painful because you learn that publishing changes to your public API is a real hassle for both yourself and for your users. After a lot of elbow grease and grumbling at the breaking change, though, things are stable once again. At least until a stakeholder with a lot of clout comes along and demands that it be possible to process orders through that method while noting that the order is actually a gift. You kick and scream, but to no avail. It has to go out and it has to hurt, and you’re powerless to stop it. Grumbling, you write the following code, trying at least to sneak it in as a non-breaking change:
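A defaulted boolean parameter looks like it might slide by quietly (a sketch):

public class OrderProcessor
{
    public void ProcessOrder(CustomerOrder order, DateTime dateToProcess, bool isGift = false)
    {
        // when isGift is true, route the order through gift handling
    }
}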

But then you start reading and realize that life isn’t that simple and that you’re probably going to break your clients anyway. Fed up, you decide that you’re going to prevent yourself ever from being bothered by this again. You’ll write the API that stands the test of time:
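Something in this spirit, with a catch-all options bag (a sketch; the exact collection type is a guess):

using System.Collections.Generic;

public class OrderProcessor
{
    public void ProcessOrder(CustomerOrder order, Dictionary<string, string> options)
    {
        // inspect whatever keys we currently understand and act accordingly
    }
}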

Now, this can never be wrong. CustomerOrder can’t be touched, and the options dictionary can support any extensions that are requested of you from here forward. If changes need to be made, you can make them internally without publishing painful changes to the API. You have, fortunately, separated your concerns enough that you can simply deploy a new DLL that handles order processing, and any new values supplied by your clients can be handled. No more API changes — just a quick update, some testing, and an explanatory Word document sent to your client explaining how to use the thing. Here’s the first one:
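It boils down to something like this on the receiving end (the key name and the helper methods are illustrative):

using System.Collections.Generic;

public class OrderProcessor
{
    public void ProcessOrder(CustomerOrder order, Dictionary<string, string> options)
    {
        bool isGift = false;
        if (options != null && options.ContainsKey("isGift"))
            isGift = options["isGift"] == "true";

        if (isGift)
            ProcessAsGift(order);    // the gift path
        else
            ProcessNormally(order);  // the default path, whether the key says "false" or is absent

    }

    private void ProcessAsGift(CustomerOrder order) { /* ... */ }
    private void ProcessNormally(CustomerOrder order) { /* ... */ }
}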

There. A flexible API and the whole “is gift” thing neatly handled. If they specify that it’s a gift, you handle that. If they specify that it isn’t or just don’t add that option at all, then you treat those equally as the default case. Important stakeholder satisfied, and you won’t be bothered with nasty publications. So, all good, right?

Flexibility, but at what cost?

I’m guessing that, at a visceral level, your reaction to this sequence of events is probably to cringe a little, even if you’re not sure why. Maybe it’s the clunky use of a collection type instead of something slicker. Maybe it’s the (original) passing of a Boolean to the method. Perhaps it’s simply to demand to know why CustomerOrder is inviolate or why we couldn’t work to an order interface or at least define an inheritor. Maybe “options” reminds you of ViewState.

But, whatever it is, doesn’t defining a system boundary that doesn’t need to change seem like a worthwhile goal? Doesn’t it make sense to etch painful boundaries in stone so that all parties can rely on them without painful integration? And if you’re going to go that route, doesn’t it make sense to build in as much flexibility as possible so that all parties can continue to innovate?

Well, that brings me to the thing that makes me wince about this approach. I’m not a fan of shying away from the pain of “icky publish/integration” instead of going with “if it hurts, do it more and get better at it.” That shying away doesn’t make me wince in and of itself, but it does seem like the wrong turn at a fork in the road to what does make me wince, which is the irony of this ‘flexible’ approach. The idea in doing it this way is essentially to say, “okay, publishing sucks, so let’s lock down the integration point so that all parties can work independently, but let’s also make sure that we’re future proof so we can add functionality later.” Or, tl;dr, “minimize multi-party integration coordination with hyper-flexible API.”

So where’s the irony? Well, how about the fact that any new runtime-bound additions to “options” require an insane amount of coordination between the parties? You’re now more coupled than ever! For instance, let’s say that we want to add a “gift wrap” option. How does that go? Well, first I would have to implement the functionality in the code. Then, I’d have to test and deploy my changes to the server, but that’s only the beginning. From there, I need to inform you what magic string to use, and probably to publish a Word document with an example, since it’s easy to get this wrong. Then, once you have that document, I have to go through my logs and troubleshoot to discover that, “oh yeah, see that — you’re passing us ‘shouldGiftwrap’ when it should really be ‘shouldGiftWrap’ with a capital W.” And if I ever change it, by accident or on purpose? You’ll keep compiling and running, and everything will be normal except that, from your perspective, gift wrapping will just quietly stop working. How much pain have we saved in the end with this non-discoverable, counter-intuitive, but ‘flexible’ setup? Wouldn’t it be better not to get cute and just make publishing a more routine, friction-free experience?

The take-away that I'd offer here is to consider something about your code and your software that you may not previously have considered. It's relatively easy to check your code for simple defects and even to write it in such a way as to minimize things like duplication and code churn. We're good at figuring out how to avoid doing the same thing over and over, and how to simplify. Those are all good practices. But the new thing I'd ask you to consider is "how much out of band knowledge does this require between parties?"

It could be a simple scenario like this, with a public facing API. Or, maybe it’s an internal integration point between your team and another team. But maybe it’s even just the interaction surface between two modules, or even classes, within your code base. Do both parties need to understand something that’s not part of the method signatures and general interaction between these entities? Are you passing around magic numbers? Are you relying on the same implicit assumptions in both places? Are there things you’re communicating through a means other than the actual interactions or else just not communicating at all? If so, I suggest you do a mental exercise to ask yourself what would be required to eliminate that out of band communication. Otherwise, today’s clever ideas become tomorrow’s maintenance nightmares.

Agile Methodologies or Agile Software?

Over the last couple of months, I’ve been doing mostly management-y things, so I haven’t had a lot of trade craft driven motivations to pick Pluralsight videos to watch while jogging. In other words, I’m not coming up to speed on any language, framework, or methodology, so I’m engaging in undirected learning and observation. (I’m also shamelessly scouring other authors’ courses for ways to make my own courses better). That led me to watch this course about Agile fundamentals.

As I was watching and jogging, I started thinking about the Agile Manifesto and the 14 years that have passed since its conception. "Agile" is undeniably here to stay and probably approaching "industry standard." It's become so commonplace, in fact, that it is an industry unto itself, containing training courses, conferences, seminars, certifications — the works. And this cottage industry around "getting Agile" has sparked a good bit of consternation and, frequently, derision. We as an industry, critics might say, got so good at planning poker and daily standups that we forgot about the relatively minor detail of making software. Martin Fowler coined the term "flaccid Scrum" to describe this phenomenon, wherein a team follows all of the mechanics of some Agile methodology (presumably Scrum) to the letter and still produces crap.

It’s no mystery how something like this could happen. You’ve probably seen it. The most common culprit is some “Waterfall” shop that decides it wants to “get Agile.” So the solution is to go out and get the coaches, the certifiers, the process experts, and the whole crew to teach everyone how to do all of the ceremonies. A few weeks or months, some hands on training, some seminars, and now the place is Agile. But, what actually happens is that they just do the same basic thing they’d been doing, more or less, but with an artificially reduced cycle time. Instead of shipping software every other year with a painful integration period, they now ship software quarterly, with the same painful integration period. They’re just doing waterfall on a scale of an eighth of the previous size. But with daily standups and retrospectives.

There may be somewhat more nuance to it in places, but it’s a common theme, this focus on the process instead of the “practices.” In fact, it’s so common that I believe the Software Craftsmanship Manifesto and subsequent movement was mainly a rallying cry to say, “hey, remember that stuff in Agile about TDD and pair programming and whatnot…? Instead of figuring out how to dominate Scrum until you’re its master, let’s do that stuff.” So, the Agile movement is born and essentially says, “let’s adopt short feedback cycles and good development practices, and here are some frameworks for that” and what eventually results is the next generation of software process fetishism (following on the heels of the “Rational Unified Process” and “CMM”).

That all played through my head pretty quickly, and what I really started to contemplate was “why?” Why did this happen? It’s not as if the original signatories of the manifesto were focused on process at the exclusion of practices by a long shot. So how did we get to the point where the practices became a second class citizen? And then, the beginnings of a hypothesis occurred to me, and so exists this post.

The Agile Manifesto starts off with "We are uncovering better ways of developing software…" (emphasis mine, on "ways"). The frameworks for this type of development were and are referred to as "Agile Methodologies." Subtly but very clearly, the thing we're talking about here — Agile — is a process. Here were a bunch of guys who got together and said, "we've dumped a lot of the formalism and had good results and here's how," and, perversely, the only key phrase most of the industry heard was "here's how." So when the early adopters' success became too impressive to ignore, the big boys with their big, IBM-ish processes brought in Agile Process People to say, "here's a 600 page slide deck on exactly how to replace your formal, buttoned-up waterfall process with this new, somehow-eerily-similar, buttoned-up Agile process." After all, companies that have historically tended to favor waterfall approaches tend to view software development as a mashup of building construction and assembly line pipelining, so their failure could only possibly be caused by a poorly engineered process. They needed the software equivalent of an industrial engineer (a process coach) to come in and show them where to place the various machines and mindless drones in their employ responsible for the software. Clearly, the problem was doing design documents instead of writing story cards and putting Fibonacci numbers on them.

The Software Craftsmanship movement, I believe, stands as evidence to support what I’m saying here. It removes the emphasis altogether from process and places it, in very opinionated fashion, on the characteristics of the actual software: “not only working software, but also well-crafted software.” (emphasis theirs) I can’t speak exactly to what drove the creation of this document, but I suspect it was at least partially driven by the obsession with process instead of with actually writing software.

All of this leads me to wonder about something very idly. What if the Agile Manifesto, instead of talking about “uncovering better ways,” had spoken to the idea of “let’s create agile software?” In other words, forget about the process of doing this altogether, and let’s simply focus on the properties of the software… namely, that it’s agile. What if it had established a definition that agile software is software that should be able to be deployed within, say, a day? It’s software that anyone on the team can change without fear. It’s software that’s so readable that new team members can understand it almost immediately. And so on.

I think there’s a deep appeal to this concept. After all, one of the most annoying things to me and probably to a lot of you is having someone tell me how to solve a problem instead of what their problem is, when asking for help. And, really, software development methodologies/processes are perhaps the ultimate example of this. Do a requirements phase first, then a design phase, then an implementation phase, etc. Or, these days, write what the users want on story cards, have a grooming session with the product owner, convene the team for planning poker, etc. In both cases, what the person giving the direction is really saying is, “hey, I want you to produce software that caters to my needs,” but instead of saying that and specifying those needs, they’re telling you exactly how to operate. What if they just said, “it should be possible to issue changes to the software with the press of a button, it needs to be easy for new team members to come on board, I need to be able to have new features implemented without major architectural upheaval, etc?” In other words, what if they said, “I need agile software, and you figure out how to do that?”

I can't possibly criticize the message that came out of that meeting of the minds and gave birth to the Agile Manifesto. These were people who bucked entrenched industry trends, gained traction, and pulled it off with incredible success. They changed how we conceive of software development and clearly for the better. And it's entirely possible that any different phrasing would have made the message either too radical or too banal for widespread adoption. But I can't help but wonder… what if the call had been for agile software rather than agile methods?

Flexibility vs Simplicity? Why Not Both?

Don’t hard code file paths in your application. If you have some log file that it’s writing to or some XML file that it’s reading, there’s a well established pattern for how to keep track of the paths of those files: an external configuration scheme. This might be a .config file or a settings.xml file or even a yourapp.ini file if you’re gray enough in the beard. Or, perhaps it’s something more exotic like a database table or web service that stores key value configuration pairs. Maybe it’s something as simple as command line parameters that specify the path. Whatever the case may be, everyone knows that you don’t hard code — you don’t store the file path right in the source code. That’s amateur hour.

You can imagine how this began. Maybe a long time ago someone said, “hey, let’s log critical application events to a file so that we can review and troubleshoot if things go wrong.” They shipped this for some machine running Windows 3.1 or something, and were logging to C:\temp, which was fine unless users didn’t have a C:\temp directory. In that case, it blew up spectacularly and they were flooded with support calls at which point, they could tell their users to create the directory or they could ship a new set of floppy disks with the new source code, amended to log to a directory guaranteed to exist. Or something like that, anyway.

The lesson couldn’t be more obvious. If they had just thought ahead, they would have realized their choice for the path of the log file, which isn’t even critical anyway, was a poor one. It would have been good if they had chosen better, but it would have been almost as good if they’d just made this configurable somehow so that it needn’t be a disaster. They could have made the path configurable or they could have just made it a configurable option to create C:\temp if it didn’t exist. Next time, they’d do better by building flexibility into the application. They’d create a scheme where the application was flexible and thus the cost of not getting configuration settings right up-front was dramatically reduced.

This approach made sense and it became the norm. User settings and preferences would be treated as data, which would make it easy to create a default experience but to allow users to customize it if they were sufficiently sophisticated. And the predecessor to the "Advanced" menu tab was born. But the other thing that was born was subtle complexity, both for the users and for the programmers. Application configurability is second nature to us now (e.g. the .NET .config file), but make no mistake — it is a source of complexity even if you're completely used to it. Think of paying $300 per month for all of your different telco concerns — the fact that you've been doing this for years doesn't mean you're not shelling out a ton of money.

What’s even more insidious is how this mentality has crept into application development in other ways. Notice that I called this practice “preferences as data” rather than “future-proofing,” but “future-proofing” is the lesson that many took away. If you design your application to be flexible enough, you can insulate yourself against bad initial guesses about user preferences or usage scenarios and you can ensure that the right set of tweaks, configuration alterations, and hacks will allow users to achieve what they want without you needing to re-deploy.

So, what's the problem, apart from a huge growth in the number of available settings in a config file? I'd argue that the problem is the subtle one of striving for configurability as a first class goal. Rather than express this in general, definition-oriented terms, consider an example that may be the logical conclusion of taking this thinking as far as it will go. You have some method that you expose publicly, called ProcessOrder, and it takes a parameter. Contrary to what you might think, the parameter isn't an order ID and it isn't an order: it's an object. Why? Because this API is endlessly flexible. The method signature will suffice even if the entire order processing mechanism changes and even if the structure of the order itself changes. Heck, it won't need to be altered if you decide to alter ProcessOrder(object order) to send emails. Just pass in an "Email" object and add a check for typeof(Email) to ProcessOrder. Awesome, right?
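In code, that endlessly flexible API winds up looking something like this (a sketch; Order, Email, and the helper methods are stand-ins):

using System;

public class OrderService
{
    public void ProcessOrder(object order)
    {
        // "flexible": figure out at runtime what we were actually handed
        if (order is Email)
            SendEmail((Email)order);   // the bolted-on email path
        else if (order is Order)
            Process((Order)order);     // the path that actually processes orders
        else
            throw new ArgumentException("No idea what this is.", "order");
    }

    private void SendEmail(Email email) { /* ... */ }
    private void Process(Order order) { /* ... */ }
}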

Yeah, ugh. Flexibility run amok. It’d be easy to interpret my point thus far as “you need to find the balance between inflexibility/simplicity on one end and flexibility/complexity on the other.” But that’s a consultant answer at best, and a non-point at worst. It’s no great revelation that these tradeoffs exist or that it’d be ideal if you could understand which trait was more valuable in a given moment.

The interesting thing here is to consider the original problem — the one we've long since filed away as settled. We shipped a piece of software with a setting that turned out to be a mistake, so what lesson do we take away from that? The lesson we did take away was that we should make mistakes less costly by giving ourselves a configurability out. But what if we made the mistake less costly by making rollouts of the software trivial and inexpensive? Imagine a hypothetical world where rollout didn't mean shipping a bunch of shrink-wrapped boxes with floppy disks in them but rather a single mouse click and high confidence that everything would go well. If this were a reality, hard-coding a log file path wouldn't really be a big deal because if that path turned out to be a problem, you could just alter that source code file, click a button, and correct the mistake. By introducing and adjusting a previously unconsidered variable, you've managed to achieve both simplicity and flexibility without having to trade one for the other.

The danger for software decision makers comes from creating designs with the goal of satisfying principles or interim goals rather than the goal of solving immediate business problems. For instance, the problem of hard-coding tends to arise from (generally inexperienced) software developers optimizing for their own understanding and making code as procedurally simple as possible — “hardcoding is good because I can see right where the file is going when I look at my code.” That’s not a reasonable business goal for the software. But the same problem occurs with developers automatically creating a config file for application settings — they’re following the principle of “flexibility” rather than thinking of what might make the most sense for their customers or their situation. And, of course, this also applies to the designer of the aforementioned “ProcessOrder(object)” API. Here the goal is “flexibility” rather than something tangible like “our users have expressed an interest in changing the structure of the Order object and we think this is a good idea and want to support them.”

Getting caught up in making your code conform to principles will not only result in potentially suboptimal design decisions — it will also stop you from considering previously unconsidered variables or situations. If you abide by the principle "hard-coding is bad" without ever revisiting it, you're not likely to consider "what if we just made it not matter by making deployments trivial?" There is nothing wrong with principles; they make it easy to communicate concepts and lay the groundwork for making good decisions. But use them as tools to help you achieve your goals and not as your actual goals. Your goals should always be expressible as humans interacting with your software — not characteristics of the software.

Dependency Injection or Inversion?

The hardest thing about being a software developer, for me, is coming up with names for things. I’ve worked out a system with which I’m sort of comfortable where, when coding, I pay attention to every namespace, type, method and variable name that I create, but in a time-box (subject to later revisiting, of course). So I think about naming things a lot and I’m usually in a state of thinking, “that’s a decent name, but I feel like it could be clearer.”

And so we arrive at the titular question. Why is it sometimes called "dependency injection" and at other times "dependency inversion"? This is a question I've heard asked a lot and answered sometimes too, often with responses that make me wince. The answer to the question is that I'm playing a trick on you and repeating a question that's flawed.

Dependency Injection and Dependency Inversion are two distinct concepts. The reason that I led into the post with the story about naming is that these two names seem fine in a vacuum but, used together, they seem to create a ‘collision,’ if you will. If I were wiping the slate clean, I’d probably give “dependency inversion” a slightly different name, though I hesitate to say it since a far more accomplished mind than my own gave it the name in the first place.

My aim here isn’t to publish the Nth post exhaustively explaining the difference between these two concepts, but rather to supply you with (hopefully) a memorable mnemonic. So, here goes. Dependency Injection == “Gimme it” and Dependency Inversion == “Someone take care of this for me, somehow.” I’ll explain a bit further.

Dependency Injection is a generally localized pattern of writing code (though it may be used extensively in a code base). In any given method or class (or module, if you want), rather than going out and finding or making the things you need yourself, you simply order your collaborators to "gimme it."

So instead of this:
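Picture something like this, with TimeTeller and AtomicClock standing in for whatever the real types would be:

public class TimeTeller
{
    public string TellTime()
    {
        var clock = new AtomicClock();  // go find or make the thing I need myself
        return clock.GetCurrentTime().ToString();
    }
}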

You say, “nah, gimme it,” and do this instead:
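Which looks more like this (same illustrative names, but now with a ThingThatTellsTime handed in):

public class TimeTeller
{
    public string TellTime(ThingThatTellsTime thingThatTellsTime)
    {
        // "gimme it" -- where the time actually comes from is not my problem
        return thingThatTellsTime.GetCurrentTime().ToString();
    }
}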

It isn't your job to figure out that time comes from atomic clocks, which, in turn, come from atoms somehow. Not your problem. You say to your collaborators, "you want the time, Buddy? I'm gonna need a ThingThatTellsTime, and then it's all yours." (Usually you wouldn't write this rather pointless method, but I wanted to keep the example as simple as humanly possible).

Dependency Inversion is a different kind of tradeoff. To visualize it, don’t think of code just yet. Think of a boss yelling at a developer. Before the ‘inversion’ this would have been straightforward. “Developer! You, Bill! Write me a program that tells time!” and Bill scurries off to do it.

But that’s so pre-Agile. Let’s do some dependency inversion and look at how it changes. Now, boss says, “Help, someone, I need a program that tells time! I’m going to put a story in the product backlog” and, at some point later, the team says, “oh, there’s something in the backlog. Don’t know how it got there, exactly, but it’s top priority, so we’ll figure out the details and get it done.” The boss and the team don’t really need to know about each other directly, per se. They both depend on the abstraction of the software development process; boss has no idea which person writes the code or how, and the team doesn’t necessarily know or care who plopped the story in the backlog. And, furthermore, the backlog abstraction doesn’t depend on knowing who the boss is or the developers are or exactly what they’re doing, but those details do depend on the backlog.

Okay, so first of all, why did I do one example in code and the other in anecdote, when I could have also done a code example? I did it this way to drive home the subtle scope difference in the concepts. Dependency injection is a discrete, code-level tactic. Dependency inversion is more of an architectural strategy and way of structuring (decoupling) code bases.

And finally, what’s my (mild) beef with the naming? Well, dependency inversion seems a little misleading. Returning to the boss ordering Bill around, one would think a strict inversion of the relationship would be the stuff of inane sitcom fodder where, “aha! The boss has become the bossed! Bill is now in charge!” Boss and Bill’s relationship is inverted, right? Well, no, not so much — boss and Bill just have an interface slapped in between them and don’t deal with one another directly anymore. That’s more of an abstraction or some kind of go-between than an inversion.

There was certainly a reason for that name, though, in terms of historical context. What was being inverted wasn’t the relationship between the dependencies themselves, but the thinking (of the time) about object oriented programming. At the time, OOP was very much biased toward having objects construct their dependencies and those dependencies construct their dependencies, and so forth. These days, however, the name lives on even as that style of OOP is more or less dead this side of some aging and brutal legacy code bases.

Unfortunately, I don’t have a better name to propose for either one of these things — only my colloquial mnemonics that are pretty silly. So, if you’re ever at a user group or conference or something and you hear someone talking about the “gimme it” pattern or the “someone take care of this for me, somehow” approach to architecture, come over and introduce yourself to me, because there will be little doubt as to who is talking.

Rapid Fire Craftsmanship Tips

The last month has been something of a transitional time for me. I had been working out of my house for a handful of clients pretty much all summer, but now I’ve signed on for a longer term engagement out of state where I’m doing “craftsmanship coaching.” Basically, this involves the lesser-attended side of an agile transformation. There is no shortage of outfits that say, “hey, sign up with us, get a certification and learn how to have meetings differently,” but there does seem to be a shortage of outfits that say, “we’ll actually teach you how to write code in a way that makes delivering every couple of weeks more than a pipe dream.” I believe this state of affairs leads to what has been described as “flaccid scrum.” So my gig now is going to be working with a bunch of developers on things like writing modular code, dependency inversion, test driven development, etc.

This background is relevant for 2 reasons. First of all, it's my excuse for why my posting cadence has dipped. Sorry 'bout that. Secondly, it explains and segues into this post. What is software craftsmanship, anyway? I'm apparently teaching it, but I'm not really sure I can answer this question other than to say that I share a lot of opinions about what it means to write code effectively with people who identify this way. I think that TDD, factored methods, iterative, high-communication approaches, failing early, and testable code constitute efficient approaches to writing software, and I'm happy to help people who want to improve at these things as best I can.

In that vein of thought, I’d like to offer some suggestions for tangible and easy-to-remember/easy-to-do things that you can do that are likely to improve your code. Personally, more than anything else, I think my programming was improved via random suggestions like this that were small by themselves, but in aggregate added up to a huge improvement. So, here is a series of things to tuck into your toolbelt as a programmer.

Make your variable names conversational
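Take a representative offender (a sketch):

ComboBox cb = new ComboBox();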

Ugh. The only thing worse than naming the variable after its type is then abbreviating that bad name. Assuming you’re not concerned with shaving a few bytes off your hard disk storage, this name signifies to maintainers, “I don’t really know what to call this because I haven’t given it any thought.”
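So suppose we rename it to something like this:

ComboBox days = new ComboBox();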

Better. Now when this thing is referenced elsewhere, I’ll know that it probably contains days of some sort or another. They may be calendar days or days of the week, but at least I know that it’s talking about days, which is more than “cb” told me. But what about this?
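Say, a name that spells the contents right out (the exact name is illustrative):

ComboBox daysOfTheWeekMondayThroughFriday = new ComboBox();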

Any doubt in your mind as to what's in this combo box? Yeah, me neither. And that's pretty handy when you're reading code, especially if you're in some code-behind or any kind of MVC model-binding scheme. And, of the objections you might have, modern IDEs cover a lot of them. What if you later want to add Saturday and Sunday and the name becomes out of date? Easy to change now that just about all major IDEs have "rename all" support at your fingertips. Isn't the name a little over-descriptive? Sure, but who cares — it's not like you need to conserve valuable bytes of disk space. But with the cb name, you know it's a combo box! Your IDE should give you that information easily and quickly and, if it doesn't, get a plugin that tells you (at least for a statically typed language).

Try to avoid booleans as method parameters

This might seem a little weird at first, but, on the whole, your code will tend to be more readable and expressive if you don't do this. The reason for this is that boolean parameters are rarely data. Rather, they're generally control parameters. Consider this method signature:
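For the sake of argument, something like this (LogOutput is a made-up example):

public void LogOutput(string message, bool shouldLogToConsole)
{
    // ...
}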

This is a reasonably readable method signature, and what you can infer from it is that the method is going to log output to a file. Well, unless you pass it "true", in which case it will log to the console. And this tends to run afoul of the Single Responsibility Principle. This method is really two different methods kind of bolted together and it's up to a caller to figure that out. I mean, you can probably tell exactly what this method looks like:
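Probably something pretty close to this (a guess at the body; the file path is arbitrary, and the usual System and System.IO usings are assumed):

public void LogOutput(string message, bool shouldLogToConsole)
{
    if (shouldLogToConsole)
        Console.WriteLine(message);
    else
        File.AppendAllText("log.txt", message + Environment.NewLine);
}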

This method has two very distinct reasons to change: if you want to change the scheme for console logging and if you want to change the scheme for file logging. You’ve also established a design anti-pattern here, which is that you’re going to need to update this method (and possibly callers) every time a new logging strategy is needed.

Are there exceptions to this? Sure, obviously. But my goal here isn’t to convince you never to use a boolean parameter. I’m just trying to get you to think twice or three times about doing so. It’s a code smell.

If you type //, stop and extract a method

How many times do you see something like this:
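Something like this, give or take the specifics (Customer and UpdateRunningTotal are made up; the comment bookmarks are the point):

public void ProcessCustomer(Customer customer)
{
    // validate the customer
    if (customer == null)
        throw new ArgumentNullException("customer");
    if (string.IsNullOrEmpty(customer.Name))
        throw new ArgumentException("Customer must have a name.", "customer");

    // process the customer's orders
    foreach (var order in customer.Orders)
        UpdateRunningTotal(order);
}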

Would it kill you to do this:
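That is, something like this instead (ValidateCustomer being a freshly extracted method):

public void ProcessCustomer(Customer customer)
{
    ValidateCustomer(customer);

    // process the customer's orders
    foreach (var order in customer.Orders)
        UpdateRunningTotal(order);
}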

and put the rest in its own method? Now you've got smaller, more factored, and descriptive methods, and you don't need the comment. As a rule of thumb, if you find yourself creating "comment bookmarks" in your method like a table of contents with chapters, the method is too big and should be factored. And what better way to divide things up than to stop typing a comment and instead add a method with a descriptive name? So, when you find you've typed that "//", hit backspace twice, type the comment without spaces, and then slap a parenthesis on it and, voila, you've got a new method signature you can add.

Make variable name length vary with scope size

This seems like an odd thing to think about, but it will lead to clearer code that’s easier to read. Consider the following:
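Something along these lines (Customer, the class, and the gigantic static property are all illustrative; _processedCustomers matches the discussion below):

using System.Collections.Generic;

public class CustomerProcessor
{
    public static int TotalNumberOfCustomersProcessedSinceTheApplicationStarted { get; set; }

    private List<Customer> _processedCustomers = new List<Customer>();

    public void ProcessCustomers(IList<Customer> customers)
    {
        for (int i = 0; i < customers.Count; i++)
            _processedCustomers.Add(customers[i]);

        TotalNumberOfCustomersProcessedSinceTheApplicationStarted += customers.Count;
    }
}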

Notice that there are three scopes in question: method level scope (i), class level scope (_processedCustomers) and global scope (that gigantic public static property). The method level scope variable, i, has a really tiny name. And, why not? It’s repeated 4 times in 2 lines, but it’s only in scope for 2 lines. Giving it a long name would clog up those two lines with redundancy, and it wouldn’t really add anything. I mean, it’s not hard to keep track of, since it goes out of scope one line after being defined.

The class level scope variable has a more descriptive name because there’s a pretty good chance that its declaration will be off of your screen when you are using it. The extra context helps. But there’s no need to go nuts, especially if you’re following the Single Responsibility Principle, because the class will probably be cohesive. For instance, if the class is called CustomerProcessor, it won’t be too hard to figure out what a variable named “_processedCustomers” is for. If you have some kind of meandering, 2000 line legacy class that contains 40 fields, you might want to make your class level fields more descriptive.

The globally scoped variable is gigantic. The reason for this is twofold. First and most obviously, it’s in scope from absolutely anywhere with a reference to its containing assembly, so it better be very descriptive for context. And secondly, global state is icky, so it’s good to give it a name that discourages people from using it as much as possible.

In general, the broader the usage scope for a variable/property, the more context you’ll want to bake into its name.

Try to conform to the Principle of Least Surprise

This last one is rather subjective, but it’s good practice to consider. The Principle of Least Surprise says that you should aim to minimize the learning curve or inscrutability of code that you write, bearing in mind a target audience of your fellow developers (probably your team, unless you’re writing a more public API). As a slight caveat to this, I’d say it’s fair to assume a reasonable level of language proficiency — it might not make sense to write horribly non-idiomatic code when your team is likely to become more proficient later. But the point remains — it’s best to avoid doing weird or “clever” things.

Imagine stumbling across this bad boy that compares two integers… sort of:
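Something in this spirit (a reconstruction in miniature; the class and field names are guesses):

public class CustomerReconciler
{
    private int _customerCount;

    public bool AreEqual(int x, int y)
    {
        // change this before it goes to production!!!
        _customerCount = x;
        return x.ToString() == y.ToString();
    }
}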

What pops into your head? Something along the lines of, "why is that line about production in there?" Or maybe, "why does a comparison function set some count equal to one of the parameters?" Or is it, "why compare two ints by converting them to strings?" All of those are perfectly valid questions because all of those things violate the Principle of Least Surprise. They're surprising, and if you ask the original author about them, the answer will probably be some weird, "clever" solution to a problem that came up somewhere at some point. "Oh, that line about production is to remind me to go back and change that method. And, I set customer count equal to x because the only time this is used it's to compare customer count to something and I'm caching it for later and saving a database write."

One might say the best way to avoid this is to take a break and revisit your code as if you’re someone else, but that’s pretty hard to do and I would argue that it’s an acquired skill. Instead, I’d suggest playing a game where you pretend you’re about to show this code to someone and make mental note of what you start preparing yourself to explain. “Oh, yeah, I had to add 39 parameters to that method — it’s an interesting story, actually…” If you find yourself preparing to explain something, it probably violates the Principle of Least Surprise. So, rather than surprising someone, maybe you should reconsider the code.

Anyway, that’s all for the tips. Feel free to chime in if you have any you’d like to share. I’d be interested to hear them, and this list was certainly not intended to be exhaustive — just quick tips.

Be Idiomatic

I have two or three drafts now that start with the meta of me apologizing for being sparse in my posting of late. TL;DR is that I've started a new, out-of-town engagement and between ramp-up, travel and transitioning off of prior commitments, I've been pretty bad at being a regular blogger lately. Also, special apologies to followers of the Chess TDD series, but the wifi connection in my room has just been brutal for the desktop (using the hotel's little plugin converter), so it's kind of hard to get one of those posts going. The good news is that what I'm going to be doing next involves a good bit of mentoring and coaching, which lends itself particularly well to vaguely instructional posts, so hopefully the drought won't last. (For anyone interested in details of what I'm doing, check LinkedIn or hit me up via email/twitter).

Anywho, onto a brief post that I’ve had in drafts for a bit, waiting on completion. The subject is, as the title indicates, being “idiomatic.” In general parlance, idiomatic refers to the characteristic of speaking a language the way a native would. To best illustrate the difference between language correctness and being idiomatic, consider the expression, “go fly a kite.” If you were to say this to someone who had learned to speak flawless English in a classroom, that person would probably ask, “what kite, and why do you want me to fly it?” If you said this to an idiomatic USA English speaker (flawless or not), they’d understand that you were using a rather bland imprecation and essentially telling them to go away and leave you alone. And so we make a distinction between technically accurate language (syntax and semantics) and colloquially communicative language. The idiomatic speaker understands, well, the idioms of the local speech.

Applied to programming languages, the metaphor holds pretty well. It’s possible to write syntactically and semantically valid code (in the sense that the code compiles and does what the programmer intends at runtime) that isn’t idiomatic at all. I could offer all manner of examples, but I’ll offer the ones probably most broadly approachable to my reader base. Non-idiomatic Java would look like this:

And non-idiomatic C# would look like this:
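Think C# written with Java habits baked in — same-line braces, camelCase method names, getters and setters instead of properties (a representative sketch):

using System;

public class Greeter {
    private String name;

    public String getName() {
        return this.name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public void printGreeting() {
        Console.WriteLine("Hello, " + this.getName() + "!");
    }
}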

In both cases, the code will compile, run, and print what you want to print, but in neither case are you speaking it the way the natives would likely speak it. Do this in either case, and you’ll be lucky if you’re just laughed at.

Now you may think, because this simple example touches off a bike-shedding holy war about where to put curly braces, that I'm advocating for rigid conformance to coding standards or something. That's not my point at all. My point goes more along the lines of "When in Rome, speak like the Romans do." Don't walk in and say, "hey Buddy, give me an EX-PRESS-O!" Don't be bad at speaking the way the community does and proud of it.

Here is a much more subtle and iconic example of what I’m talking about. Do you ever see people in C# do this?
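I mean this sort of null check (the method and type are just scaffolding for the example):

public void ProcessTransaction(Transaction x)
{
    if (null == x)
        throw new ArgumentNullException("x");

    // ... the actual work ...
}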

I once referred to this as “The Yoda” in an old blog post. “If null is x, null argument exception you throw.” Anyone know why people do this, without cheating? Any guesses?

If you said, "it's an old C/C++ trick to prevent confusing assignment and comparison," you'd be right. In C and C++ you could do things like if(x = null), and what the compiler would do would be to assign null to x and then evaluate the result of that assignment as the condition — meaning the branch's behavior had nothing to do with what x had been previously, because the comparison you meant to write simply never happened. Intuitive, right? Well, no, not at all, and so C/C++ programmers got into the habit of yoda-ing to prevent a typo from compiling and doing something weird and unexpected.

And, some of them carry that habit right on through to other languages like C#. But the problem is, in C#, it’s pure cargo cult weirdness. If you make that typo in C#, the compiler just barfs. if(x = null) is not valid C#. So the C++ life-hack is pointless and serves only to confuse C# programmers, as evidenced by questions like this (and, for the record, I did not get the idea for my bloopers post name from Daniel’s answer, even though it predates the post — GMTA, I guess).

So, if you're doing this, the only upside is that you don't have to change your way of doing things in spite of the fact that you're using a completely different programming language. But the downside is that you needlessly confuse people. I suspect this and many other instances are a weird, passive-aggressive form of signaling. People who do this might be saying, "I'm a C++ hacker at heart and you can take my pointers, but you'll never take MY CONVENTIONS!!!" And maybe the dyed-in-the-wool Java guys write C# with curly brackets on the same line and vice-versa with C# guys in Java. It's the geekiest form of passive (aggressive) resistance ever. Or, maybe it's just habit.

But whatever it is, my advice is to knock it off. Embrace any language you find yourself in and pride yourself on how quickly you can become idiomatic and rid yourself of your "accents" from other languages. By really seeking out that conversion, you don't just appear more versed more quickly; I'd argue that you actually become more versed more quickly. For instance, if you look around and see that no one in C# uses The Yoda, it might cause you to hit google to figure out why not. And then you learn a language difference.

And there's plenty more where that came from. Why do people use more enums in language X than in language Y? Why do people do that "throws Blah" at the end of a Java method declaration? What's the thing with the {} after a class instantiation in C#? Why is it okay in JavaScript to assign an integer to "asdf"? If you're new to a language and, instead of asking questions like those, you say, "these guys are stupid; I'm not doing that," you're balling up an opportunity to learn about language features and differences and throwing it in the trash. It'll mean getting further out of your comfort zone faster, but in the end, you'll be a lot better at your craft.

Recall, Retrieval, and the Scientific Method

Improving Readability with Small Things

In my series on building a Chess game using TDD I’ve defined a value type called BoardCoordinate that I introduced instead of passing around X and Y coordinate integer primitives everywhere. It’s a simple enough construct:
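It looks more or less like this (a sketch that captures what matters for this discussion: a struct with two integer properties and no reference-typed fields):

public struct BoardCoordinate
{
    private readonly int _x;
    private readonly int _y;

    public int X { get { return _x; } }
    public int Y { get { return _y; } }

    public BoardCoordinate(int x, int y)
    {
        _x = x;
        _y = y;
    }
}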

This was a win early on the series to get me away from a trend toward Primitive Obsession, and I haven’t really revisited it since. However, I’ve found myself in the series starting to think that I want a semantically intuitive way to express equality among BoardCoordinates. Here’s why:
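The tests in question look something like this (a sketch; the MSTest attributes are incidental, and MovesFrom11's setup is left out):

using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class QueenTests
{
    // MovesFrom11 is a collection of BoardCoordinate representing the queen's
    // possible moves from (1, 1); its setup is elided here.
    private IEnumerable<BoardCoordinate> MovesFrom11;

    [TestMethod]
    public void Returns_1_2()
    {
        Assert.IsTrue(MovesFrom11.Any(coordinate => coordinate.X == 1 && coordinate.Y == 2));
    }

    [TestMethod]
    public void Returns_2_2()
    {
        Assert.IsTrue(MovesFrom11.Any(coordinate => coordinate.X == 2 && coordinate.Y == 2));
    }
}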

This is a series of unit tests of the "Queen" class that represents, not surprisingly, the Queen piece in chess. The definition of "MovesFrom11" is elided, but it's a collection of BoardCoordinate that represents the possible moves a queen has from square (1, 1) on the chess board.

This series of tests was my TDD footprint for driving the functionality of determining the queen’s moves. So, I started out saying that she should be able to move from (1,1) to (1,2), then had her also able to move to (2,2), etc. If you read the test, what I’m doing is saying that this collection of BoardCoordinates to which she can move should have in it one that has X coordinate of 1 and Y coordinate of 2, for instance.

What I don’t like here and am making mental note to change is this “and”. That’s not as clear as it could be. I don’t want to say, “there should be a coordinate in this collection with X property of such and such and Y property of such and such.” I want to say, “the collection should contain this coordinate.” This may seem like a small semantic difference, but I value readability to the utmost. And readability is a journey, not a destination — the more you practice it, the more naturally you’ll write readable code. So, I never let my foot off the gas.

During the course of the series, this nagging readability hiccup has caused me to note and refer to a TODO of implementing some kind of concept of equals. In the latest post, Sten asks in the comments, referring to my desire to implement equals, “isn’t that unnecessary since structs that doesn’t contain reference type members does a byte-by-byte in-memory comparison as default Equals implementation?” It is this question I’d like to address here in this post.

Not directly, mind you, because the assessment is absolutely spot on. According to MSDN:

If none of the fields of the current instance and obj are reference types, the Equals method performs a byte-by-byte comparison of the two objects in memory. Otherwise, it uses reflection to compare the corresponding fields of obj and this instance.

So, the actual answer to that question is simply, “yes,” with nothing more to say about it. But I want to provide my answer to that question as it occurred to me off the cuff. I’m a TDD practitioner and a C# veteran, for context.

Answering Questions on Language Knowledge

My answer, when I read the question was, “I don’t remember what the default behavior of Equals is for value types — I have to look that up.” What surprised me wasn’t my lack of knowledge on this subject (I don’t find myself using value types very often), but rather my lack of any feeling that I should have known that. I mean, C# has been my main language for the last 4 years, and I’ve worked with it for more years than that besides. Surely, I just failed some hypothetical job interview somewhere, with a cabal of senior developers reviewing my quiz answers and saying, “for shame, he doesn’t even know the default Equals behavior for value types.” I’d be laughed off of stack overflow’s C# section, to be certain.

And yet, I don’t really care that I don’t know that (of course, now I do know the answer, but you get what I’m saying). I find myself having an attitude of “I’ll figure things out when I need to know them, and hopefully I’ll remember them.” Pursuing encyclopedic knowledge of a language’s behavior doesn’t much interest me, particularly since those goalposts may move, or I may wind up coding in an entirely different language next month. But there’s something deeper going on here because I don’t care now, but that wasn’t always true — I used to.

The Scientific Method

When I began to think back on this, I think the drop off in valuing this type of knowledge correlated with my adoption of TDD. It then became obvious to me why my attitude had changed. One of the more subtle value propositions of TDD is that it basically turns your programming into an exercise in the Scientific Method with extremely rapid feedback. Think of what TDD has you doing. You look at the code and think something along the lines of, “I want it to do X, but it doesn’t — why not?” You then write a test that fails. Next, you look at the code and hypothesize about what would make it pass. You then do that (experimentation) and see if your test goes green (testing). Afterward, you conduct analysis (do other tests pass, do you want to refactor, etc).

Now you’re probably thinking (and correctly) that this isn’t unique to TDD. I mean, if you write no unit tests ever, you still presumably write code for a while and then fire up the application to see if it’s doing what you hypothesized that it would while writing it. Same thing, right?

Well, no, I'd argue. With TDD, the feedback loop is tight and the experiments are more controlled and, more importantly, isolated. When you fire up the GUI to check things out after 10 minutes of coding, you've doubtless economized by making a number of changes. When you see a test go green in TDD, you've made only one specific, focused change. The "modify, then verify application behavior" method has too many simultaneous variables to be scientific in approach.

Okay, fine, but what does this have to do with whether or not I value encyclopedic language knowledge? That’s a question with a slightly more nuanced answer. After years of programming according to this mini-scientific method, what’s happened is that I’ve devalued anything but “proof is in the pudding” without even realizing it. In other words, I sort of think to myself, “none of us really knows the answer until there’s a green test proving it to all of us.” So, my proud answer to questions like, “wouldn’t it work to use the default equals method for value types” has become, “dunno for certain, let’s write a test and see.”
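To make that concrete, the "write a test and see" answer to Sten's question might look like this throwaway test (reusing the BoardCoordinate struct from earlier):

[TestMethod]
public void Default_Equals_Returns_True_For_Identical_Coordinates()
{
    var firstCoordinate = new BoardCoordinate(1, 2);
    var secondCoordinate = new BoardCoordinate(1, 2);

    // the default value-type Equals compares the fields, so this goes green
    Assert.IsTrue(firstCoordinate.Equals(secondCoordinate));
}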

False Certainty

Why proud? Well, I’ll tell you a brief story about a user group I attended a while back. The presenter was doing a demonstration on Linq, closures, and deferred execution and he made the presentation interactive. He’d show us methods that exposed subtle, lesser known behaviors of the language in this context and the (well made) point was that these things were complex and trying to get the answers right was humbling.

It’s generally knowledgeable people that attend user groups and often even more knowledgeable people that brave the crowd to go out on a limb and answer questions. So, pretty smart C# experts were shouting out their answers to “what will this method return” and they were getting it completely wrong because it was hard and it required too much knowledge of too many edge cases in too short a period of time. A friend of mine said something like, “man, I don’t know — slap a unit test on it and see.” And… he’s absolutely right, in my opinion. We’re not language authors, much less compilers and runtimes, and thus the most expedient answer to the question comes not from applying amassed language knowledge but from experimentation.

Think now of the world of programming over the last 50 years. In times where compiles and executions were extremely costly or lengthy, you needed to be quite sure that you got everything right ahead of time. And doing so required careful analysis that could only be done well with a lot of knowledge. Without prodigious knowledge of the libraries and languages you were using, you would struggle mightily. But that’s really no longer true. We’re living in an age of abundant hardware power and lightning fast feedback where knowing where to get the answers quickly and accurately is more valuable than knowing them. It’s like we’ve been given the math textbook with the answers in the back and the only thing that matters is coming up with the answers. Yeah, it’s great that you’re enough of a hotshot to get 95% of the answers right by hand, but guess what — I can get 100% of them right and much, much faster than you can. And if the need to solve new problems arises, it’s still entirely possible for me to work out a good way to do it by using the answer to deduce how the calculation process works.

Caveats

In the course of writing this, I can think of two valid objections/comments that people might have critiquing what I’m saying, so I’d like to address them. First of all, I’m not saying that you should write production unit tests to answer questions about how the framework/language works. Unit testing the libraries and languages that you use is an anti-pattern. I’m talking about writing tests to see how your code will behave as it uses the frameworks and languages. (Although, a written and then deleted unit test is a great, fast-feedback way to clarify language behavior to yourself.)

Secondly, I’m not devaluing knowledge of the language/framework, nor am I taking pride in my ignorance of it. I didn’t know how the default Equals behavior worked for value types yesterday, and today I do. That’s an improvement. The reason it’s an improvement is that the knowledge is now stored in a more responsive cache. I maintain that having the knowledge is trumped by knowing how to acquire it, and I think of the lookup hierarchy the way a computer does: reaching into my own personal memory stores is the CPU cache, writing a quick test to see is RAM, and looking it up on the internet or asking a friend is disk.

The more knowledge you have of the way the languages and frameworks you use work, the less time you’ll have to sink into proving behaviors to yourself, so that’s clearly a win. To continue the metaphor, what I’m saying is that there’s no value or sense in going out preemptively and loading as much as you can from disk into the CPU cache so that you can show others that it’s there. In our world, memory and disk lookups are just no longer expensive enough to make that desirable.


Visualization Mnemonics for Software Principles

Whether it’s because you want to be able to participate in software engineering discussions without having to surreptitiously look things up on your phone, or whether it’s because you have an interview coming up with a firm that wants you to be some kind of expert in OOP or something, you probably have at least some desire to be knowledgeable about development terms. This is probably doubly true of you since, ipso facto, you read blogs about software.

Toward that end, I’m writing this post. My goal is to provide you with a series of somewhat vivid ways to remember software concepts so that you’ll have a fighting chance at remembering what they’re about sometime later. I’m going to do this by telling a series of stories. So, I’ll get right to it.

Law of Demeter

Last week I was on a driving trip and I stopped by a gas station to get myself a Mountain Dew for the sake of road alertness. I grabbed the soda from the cooler and plopped it down on the counter, prompting the clerk to say, “that’ll be $1.95.” At this point, naturally, I removed my pants and the guy started screaming at me about police and indecent exposure. Confused, I said, “look, I’m just trying to pay you — I’ll hand you my pants and you go rummaging around in my pockets until you find my wallet, which you’ll take out and go looking through for cash. If I’m due change, put it back into the wallet, unless it’s a coin, in which case just put it in my pocket, and give me back the pants.” He pulled a shotgun out from behind the counter and told me that in his store, people obey the Law of Demeter or else.


So what does the Law of Demeter say? Well, anecdotally, it says “give collaborators exactly what they’re asking for and don’t give them something they’ll have to go picking through to get what they want.” There’s a reason we don’t hand the clerk our pants (or even our wallet) at the store and just hand them money instead; it’s inappropriate to send them hunting for the money. The Law of Demeter encourages you to think this way about your code. Don’t return Pants and force clients of your method to get what they want by invoking Pants.Pockets[1].Wallet.Money — just give them a Money. And, if you’re the clerk, don’t accept someone handing you a Pants and then go rummaging through it for the money yourself — demand the Money or show them your shotgun.
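In code, the difference might look something like this rough sketch (the Pants, Wallet, and Money types are hypothetical, named to match the story):

using System.Collections.Generic;

public class Money { public decimal Amount { get; set; } }
public class Wallet { public Money Money { get; set; } }
public class Pocket { public Wallet Wallet { get; set; } }
public class Pants { public List<Pocket> Pockets { get; } = new List<Pocket>(); }

public class Clerk
{
    // Violates the Law of Demeter: the clerk rummages through your pants
    // to dig out the thing he actually wants.
    public void AcceptPayment(Pants pants)
    {
        var cash = pants.Pockets[1].Wallet.Money;
        // ring up the sale with cash...
    }

    // Respects the Law of Demeter: ask for exactly what you need.
    public void AcceptPayment(Money money)
    {
        // ring up the sale with money...
    }
}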

Single Responsibility Principle

My girlfriend and I recently bought an investment property a couple of hours away. It’s a little house on a lake that was built in the 1950s and, while cozy and pleasant, it doesn’t have all of the modern amenities that I might want, resulting in a series of home improvement projects to re-tile floors, build some things out, and knock some things down. That kind of stuff.

One such project was installing a garbage disposal, which has two components: plumbing and electrical. The plumbing part is pretty straightforward in that you just need to remove the existing drain pipe and insert the disposal between the drain and the drainage pipe. The electrical is a little more interesting in that you need to run wiring from a switch to the disposal so that you can turn it on and off. Now, naturally, I didn’t want to go to all the hubbub of creating a whole different switch, so I decided just to use one that was already there. The front patio light switch had the responsibility for turning the front patio light on and off, but I added a little to its burden, asking it also to control the garbage disposal.

That’s worked pretty well. So far the only mishap occurred when I was rinsing off some dishes and dropped a spoon in the drain while, at the same time, my girlfriend turned the front light on for visitors we were expecting. Luckily, I had only a minor scrape and a mangled spoon, and that’s a small price to pay to avoid creating a whole new light switch. And really, what’s the worst that could happen?

Well, I think you know that the worst thing that could happen is someone losing a hand to this absurd design. That’s what you invite when you run afoul of the Single Responsibility Principle, which could loosely be described as saying “do one thing only and do that thing well” or “have only one reason to change.” In my house, we have two reasons to change the state of the switch: turning on the disposal and turning on the light, and this creates an obvious problem. The parallel situation in code is true as well. If you have a class that needs to be changed whenever schema updates occur and whenever GUI changes occur, then you have a class that serves two masters, and changes to one concern can easily break the other. Disk space is cheap and classes/namespaces/modules are renewable resources. When in doubt, create another one.
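A code-level sketch of the same idea, with hypothetical class names (one class serving two masters, then one class per master):

// Two reasons to change in one place: schema updates and GUI tweaks
// both force edits to this class.
public class CustomerScreen
{
    public void RenderCustomerGrid() { /* GUI concern */ }
    public string BuildCustomerInsertSql() { return "INSERT INTO Customer ..."; }
}

// One responsibility apiece: a schema change can no longer break the GUI code.
public class CustomerView
{
    public void RenderCustomerGrid() { /* GUI concern only */ }
}

public class CustomerSqlMapper
{
    public string BuildCustomerInsertSql() { return "INSERT INTO Customer ..."; }
}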

Open/Closed Principle

I don’t have a ton of time for TV these days, and that’s mainly because TV is so time consuming. It was a lot simpler when I just had a TV that got an analog signal over the air. But then, things went digital, so I had to take apart my TV and rewire it to handle digital signals. Next, we got cable and, of course, there I am, disassembling the TV again so that we can wire it up to get a cable signal. The worst part of that was that when I became furious with the cable provider and we switched to Dish, I was right back to work on the TV. Now, we have a Nintendo Wii, a DVD player, and a Roku, but who has the time to take the television apart and rewire it to handle each of these additional items? And if that weren’t bad enough, I tried hooking up an old school Sega Genesis last year, and my Dish stopped working.

… said no one, ever. And the reason no one has ever said this is that televisions you purchase follow the Open/Closed Principle, which basically says that you should create components that are closed to modification, but open for extension. Televisions aren’t made to be disassembled by you, and certainly no one expects you to hack into the guts of the TV just to plug some device into it. That’s what the Coax/RCA/Component/HDMI/etc. feeds are for. With the inputs and the sealed-under-warranty case, your television is open for extension, but closed for modification. You can extend its functionality by plugging anything you like into it, including things not even out yet, like an X-Box 12 or something. Follow this same concept for flexible code. When you write code, strive to maximize flexibility by facilitating maintenance via extension and new code. If you program to interfaces or allow overriding of behavior via inheritance, life is a lot easier when it comes time to change functionality. So favor that over writing some juggernaut class that you have to go in and modify literally every sprint. That’s icky, and you’ll learn to hate that class and the design around it the same way you’d hate the television I just described.
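Translating the television into code, a sketch might look like the following (IInputDevice, Television, and the devices are all made-up names for illustration):

// The television exposes a port and never needs its case cracked open.
public interface IInputDevice
{
    Signal GetSignal();
}

public class Signal { }

public class Television
{
    // Closed for modification: plugging in a new device changes nothing here.
    public void Display(IInputDevice input)
    {
        var signal = input.GetSignal();
        // render the signal on screen...
    }
}

// Open for extension: new devices just implement the interface,
// even ones that did not exist when the TV shipped.
public class Roku : IInputDevice
{
    public Signal GetSignal() { return new Signal(); }
}

public class SegaGenesis : IInputDevice
{
    public Signal GetSignal() { return new Signal(); }
}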

Liskov Substitution Principle

I’m someone that usually eats a pretty unremarkable salad with dinner. You know, standard stuff: lettuce, tomatoes, croutons, scallions, carrots, and hemlock. One thing that I seem to do differently than most, however, is that I examine each individual item in the salad to see whether or not it will kill me before I put it into my mouth (a lot of other salad consumers seem to play pretty fast and loose with their lives, sheesh). I have a pretty simple algorithm for this. If the item is not hemlock, I eat it. If it is hemlock, I put it onto my plate to throw out later. I highly recommend eating your hemlock salad this way.

Or, you could bear in mind the Liskov Substitution Principle, which basically says that if you’re going to have an inheritance relationship, then derived types should be seamlessly swappable for their base type. So, if I have a salad full of Edibles, I shouldn’t have some derived type, Hemlock, that doesn’t behave the way other Edibles do. Another way to think of this is that if you have a heterogeneous collection of things in an inheritance hierarchy, you shouldn’t go through them one by one and say, “let’s see which type this is and treat it specially.” So, obey the LSP and don’t make hemlock salads for people. You’ll have cleaner code and avoid jail.
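Here is a rough sketch of what the hemlock salad looks like in code (Edible, Hemlock, and friends are invented for the story); the type check in the loop is the smell that tells you substitutability is broken:

using System;
using System.Collections.Generic;

public abstract class Edible
{
    public abstract void Eat();
}

public class Lettuce : Edible
{
    public override void Eat() { /* crunchy and harmless */ }
}

// Violates the LSP: Hemlock cannot stand in wherever an Edible is expected.
public class Hemlock : Edible
{
    public override void Eat()
    {
        throw new InvalidOperationException("Poison!");
    }
}

public class SaladEater
{
    public void EatSalad(IEnumerable<Edible> salad)
    {
        foreach (var item in salad)
        {
            // Having to ask "which type is this?" defeats the point of the hierarchy.
            if (item is Hemlock)
                continue;

            item.Eat();
        }
    }
}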

Interface Segregation Principle

Thank goodness for web page caching — it’s a life saver. Whenever I go to my favorite dictionary site, expertbeginnerdictionary.com (not a real site if you were thinking of trying it), it prompts me for a word to lookup and, when I type in the word and hit enter, it sends me the dictionary over HTTP, at which time I can search the page text with Ctrl-F to find my word. It takes such a long time for my browser to load the entire English dictionary that I’d be really up a creek without page caching. The only trouble is, whenever a word changes and the cache is invalidated, my next lookup takes forever while the browser re-downloads the dictionary. If only there were a better way…

… and there is. Don’t give me the entire dictionary when I want to look up a word. Just give me that word. If I want to know what “zebra” means, I don’t care what “aardvark” means, and my zebra lookup experience shouldn’t be affected and put at risk by changes to “aardvark.” I should only be depending on the words and definitions that I actually use, rather than the entire dictionary. Likewise, if you’re defining public interfaces in your code for clients, break them into minimum composable segments and let your clients assemble them as needed, rather than forcing the kitchen sink (or dictionary) on them.  The Interface Segregation Principle says that clients of an interface shouldn’t be forced to depend on methods that they don’t use because of the excess, pointless baggage that comes along.  Give clients the minimum that they need.
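In interface terms, the contrast might look something like this (the dictionary-flavored interfaces are hypothetical):

using System.Collections.Generic;

// The kitchen-sink interface: a client that only wants to look up "zebra"
// still depends on editing and bulk-download concerns.
public interface IDictionarySite
{
    string DefineWord(string word);
    void AddWord(string word, string definition);
    void RemoveWord(string word);
    IReadOnlyList<string> DownloadEntireDictionary();
}

// Segregated: lookup clients depend only on lookups, and editors only on edits.
public interface IWordLookup
{
    string DefineWord(string word);
}

public interface IDictionaryEditor
{
    void AddWord(string word, string definition);
    void RemoveWord(string word);
}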

Dependency Inversion Principle

Have you ever been to an automobile factory?  It’s amazing to watch how these things are made.  They start with a car, and the car assembles its own engine, seats, steering wheel, etc.  It’s pretty amazing to watch.  And, for a real treat, you can watch these sub-parts assemble their own internals.  The engine builds its own alternator, battery, transmission, etc — a breathtaking feat of engineering.  Of course, there’s a downside to everything, and, as cool as this is, it can be frustrating that the people in the factory have no control over what kind of engine the car builds for itself.  All they can do is say, “I want a car” and the car does the rest.

I bet you can picture the code base I’m describing.  A long time ago, I went into detail about this piece of imagery, but I’ll summarize by saying that this is “command and control” programming, where constructors of objects instantiate all of the object’s dependencies — FooService instantiates its own logger.  This runs afoul of the Dependency Inversion Principle, which holds that high-level modules, like Car, should not depend directly on lower-level modules, like Engine, but rather that both should depend on an abstraction of the Car-Engine interaction.  This allows the car and the engine to vary independently, meaning that our automobile factory workers actually could have control over which engines go in which cars.  And, as described in the linked post, a code base making heavy use of the Dependency Inversion Principle tends to be composable, whereas a command and control style code base is not, favoring instead the “car, build thyself” approach.  So, to remember and understand the Dependency Inversion Principle, ask yourself who should control what parts go in your car — the people building the car, or the car itself?  Only one of those ideas is preposterous.
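As a sketch in code (Car, Engine, and IEngine are stand-ins for the metaphor, not anything from a real code base):

// Command and control: the car builds its own engine,
// and the factory floor gets no say in the matter.
public class SelfAssemblingCar
{
    private readonly Engine _engine = new Engine();

    public void Drive()
    {
        _engine.Start();
    }
}

// Dependency inversion: Car depends on an abstraction, and whoever
// assembles the object graph decides which engine goes in which car.
public interface IEngine
{
    void Start();
}

public class Engine : IEngine
{
    public void Start() { /* vroom */ }
}

public class Car
{
    private readonly IEngine _engine;

    public Car(IEngine engine)
    {
        _engine = engine;
    }

    public void Drive()
    {
        _engine.Start();
    }
}

// At the "factory":
// var car = new Car(new Engine());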

Incidentally, five of the principles here compose the SOLID Principles. If you’d like a much deeper dive, check out the Pluralsight course on the topic.
