Stories about Software


What I Learned from Learning about SpecFlow

In my ChessTDD series, I was confronted with the need to create some actual acceptance tests. Historically, I’d generally done this by writing something like a console application that would exercise the system under test. But I figured this series was about readers/viewers and me learning alongside one another on a TDD journey through a complicated domain, so why not add another piece of learning to the mix? I started watching a Pluralsight course about SpecFlow and flubbing my way through it in episodes of my series.

But as it turns out, I picked up SpecFlow quickly. Like, really quickly. As much as I’d like to think that this is because I’m some kind of genius, that’s not the explanation by a long shot. What’s really going on is a lot more in line with the “Talent is Overrated” philosophy: the deck was stacked in my favor by tons and tons of deliberate practice.

SpecFlow is somewhat intuitive, but not remarkably so. You create these text files, following a certain kind of format, and they’re easy to read. And then, through behind-the-scenes magic, they get tied to actual code files (not the generated, hard-to-read “code behind” for the feature file, but step definition code that you tie to the feature yourself in one of a few different ways). SpecFlow in general relies a good bit on this magic, and anytime there’s magic involved, relatively inexperienced developers can easily be thrown for loops. To remind myself of this fact, all I need to do is go back in time eight years or so, to when I was struggling to wrap my head around how Spring and an XML file in the Java world made it so that I never invoked constructors anywhere. IoC containers were utter black magic to me; how does this thing get instantiated, anyway?!
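
To make that concrete, here’s a minimal sketch of the wiring involved, assuming SpecFlow’s attribute-based binding style. The scenario text and the chess-flavored names are invented for illustration (loosely inspired by the ChessTDD domain); the [Binding], [Given], [When], and [Then] attributes are the real SpecFlow mechanism for tying plain-English steps to code:

    // The feature file is plain text that reads like English:
    //
    //   Scenario: Pawn advances one square
    //     Given a pawn on square "a2"
    //     When I move it to "a3"
    //     Then the move is legal
    //
    // SpecFlow matches each step to a method below via regexes.
    using System;
    using TechTalk.SpecFlow;

    [Binding]
    public class PawnMovementSteps
    {
        private string _startingSquare;
        private bool _moveWasLegal;

        [Given(@"a pawn on square ""(.*)""")]
        public void GivenAPawnOnSquare(string square)
        {
            _startingSquare = square;
        }

        [When(@"I move it to ""(.*)""")]
        public void WhenIMoveItTo(string square)
        {
            // Invented stand-in logic; a real step would invoke the
            // system under test here.
            _moveWasLegal = _startingSquare == "a2" && square == "a3";
        }

        [Then(@"the move is legal")]
        public void ThenTheMoveIsLegal()
        {
            if (!_moveWasLegal)
                throw new Exception("Expected the move to be legal.");
        }
    }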



Professional Code

About a year ago, I read this post in my feed reader and created a draft with a link to it and a little note to myself that said, “interesting subject.” Over the past weekend, I was going through old drafts that I’d never gotten around to finishing and looking to remedy the situation when I came across this one and decided to address it.

To be perfectly honest, I had no idea what I was going to write about a year ago. I can’t really even speculate. But I can talk a bit about what I think of now as professional code. Like Ayende and Trystan, I don’t think it’s a matter of following certain specific and abiding principles like SOLID so much as it is something else. They talk about professional code in terms of how quickly maintainers can understand it, since a professional should be able to grasp what’s going on in the code and respond to the need for change. I like this assessment but categorize professionalism in code slightly differently: I think of it as the degree to which things that are rational for users to want or expect can be done easily.

To illustrate, I’ll start with a counter-example, lifted from my past and obfuscated a bit. A handful of people had written an application that centered around modifications to an XML file. The XML file and the business rules governing its contents were fairly complex, so it wasn’t a trivial application. The authors of this app had opted to prevent concurrent edits and race conditions by implementing an abstraction wherein the file was represented by a singleton class. Predictably, the design heavily depended on XmlFile.Instance.CallSomeMethod() style invocations.
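
For the sake of illustration, here’s a minimal sketch of the shape of that design. The names are obfuscated/invented, but the singleton structure is the point:

    public sealed class XmlFile
    {
        private static readonly XmlFile _instance = new XmlFile();

        // Every consumer in the code base goes through this property,
        // which bakes in the assumption that exactly one file exists.
        public static XmlFile Instance
        {
            get { return _instance; }
        }

        private XmlFile() { }

        public void CallSomeMethod()
        {
            // Reads from or writes to the one and only file.
        }
    }

With that in place, the idea of loading two versions of the file side by side (exactly what a diff requires) is a contradiction in terms.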

One day, someone in the company expressed that it’d be a nice value-add to allow this application to show differences between incarnations of this XML file — a diff of the changes, if you will. When this idea was presented to the lead/architect of this code base, he scoffed and actually became sort of angry. Evidently, this was a crazy request. Why would anyone ever want to do that? Inconceivable! And naturally, this was completely infeasible without a rewrite of the application, and good luck getting that through.

If you’re looking for a nice ending to this story, you’re barking up the wrong tree. The person asking for this was humbled, when it should have been the person with the inflexible design. As a neutral observer, I was amazed at this exchange — but then again, I knew what the code looked like. The requester went away feeling dumb because the scoffer had a lot of organizational clout, so everyone assumed the scoffing was appropriate. But I knew better.

What had really happened was that a questionable design decision (representing an XML file as a singleton instance) became calcified as a cornerstone assumption of the application. Then along came a user with a perfectly reasonable request, and the request was rebuffed because the system, as designed, simply couldn’t handle it. I think of this as equivalent to you calling up the contractor that built your house and asking him if he’d be able to paint your living room, and having him respond, “not the way I built your house.”

And that, to me, is unprofessional code. And, I don’t mean it in the sense that you often hear it when people are talking about childish or inappropriate behavior — I mean that it actually seems like amateur hour. The more frequently you tell your users that things that seem easy are actually really difficult, the less professional your code is going to seem. The reasoning is the same as with the example of a contractor that somehow built your house so that the walls couldn’t be painted. It represents a failure to understand and anticipate the way the systems you design tend to evolve and change in the wild, which is indicative of a lack of relevant professional experience. Would a seasoned, professional contractor fail to realize that most people want to paint the rooms in their houses sooner or later? Would a seasoned, professional software developer fail to realize that someone might want multiple instances of a file type?

Don’t get me wrong. I’m not saying that you’re a hack if there’s something that a user dreams up and thinks will be easy that you can’t do. There are certainly cases where something that seems easy won’t be easy, and it doesn’t mean that your design is bad or unprofessional. I’m talking about what I perceive to be a general, overarching trend. If changes to the software seem like they should be easy, then they probably should be easy. If you’ve added 20 different customer types to your system, it’d be weird if adding a 21st was extremely hard. If you currently support storing data in a database or to a file, it’d be weird if there was a particular record type that you couldn’t put in a file. If you have some concept of security and roles in your system, it’d be weird if adding a user required a re-deployment of your software.

According to the Clean Code videos by Bob Martin, a defining characteristic of good architecture is that it allows decisions to be deferred as long as possible. If the architecture is well designed, for instance, you should be able to write a lot of the code without knowing if it’s going to be a web app or desktop app or without knowing whether you’d use MySQL or PostgreSQL or MongoDB. I’d carry this a bit further and say that being able to anticipate what users might want and what they might change their minds about and then designing accordingly is the calling card of a writer of professional code.
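
As a hedged sketch of what that deferral can look like (all names here are invented for illustration), the application logic below depends on an abstraction rather than on any particular database, so the storage decision can be made late and changed cheaply:

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerStore
    {
        Customer Find(int id);
        void Save(Customer customer);
    }

    public class CustomerService
    {
        private readonly ICustomerStore _store;

        // The service neither knows nor cares whether MySQL, PostgreSQL,
        // MongoDB, or a flat file sits behind ICustomerStore.
        public CustomerService(ICustomerStore store)
        {
            _store = store;
        }

        public void Rename(int id, string newName)
        {
            var customer = _store.Find(id);
            customer.Name = newName;
            _store.Save(customer);
        }
    }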


Good Magic and Bad Magic

Not too long ago, I was working with a web development framework that I had inherited on a project, and I was struggling mightily to get it to work. The functionality was not discoverable at all, and it was provided almost exclusively via inheritance rather than composition. Runtime debugging was similarly fruitless, as a great deal of functionality was obfuscated via heavy use of reflection and a lot of “squishing” of the application to fit into a forms-over-data paradigm (binding the GUI right to data sources, for instance). Generally, you would find some kind of prototype class/form to look at, try to adapt it to what you were doing, and struggle for a few hours before reverse engineering the fact that you weren’t properly setting some random property defined in an ancestor class. Until you set this string property to “asdffdsa,” absolutely nothing would work. When you finally figured out the answer, the reaction wasn’t one of triumph but indignation. “Really?!? That’s what just ate the last two hours of my life?!?”

I remember a different sort of experience when I started working with Web API. With that technology, I frequently found myself thinking things like “this seems like it should work,” and then, lo and behold, it did. In other words, I’d write a bit of code or give something a name that I figured would make sense in context, and things just kind of magically worked. It was a pretty heady feeling, and comparing these two experiences is a study in contrast.

One might say that this is a matter of convention versus configuration. After all, having to set some weird, non-discoverable string property is really configuration, and a lot of the newer web frameworks, Web API included, rely heavily on convention. But I think it goes beyond that and into the concepts that I’ll call “good and bad magic.” And the reason I say it’s not the same is that one could pretty easily establish non-intuitive conventions and have extremely clear, logical configurations.

When I talk about “magic,” I’m talking about things that happen behind the scenes in a code base or application. This is “magic” in the sense that you can’t spell “automagically” without “magic.” In an MVC or Web API application, the routing conventions and the ways that views and controllers are selected are magic. You create FooController and FooView in the right places, and suddenly, when you navigate to app/Foo, things just work. If you want to customize and change things, you can, but it’s not a battle. By default, it does what it seems like it ought to do. The dark side of magic is the application I described in the first paragraph — the one in which all of the other classes were working because of some obscure setting of a cryptically named property defined in a base class. When you define your own class, everything blows up because you’re not setting this property. It seems like the class should just work, but due to some hidden hocus-pocus, it actually doesn’t.
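
To put a concrete face on the good kind, here’s roughly what that looks like in ASP.NET Web API, assuming Web API 2 and its default api/{controller} routing convention:

    using System.Web.Http;

    // A GET request to api/foo lands on the Get() method below purely
    // because of the FooController naming convention; no explicit
    // wiring is required.
    public class FooController : ApiController
    {
        public IHttpActionResult Get()
        {
            return Ok("It just works.");
        }
    }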

The reason I mention all this is to offer some advice based on my own experience. When you’re writing code for other developers (and you almost always are because, sooner or later, someone besides you will be maintaining your code or invoking it), think about whether your code hides some magic from others. This will most likely be the case if you’re writing framework or utility code, but it can always apply. If your code is, in fact, doing things that will seem magical to others, ask yourself if it’s good magic or bad magic. A good way to answer this question for yourself is to ask yourself how likely you think it will be that you’ll need to explain what’s going on to people that use your code. If you find yourself thinking, “oh, yeah, they’ll need some help — maybe even an instruction manual,” you’re forcing bad magic on them. If you find yourself thinking, “if they do industry standard things, they’ll figure it out,” you might be in good shape.

I say you might be in good shape because it’s possible you think others will understand it, but they won’t. This comes with practice, and good magic is hard. Bad magic is depressingly easy. So if you’re going to do some magic for your collaborators, make sure it’s good magic because no magic at all is better than bad magic.


Seeing the Value in Absolutes

The other day, I told a developer on my team that I wouldn’t write methods with more than three parameters. I said this in a context where many people would instead say, “don’t write code with more than three parameters in a method,” since I am the project architect and coding decisions are mine to make. However, I feel that the way you phrase things has a powerful impact on people, and I believe code reviews that feature orders to change items in the code are creativity-killing and soul-sucking. So, as I’ve explained to people on any number of occasions, my feedback consists neither of statements like “that’s wrong” nor statements like “take that out.” I specifically and always say, “that’s not what I would do.” I’ve found that people listen to this the overwhelming majority of the time and, when they don’t, they often have a good reason I hadn’t considered. No barking of orders necessary.

But back to what I said a few days ago. I basically stated the opinion that methods should never have more than three parameters. And right after I had stated this, I was reminded of the way I’ve seen countless conversations go in person, on help sites like Stack Overflow, and in blog comments. Does this look familiar?

John: You should never have more than three parameters in a method call.
Jane: Blanket statements like that tend to be problematic. More than three method parameters is really, technically, more of a “code smell” than necessarily a problem. It’s often a problem, but it might not be.
John: I think it’s necessarily a problem. I can’t think of a situation where that’s desirable.
Jane: How about when someone is holding a gun to your head and telling you to write a method that takes four parameters?
John: (Rolls his eyes)
Jane: Look, there’s probably a better example. All I’m saying is you should never use absolutes, because you never know.
John: “You should never use absolutes” is totally an absolute! You’re a hypocrite!
Both: (Devolves into pointless bickering)

A lot of times during debates, particularly when you have smart and/or exacting participants, the conversation is derailed by a sort of “gotcha” game of one-upmanship. It’s as though the participants reach an impasse on the crux of the matter, so the two begin sniping at one another about tangentially related or totally unrelated minutiae until someone makes a slip, and this somehow settles the debate. Of course, it’s an exercise in futility because both sides think their opponent is the first to slip up. Jane thinks she’s won this argument because John used an absolute qualifier and she pointed out some (incredibly preposterous and contrived) counter-example, and John thinks he won with his closing ad hominem about Jane’s hypocrisy.

In this debate, they both lose, in my opinion. I agree with John’s premise but not his justification, and the difference matters. And Jane’s semantic nitpicking doesn’t get us to the right justification (counter or pro), either. Prescriptive matters of canon when it comes to programming are troubling for the same reason that absolutes are often troubling in our day-to-day lives. Even the most clear-cut-seeming things, like “it’s morally reprehensible to kill people,” wind up having many loopholes in their application (“it’s morally reprehensible to kill people — unless, of course, it’s war, self-defense, certain kinds of revenge for really bad things, accidental, state-sanctioned execution, etc., etc.”). So for unimportant stuff like the number of parameters to a method, aren’t we kind of hosed and thus stuck in a relativistic quagmire?

I’d argue not, and furthermore, I’d argue that the fact of the rules is more important than the rules themselves. It’s more important to have a restriction like “don’t have more than three parameters to a method” than it is to have that specific restriction. If it were “don’t have more than two method parameters” or “don’t have more than four method parameters,” we’d still be sitting pretty. Why, you ask? Well, a man named Barry Schwartz wrote a book called “The Paradox of Choice: Why More Is Less.” Restrictions limit choice, and limiting choice can be merciful.

Developers are smart, and they want to solve problems — often hard problems. But, really, they want to solve directed problems efficiently. To understand what I mean, ask yourself which of these propositions is more appealing to you: (1) make a website that does anything in any programming language with any framework or (2) use F# to parse a large text file and have the running process use no more than 1 gig of memory. The first proposition makes your head hurt while the second gets your mental juices flowing as you decide whether to try to solve the problem algorithmically or to cheat and write interim results to disk.

Well, the same thing happens with a lot of the “best practice” rules that surround us in software development. Don’t make your classes too big. Don’t make your methods too big. Don’t have too many parameters. Don’t repeat your code. While they can seem like (and be, if you don’t understand the purpose behind them) cargo-cult mandates if you simply focus on the matter of relativism vs absolutes, they’re really about removing (generally bad) options so that you can be creative within the context remaining, as well as productive and happy. Developers who practice DRY and who write small classes with small methods and small method signatures don’t have to spend time thinking “how many parameters should this method have” or “is this class getting too long?” Maybe this sounds restrictive or draconian to you, but think of how many options have been removed from you by others: “does the code have to compile,” or “is the code allowed to wipe out our production data?” If you’re writing code for any sort of business purposes, the number of things you can’t do dwarfs the number of things you can.

Of course, just having rules for the sake of rules is the epitome of dumb cargo cult activity. The restrictions have to be ones that contribute overall to a better code base. And while there may be some debate about this, I doubt that anyone would really argue with statements like “favor small methods over large ones” and “favor simple signatures over complex ones.” Architects (or self-organizing teams) need to identify general goals like these and turn them into liberating restrictions that remove paralysis by analysis while keeping the code base clean. I’ve been of the opinion for a while now that one of the core goals of an architect should be providing a framework that prevents ‘wrong’ decisions so that the developers can focus on being creative and solving problems rather than avoiding pitfalls. I often see this described as “making sure people fall into the pit of success.”


Going back to the “maximum of three parameters” rule, it’s important to realize that the question isn’t “is that right 99% of the time or 100% of the time?” While Jane and John argue over that one percent, developers on their team are establishing patterns and designs predicated upon methods with 20 parameters. Who cares if there’s some API somewhere that really, truly, honestly makes it better to use four parameters in that one specific case? I mean, great — you proved that on a long enough timeline, weird aberrations happen. But you’re missing out on the productivity-harnessing power of imposing good restrictions. The developers in the group might agree, or they might be skeptical. But if they care enough to be skeptical, it probably means that they care about their craft and enjoy a challenge. So when you present it to them as a challenge (in the same way speeding up runtime or reducing memory footprint is a challenge), they’ll probably warm to it.
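
If you’re wondering what honoring the restriction looks like in practice, one common move is to collapse a sprawling signature into a cohesive parameter object. This is just a sketch, with invented names:

    // Before: a signature that has started to sprawl.
    // public void CreateAccount(string name, string email, string role, bool sendWelcomeEmail)

    // After: one cohesive argument. Adding a fifth piece of account
    // data later means changing this class, not every call site.
    public class AccountRequest
    {
        public string Name { get; set; }
        public string Email { get; set; }
        public string Role { get; set; }
        public bool SendWelcomeEmail { get; set; }
    }

    public class AccountService
    {
        public void CreateAccount(AccountRequest request)
        {
            // Create the account from the request's properties.
        }
    }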


How We Get Coding Standards Wrong

The other day, I sat in on a meeting where a large-ish group was discussing “standards” for their particular area of software development. I have the word standards in quotes because, by design, there wasn’t a clear definition as to what sorts of standards they would be; it was an open-ended exercise. The standard could cover anything from variable casing to development practices and principles to holistic approaches. When asked for my input, I was sort of bemused by the process, and I said that I didn’t really have much in the way of an answer. I declined to elaborate much more on that since I wasn’t interested in derailing the meeting in any way, but it did get me to thinking about why the exercise seemed sort of futile to me.

I generally have a natural leeriness when it comes to coding and development standards and especially activities designed to flesh those out, and in this post I’d like to explore why. It isn’t that I don’t believe standards should exist or that I believe they aren’t important. It’s just that I think we frequently miss the point and create standards out of some sense that it’s The Right Thing, and thus create standards that are pointless or even detrimental.

Standards by Committee Anti-Pattern

One problem with defining standards in a group setting is that any group containing some socially savvy people is going to gravitate toward diplomacy. Contentious and arbitrary subjects (so-called “religious wars”) like camel case versus Pascal case or where the bracket after a function goes will be avoided in favor of things upon which a consensus may be reached. But think about what’s actually happening–everyone’s agreeing that the things that everyone already does should be standardized. This is a fairly vacuous exercise in bureaucracy, useful only in the hypothetical realm where a new person comes on board and happens to disagree with something upon which twenty others agree.

People doing this are solving a problem that doesn’t exist: “how do we make sure everyone does this the same way when everyone’s currently doing it the same way?” It also tends to favor documenting current process rather than thinking critically about ideal process.

Let’s capture all of the stuff that we all do and write it down. Okay, so, coding standards. When working on a .NET project, first drive to the office. Then, have your keycard ready to get in the building. Next, enter the building…

Obviously this is silly, but hopefully the point hits home. The simple fact that you do something or that everyone in the group does something doesn’t mean that it’s worth capturing as trainable knowledge and enforcing on the group. And yet this is a direction I frequently see groups take as they get into a groove of “yes, and” when discussing standards. It can just turn into “let’s make a list of everything we do.”

Pointless Homogeneity

The concept of capturing the intersection of everyone’s approach and coding style dovetails into another problem with groups hashing out standards: a group-think bias. Slightly different from the notion that everything common should be documented, this is the notion that everything should be common. For instance, I once worked in a shop where developers were all mandated to use the same diff tool. I’m not kidding. If anyone bothered with a justification for this, I don’t recall what it was, other than some nod to pointless standards.


You can take this pretty far. Imagine demands that you use the same syntax highlighting colors as your peers or that you keep your file system organized in the same way as everyone else. What does this have to do with the code you’re producing? Who knows…

It might seem like the kind of thing where you should just indulge the harmless control freak driving it or the group that dreams it up as a unit, but this runs the risk of birthing a toxic culture. With everything, however inconsequential, homogenized, there is no room for creative thinkers to innovate with new approaches.

Make-Work Tasks

Another risk you run when coming up with standards is to create so-called standards that amount to codifying and mandating busy-work. I’ve made my evolving opinion of comments in code quite clear on a few occasions, and I consider them to be an excellent example. When you make “comment every method” a standard, you’re standardizing the procedure (mindlessly adding comments) and not the goal (clarity and communication).
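
For what it’s worth, the output of such a mandate tends to look something like this contrived (but representative) example, where the comment restates the signature and communicates nothing the code doesn’t already say:

    using System.Collections.Generic;

    public class Customer { public int Id; public string Name; }

    public class CustomerRepository
    {
        private readonly Dictionary<int, Customer> _customers =
            new Dictionary<int, Customer>();

        /// <summary>
        /// Gets the customer.
        /// </summary>
        /// <param name="id">The id.</param>
        /// <returns>The customer.</returns>
        public Customer GetCustomer(int id)
        {
            return _customers[id];
        }
    }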

There are plenty of other examples one might dream up. The silly mandate of “sort and organize usings” that I blogged about some time back comes to mind. This is another example of standardizing pointless make-work tasks that provide no substantive benefit. In all cases, the problem is that you’re not only asking developers to engage in brainless busy-work–you’re codifying it as an official mandate.

Getting Too Specific

Another source of issues that I’ve seen in the establishment of standards is a tendency to get too specific. “What sort of convention should we use when we declare a generic interface below an enumeration inside of a nested class?” Really? Does that come up often enough that it’s important for everyone to get on the same page about how to approach it?

I recognize the human desire for set closure; we don’t like it when a dresser is missing a drawer or when we haven’t collected the whole set, but sometimes you’ve just got to let it go. We’re not the IRS–it’s going to be alright if there are contingencies that we haven’t covered and oddball loopholes that we haven’t addressed.

Missing the Point

For me, this is the biggest one. Usually standards discussions are about superficial programming concerns rather than substantive ones, and that’s unfortunate. Think of the aforementioned camel versus Pascal case wars, or where to put brackets and which kinds to use. To var or not to var? Should constants be all caps? If an interface is in a forest and doesn’t have an “I” in front of its name, is it still an interface?

I understand the benefit of consistency in naming, casing, and other syntactic considerations. I really do, in spite of my tendency to be dismissive and iconoclastic on this front when discussing them. But, first off, let’s not pretend that there really is a right way with these things–there’s just the way that you’re used to doing them. And, more importantly, let’s not pretend that this is really all that important in the grand scheme of things.

We use consistent casing and naming so that a reader of the code can tell at a glance whether something is a field or a local variable or whether something is a method or a property or a constant. It’s really about promoting readability, which, in turn, is about maximizing maintainability. But you know what’s much harder on maintainability than Jones’s great constant casing blunder of 2010 where he forgot to use ALL CAPS? Writing bad code.

If you’re banging out behemoth methods with control statements eight deep, all of the camel case in the world isn’t going to make your code readable. A standard mandating that all such methods be prepended with “yuck” might help, but the real thing that you need is some standards about writing clean code. Keeping methods and classes small and focused, principles like DRY and SOLID, and other good design principles are much more important standards to which to aspire, but they’re often less concrete and harder to enforce. It’s much easier and more rote for a code reviewer to look for casing issues or missing comments than to analyze code for good software practice and object-oriented design. The latter is often less cut-and-dried and more a matter of degrees, so it’s frequently glossed over in favor of more tangible, simple things. Problem is, those tangible, simple things really aren’t all that important to the health of your applications and projects over the long haul.
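
To illustrate the difference in kind with an invented example, here’s the sort of thing a substantive standard targets and a casing standard can’t see:

    using System.Collections.Generic;

    public class Cart
    {
        public List<string> Items = new List<string>();
        public bool IsLocked;
    }

    public class CheckoutRules
    {
        // Before: control statements marching toward the right margin.
        public bool CanCheckoutNested(Cart cart)
        {
            if (cart != null)
            {
                if (cart.Items.Count > 0)
                {
                    if (!cart.IsLocked)
                    {
                        return true;
                    }
                }
            }
            return false;
        }

        // After: guard clauses keep the method small and flat, which is
        // what "keep methods small and focused" actually buys you.
        public bool CanCheckout(Cart cart)
        {
            if (cart == null) return false;
            if (cart.Items.Count == 0) return false;
            return !cart.IsLocked;
        }
    }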

It’s All Just Premature Optimization

The common thread here is that all of these standards anti-patterns result from solving non-existent problems. If you have some collection of half-baked standards at your company that go on for some pages and then say, “after that, follow the Microsoft standards,” imagine how they came about. I bet a few of the group’s original engineers or most senior people had a conversation that went something like, “We should probably have some standards.” “Yeah, I guess… but why now?” “I dunno… I think it’s, like, what you’re supposed to do.”

I suspect that if you did a survey, a lot more standards documents have started with conversations like that than with conversations about hours lost to maintenance and difficulty reading code. They are born out of cargo-cult practice rather than a necessity to solve some problem. Philosophically, they start as solutions in search of a problem rather than solutions to actual problems.

The situation is complicated by the fact that adoption of certain standards may have solved real problems in the past for developers on the team, and they’re simply doing the smart thing and carrying their knowledge forward. The trouble is that not all projects face the same problems. When discussing approaches, start with abstract and general abiding principles like SOLID and DRY and take it from there. If half of your team uses camel case and the other half Pascal and it’s causing communication and maintenance difficulties, flip a coin and set a standard. Repeat as necessary with other standards to keep the project moving and humming. But don’t make them up just for the sake of doing so. You wouldn’t start writing random code that may never solve any actual problem, so why create a standard that way?