Stories about Software


Visualizing Your (Real) Architecture

Editorial note: I originally wrote this post for the NDepend blog.  Go check out the original at the NDepend site.  Take a look around at the other posts while you’re there as well — there’s a lot of good stuff to be had.

Diagrams of software architecture have a certain aesthetic appeal to them.  They usually consist of grayscale or muted pastel colors and nice, soft shapes with rounded edges.  The architects that make them assemble them into pleasing patterns and flowing structures such that they often resemble 7-layer cakes, pinwheels, or slalom courses.  With circles and ovals arranged neatly inside of rectangles connected by arrows, there is a certain, orderly beauty.  If you’re lucky, there will even be some fluffy clouds.

If you want to see an example, here’s one that has it all.  It’s even got a service bus.  Clearly, the battle for quality was over long before the first shots were ever fired.  After the initial conception of this thing, the mundane details of bringing the architecture to life would likely have been a simple matter of digital paint-by-numbers.  Implement an interface here, inherit from a framework class there, and presto!  Instant operational beauty that functions as smoothly on servers as it does in the executive readout PowerPoint.

At least, that’s the plan.



What I Learned from Learning about SpecFlow

In my ChessTDD series, I was confronted with the need to create some actual acceptance tests.  Historically, I’d generally done this by writing something like a console application that would exercise the system under test.  But I figured this series was about readers/viewers and me learning alongside one another on a TDD journey through a complicated domain, so why not add one more piece of learning to the mix?  I started watching a Pluralsight course about SpecFlow and flubbing my way through it in episodes of my series.

But as it turns out, I picked up SpecFlow quickly.  Like, really quickly.  As much as I’d like to think that this is because I’m some kind of genius, that’s not the explanation by a long shot.  What’s really going on is a lot more in line with the “Talent is Overrated” philosophy that the deck was stacked in my favor via tons and tons of deliberate practice.

SpecFlow is somewhat intuitive, but not remarkably so.  You create text files that follow a certain format, and they’re easy to read.  Then, through behind-the-scenes magic, they get tied to actual code files: not the generated, hard-to-read “code behind” for the feature file, but binding files that you tie to the features yourself in one of a few different ways.  SpecFlow in general relies a good bit on this magic, and anytime there’s magic involved, relatively inexperienced developers can easily be thrown for loops.  To remind myself of this fact, all I need to do is go back in time eight years or so, to when I was struggling to wrap my head around how Spring and an XML file in the Java world made it so that I never invoked constructors anywhere.  IoC containers were utter black magic to me; how does this thing get instantiated, anyway?!
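For the curious, here’s roughly what that tie looks like, as a minimal sketch.  The feature text and all of the names are hypothetical illustrations, not lifted from the ChessTDD code base.  SpecFlow matches each line of the feature file against the regular expressions in the [Given], [When], and [Then] attributes and invokes the matching method:

```csharp
// Contents of a hypothetical PawnMovement.feature file (Gherkin):
//
//   Feature: Pawn Movement
//     Scenario: Pawn advances one square
//       Given a pawn on a2
//       When I move it to a3
//       Then the move is legal

using TechTalk.SpecFlow;

// This is the hand-written binding class, not the generated code-behind.
// SpecFlow finds it because of the [Binding] attribute and ties each
// Gherkin step to a method via the regex in the step attributes.
[Binding]
public class PawnMovementSteps
{
    private string _from;
    private string _to;

    [Given(@"a pawn on (.*)")]
    public void GivenAPawnOn(string square)
    {
        _from = square;
    }

    [When(@"I move it to (.*)")]
    public void WhenIMoveItTo(string square)
    {
        _to = square;
    }

    [Then(@"the move is legal")]
    public void ThenTheMoveIsLegal()
    {
        // An assertion against the real chess engine would go here.
    }
}
```

The regex capture groups become the method’s parameters, which is the part that feels most magical until you’ve seen it once.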




Professional Code

About a year ago, I read this post in my feed reader and created a draft with a link to it and a little note to myself that said, “interesting subject.” Over the past weekend, I was going through old drafts that I’d never gotten around to finishing and looking to remedy the situation when I came across this one and decided to address it.

To be perfectly honest, I had no idea what I was going to write about a year ago. I can’t really even speculate. But I can talk a bit about what I think of now as professional code. Like Ayende and Trystan, I don’t think it’s a matter of following certain specific and abiding principles like SOLID as much as it is something else. They talk about professional code in terms of how quickly the code can be understood by maintainers since a professional should be able to understand what’s going on with the code and respond to the need to change. I like this assessment but generally categorize professionalism in code slightly differently. I think of it as the degree to which things that are rational for users to want or expect can be done easily.

To illustrate, I’ll start with a counter-example, lifted from my past and obfuscated a bit. A handful of people had written an application that centered around modifications to an XML file. The XML file and the business rules governing its contents were fairly complex, so it wasn’t a trivial application. The authors of this app had opted to prevent concurrent edits and race conditions by implementing an abstraction wherein the file was represented by a singleton class. Predictably, the design heavily depended on XmlFile.Instance.CallSomeMethod() style invocations.
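The design might have looked something like this.  This is a hypothetical reconstruction with invented names, not code from the actual application:

```csharp
using System.Xml.Linq;

// Hypothetical reconstruction of the design described above: the
// application's one XML file, modeled as a process-wide singleton.
public sealed class XmlFile
{
    private static readonly XmlFile _instance = new XmlFile();
    private readonly object _lock = new object();
    private XDocument _document;

    // Every call site in the application reads XmlFile.Instance.Something().
    public static XmlFile Instance
    {
        get { return _instance; }
    }

    // Private constructor: nothing outside this class can ever create a
    // second XmlFile, so "two versions of the file at once" is, by
    // construction, impossible to represent.
    private XmlFile() { }

    public void Load(string path)
    {
        lock (_lock) { _document = XDocument.Load(path); }
    }

    public void CallSomeMethod()
    {
        lock (_lock) { /* mutate _document under the lock */ }
    }
}
```

The singleton does solve the concurrency problem it was aimed at, but the assumption that there is exactly one file hardens into a load-bearing wall.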

One day, someone in the company expressed that it’d be a nice value-add to allow this application to show differences between incarnations of this XML file — a diff of changes, if you will.  When this idea was presented to the lead/architect of this code base, he scoffed and actually became sort of angry.  Evidently, this was a crazy request.  Why would anyone ever want to do that?  Inconceivable!  And naturally, this was completely infeasible without a rewrite of the application, and good luck getting that through.

If you’re looking for a nice ending to this story, you’re barking up the wrong tree.  The person asking for this was humbled, when it should have been the person with the inflexible design.  As a neutral observer, I was amazed at this exchange, but then again, I knew what the code looked like.  The requester went away feeling dumb because the scoffer had a lot of organizational clout, so it was assumed that scoffing was appropriate.  But I knew better.

What had really happened was that a questionable design decision (representing an XML file as a singleton instance) became calcified as a cornerstone assumption of the application. Then along came a user with a perfectly reasonable request, and the request was rebuffed because the system, as designed, simply couldn’t handle it. I think of this as equivalent to you calling up the contractor that built your house and asking him if he’d be able to paint your living room, and having him respond, “not the way I built your house.”

And that, to me, is unprofessional code.  I don’t mean “unprofessional” in the sense you often hear it when people talk about childish or inappropriate behavior; I mean that it actually seems like amateur hour.  The more frequently you tell your users that things that seem easy are actually really difficult, the less professional your code is going to seem.  The reasoning is the same as with the example of a contractor who somehow built your house so that the walls couldn’t be painted.  It represents a failure to understand and anticipate the way the systems you design tend to evolve and change in the wild, which is indicative of a lack of relevant professional experience.  Would a seasoned, professional contractor fail to realize that most people want to paint the rooms in their houses sooner or later?  Would a seasoned, professional software developer fail to realize that someone might want multiple instances of a file type?

Don’t get me wrong. I’m not saying that you’re a hack if there’s something that a user dreams up and thinks will be easy that you can’t do. There are certainly cases where something that seems easy won’t be easy, and it doesn’t mean that your design is bad or unprofessional. I’m talking about what I perceive to be a general, overarching trend. If changes to the software seem like they should be easy, then they probably should be easy. If you’ve added 20 different customer types to your system, it’d be weird if adding a 21st was extremely hard. If you currently support storing data in a database or to a file, it’d be weird if there was a particular record type that you couldn’t put in a file. If you have some concept of security and roles in your system, it’d be weird if adding a user required a re-deployment of your software.

According to the Clean Code videos by Bob Martin, a defining characteristic of good architecture is that it allows decisions to be deferred as long as possible. If the architecture is well designed, for instance, you should be able to write a lot of the code without knowing if it’s going to be a web app or desktop app or without knowing whether you’d use MySQL or PostgreSQL or MongoDB. I’d carry this a bit further and say that being able to anticipate what users might want and what they might change their minds about and then designing accordingly is the calling card of a writer of professional code.
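As a sketch of what that deferral can look like in practice (the interface and every name here are my own invention, not anything from the videos), the application depends on an abstraction, and the MySQL/PostgreSQL/MongoDB decision hides behind it:

```csharp
using System.Collections.Generic;

// Illustrative sketch: application code programs against this interface,
// so the choice of database (or flat file) can be deferred until late.
public interface ICustomerStore
{
    void Save(Customer customer);
    Customer FindById(int id);
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// An in-memory implementation is enough to write and test most of the
// application before any persistence decision is actually made.
public class InMemoryCustomerStore : ICustomerStore
{
    private readonly Dictionary<int, Customer> _customers =
        new Dictionary<int, Customer>();

    public void Save(Customer customer)
    {
        _customers[customer.Id] = customer;
    }

    public Customer FindById(int id)
    {
        Customer found;
        return _customers.TryGetValue(id, out found) ? found : null;
    }
}
```

Swapping in a SQL- or document-backed implementation later touches one class, not the whole application.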


Good Magic and Bad Magic

Not too long ago, I was working with a web development framework that I had inherited on a project, and I was struggling mightily to get it to work.  The functionality was not discoverable at all, and it was provided almost exclusively via inheritance rather than composition.  Runtime debugging was similarly fruitless, as a great deal of functionality was obfuscated via heavy use of reflection and a lot of “squishing” of the application to fit into a forms-over-data paradigm (binding the GUI right to data sources, for instance).  Generally, you would find some kind of prototype class/form to look at, try to adapt it to what you were doing, and struggle for a few hours before reverse engineering the fact that you weren’t setting some random property defined in an ancestor class properly.  Until you set this string property to “asdffdsa,” absolutely nothing would work.  When you finally figured out the answer, the reaction wasn’t one of triumph but indignation.  “Really?!?  That’s what just ate the last two hours of my life?!?”

I remember a different sort of experience when I started working with Web API.  With that technology, I frequently found myself thinking things like “this seems like it should work,” and then, lo and behold, it did.  In other words, I’d write a bit of code or give something a name that I figured would make sense in context, and things just kind of magically worked.  It was a pretty heady feeling, and comparing these two experiences is a study in contrast.

One might say that this is a matter of convention versus configuration.  After all, having to set some weird, non-discoverable string property is really configuration, and a lot of the newer web frameworks, Web API included, rely heavily on convention.  But I think it goes beyond that and into concepts that I’ll call “good and bad magic.”  And the reason I say it’s not the same is that one could pretty easily establish non-intuitive conventions and have extremely clear, logical configurations.

When I talk about “magic,” I’m talking about things that happen behind the scenes in a code base or application.  This is “magic” in the sense that you can’t spell “automagically” without “magic.”  In an MVC or Web API application, the routing conventions and the ways that views and controllers are selected are magic.  You create FooController and FooView in the right places, and suddenly, when you navigate to app/Foo, things just work.  If you want to customize and change things, you can, but it’s not a battle.  By default, it does what it seems like it ought to do.  The dark side of this is the application I described in the first paragraph, the one in which all of the other classes were working because of some obscure setting of a cryptically named property defined in a base class.  When you define your own class, everything blows up because you’re not setting this property.  It seems like a class should just work, but due to some hidden hocus-pocus, it actually doesn’t.
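As a minimal sketch of the good kind of magic (this is stock ASP.NET Web API convention, though the controller itself is a made-up example):

```csharp
using System.Web.Http;

// With Web API's default conventions, no explicit wiring is needed:
// a GET request to /api/foo is routed to this class because its name
// ends in "Controller" and starts with "Foo," and to this method
// because its name starts with "Get."
public class FooController : ApiController
{
    public string Get()
    {
        return "It just works.";
    }
}
```

The convention is guessable from having seen any other controller in the code base, which is exactly what separates it from the “asdffdsa” property.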

The reason I mention all this is to offer some advice based on my own experience. When you’re writing code for other developers (and you almost always are because, sooner or later, someone besides you will be maintaining your code or invoking it), think about whether your code hides some magic from others. This will most likely be the case if you’re writing framework or utility code, but it can always apply. If your code is, in fact, doing things that will seem magical to others, ask yourself if it’s good magic or bad magic. A good way to answer this question for yourself is to ask yourself how likely you think it will be that you’ll need to explain what’s going on to people that use your code. If you find yourself thinking, “oh, yeah, they’ll need some help — maybe even an instruction manual,” you’re forcing bad magic on them. If you find yourself thinking, “if they do industry standard things, they’ll figure it out,” you might be in good shape.

I say you might be in good shape because it’s possible you think others will understand it, but they won’t. This comes with practice, and good magic is hard. Bad magic is depressingly easy. So if you’re going to do some magic for your collaborators, make sure it’s good magic because no magic at all is better than bad magic.


Seeing the Value in Absolutes

The other day, I told a developer on my team that I wouldn’t write methods with more than three parameters.  I said this in a context where many people would say, “don’t write code with more than three parameters in a method”: I am the project architect, and coding decisions are mine to make.  However, I feel that the way you phrase things has a powerful impact on people, and I believe code reviews that feature orders to change items in the code are creativity-killing and soul-sucking.  So, as I’ve explained to people on any number of occasions, my feedback consists neither of statements like “that’s wrong” nor statements like “take that out.”  I specifically and always say, “that’s not what I would do.”  I’ve found that people listen to this the overwhelming majority of the time and, when they don’t, they often have a good reason I hadn’t considered.  No barking of orders necessary.

But back to what I said a few days ago. I basically stated the opinion that methods should never have more than three parameters. And right after I had stated this, I was reminded of the way I’ve seen countless conversations go in person, on help sites like Stack Overflow, and in blog comments. Does this look familiar?

John: You should never have more than three parameters in a method call.
Jane: Blanket statements like that tend to be problematic.  More than three method parameters is really, technically, more of a “code smell” than necessarily a problem.  It’s often a problem, but it might not be.
John: I think it’s necessarily a problem. I can’t think of a situation where that’s desirable.
Jane: How about when someone is holding a gun to your head and telling you to write a method that takes four parameters?
John: (Rolls his eyes)
Jane: Look, there’s probably a better example. All I’m saying is you should never use absolutes, because you never know.
John: “You should never use absolutes” is totally an absolute! You’re a hypocrite!
Both: (Devolves into pointless bickering)

A lot of times during debates, particularly when you have smart and/or exacting participants, the conversation is derailed by a sort of “gotcha” game of one-upmanship.  It’s as though they are at an impasse as to the crux of the matter, so the two begin sniping at one another about tangentially related or totally unrelated minutiae until someone makes a slip, and this somehow settles the debate.  Of course, it’s an exercise in futility because both sides think their opponent is the first to slip up.  Jane thinks she’s won this argument because John used an absolute qualifier and she pointed out some (incredibly preposterous and contrived) counter-example, and John thinks he won with his ad hominem right before the end about Jane’s hypocrisy.

In this debate, they both lose, in my opinion. I agree with John’s premise but not his justification, and the difference matters. And Jane’s semantic nitpicking doesn’t get us to the right justification (counter or pro), either. Prescriptive matters of canon when it comes to programming are troubling for the same reason that absolutes are often troubling in our day-to-day lives. Even the most clear-cut seeming things, like “it’s morally reprehensible to kill people,” wind up having many loopholes in their application (“it’s morally reprehensible to kill people — unless, of course, it’s war, self-defense, certain kinds of revenge for really bad things, accidental, state-sanctioned execution, etc., etc.”). So for non-important stuff like the number of parameters to a method, aren’t we kind of hosed and thus stuck in a relativistic quagmire?

I’d argue not, and furthermore, I’d argue that the fact of the rules is more important than the rules themselves.  It’s more important to have a restriction like “don’t have more than three parameters to a method” than it is to have that specific restriction.  If it were “don’t have more than two method parameters” or “don’t have more than four method parameters,” we’d still be sitting pretty.  Why, you ask?  Well, a man named Barry Schwartz captured the reason in the title of his book, “The Paradox of Choice: Why More Is Less.”  Restrictions limit choice, which is merciful.

Developers are smart, and they want to solve problems — often hard problems. But, really, they want to solve directed problems efficiently. To understand what I mean, ask yourself which of these propositions is more appealing to you: (1) make a website that does anything in any programming language with any framework or (2) use F# to parse a large text file and have the running process use no more than 1 gig of memory. The first proposition makes your head hurt while the second gets your mental juices flowing as you decide whether to try to solve the problem algorithmically or to cheat and write interim results to disk.

Well, the same thing happens with a lot of the “best practice” rules that surround us in software development. Don’t make your classes too big. Don’t make your methods too big. Don’t have too many parameters. Don’t repeat your code. While they can seem like (and be, if you don’t understand the purpose behind them) cargo-cult mandates if you simply focus on the matter of relativism vs absolutes, they’re really about removing (generally bad) options so that you can be creative within the context remaining, as well as productive and happy. Developers who practice DRY and who write small classes with small methods and small method signatures don’t have to spend time thinking “how many parameters should this method have” or “is this class getting too long?” Maybe this sounds restrictive or draconian to you, but think of how many options have been removed from you by others: “does the code have to compile,” or “is the code allowed to wipe out our production data?” If you’re writing code for any sort of business purposes, the number of things you can’t do dwarfs the number of things you can.
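To make the parameter-count rule concrete, here’s one common way of complying with it: the parameter object refactoring.  Everything here (the names, the report domain) is invented for illustration:

```csharp
using System;

// Before (hypothetical): a signature creeping past the three-parameter rule.
// public string GenerateReport(DateTime start, DateTime end,
//                              string format, bool includeDetails)

// After: the related parameters cohere into a type of their own.
public class ReportRequest
{
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
    public string Format { get; set; }
    public bool IncludeDetails { get; set; }
}

public class ReportGenerator
{
    public string GenerateReport(ReportRequest request)
    {
        // Real report logic would go here; this just shows the shape.
        return string.Format("{0} report from {1:d} to {2:d}",
            request.Format, request.Start, request.End);
    }
}
```

A nice side effect is that the parameter object often turns out to be a missing domain concept hiding in plain sight, which is exactly the kind of directed problem-solving the restriction provokes.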

Of course, just having rules for the sake of rules is the epitome of dumb cargo cult activity. The restrictions have to be ones that contribute overall to a better code base. And while there may be some debate about this, I doubt that anyone would really argue with statements like “favor small methods over large ones” and “favor simple signatures over complex ones.” Architects (or self-organizing teams) need to identify general goals like these and turn them into liberating restrictions that remove paralysis by analysis while keeping the code base clean. I’ve been of the opinion for a while now that one of the core goals of an architect should be providing a framework that prevents ‘wrong’ decisions so that the developers can focus on being creative and solving problems rather than avoiding pitfalls. I often see this described as “making sure people fall into the pit of success.”


Going back to the “maximum of three parameters” rule, it’s important to realize that the question isn’t “is that right 99% of the time or 100% of the time?”  While Jane and John argue over that one percent, developers on their team are establishing patterns and designs predicated upon methods with 20 parameters.  Who cares if there’s some API somewhere that really, truly, honestly makes it better to use four parameters in that one specific case?  I mean, great: you proved that on a long enough timeline, weird aberrations happen.  But you’re missing out on the productivity-harnessing power of imposing good restrictions.  The developers in the group might agree, or they might be skeptical.  But if they care enough to be skeptical, it probably means that they care about their craft and enjoy a challenge.  So when you present it to them as a challenge (in the same way speeding up runtime or reducing memory footprint is a challenge), they’ll probably warm to it.