DaedTech

Stories about Software


Introducing “You Asked For It”

Believe it or not, it turns out that I actually get a pretty decent (and growing) number of emails asking for advice in various forms or making requests for posts about various subjects. Typically, I answer and/or correspond as time allows and am fortunate to have a number of interesting conversations in this fashion. It’s a cool way to meet fellow techies, and it’s really quite flattering to be asked for advice.

I was thinking that I might turn these kinds of exchanges into posts when applicable, with posts corresponding to batches of shorter questions or individualized for bigger ones. Historically, I’ve regarded direct correspondence as just that, but I’m going to start seeing if people would be amenable to me posting responses here and then doing so, if they’re good with it. No worries if you’re not — I don’t mind email correspondence, and you’re still obviously free to write. I’ve even updated the contact page to have my direct email address instead of just the general LLC’s contact.

And, if you have post requests, email isn’t the only venue for you. Feel free to ask in the comments, on Twitter, or anywhere else you find me. Feel free to make requests for content specifying that you’d prefer to remain anonymous or that you’d prefer to be identified as the question asker. I’ll be putting these posts into a new category, “You Asked For It,” once I start making them. I don’t anticipate this being overly common, but look for them now and again. And, since I’ll be doing this, please continue to send questions and requests! It’s fun for me to answer them, and if it helps you, all the better.

I’m off to the Pluralsight Author Summit for a long weekend, so enjoy your own weekend, and also, enjoy this illustrator’s choice drawing of a turtle-lady sipping a daiquiri.



Define An API By Consuming It

Lately I’ve been working on a project that uses a layered architecture with DDD principles. There’s a repository layer and, below that, lies the purity of the domain modeling. Above that is an API service layer and then various flavors of presentation, as needed. One of the toughest things to explain to people who are new to some of these concepts is “what should the service layer do?” Go ahead and try it yourself, or even look it up somewhere. Things like data access, domain, repository and presentation layers all have easily explained and understood purposes. The API service layer kind of winds up being the junk drawer of your architecture; “well, it doesn’t really belong in the domain logic and it’s not a presentation concern, so I guess we’ll stick it in the service layer.”

This seems to happen because it’s pretty hard for a lot of people to understand the concept of an API of which their team is both creator and consumer. I addressed this some time back in a post I wrote about building good abstractions and this post is sort of an actual field study instead of a contrived example. How do we know what makes sense to have in the application’s service layer? Well, write all of your presentation layer code assuming that you have magic boxes behind interfaces that cater to your every whim. If you do this, unless you have a pure forms-over-data CRUD app, you’ll find that your presentation layer wants different things than your repositories provide, and this is going to define your service level API.

Take a look at this relatively simple example that I dreamed up, based loosely on a stripped down version of something I’ve been doing:
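It would look something along these lines, as a minimal sketch; Employee, IEmployeeService, and the welcome-email flag are illustrative stand-ins rather than the actual project code.

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int DepartmentId { get; set; }
}

// The service interface that the controller's needs are defining as I go.
public interface IEmployeeService
{
    IEnumerable<Employee> GetAll();
    IEnumerable<Employee> GetByDepartment(int departmentId);
    void Create(Employee employee);
    void SendWelcomeEmail(Employee employee);
}

public class EmployeeController : Controller
{
    private readonly IEmployeeService _service;

    public EmployeeController(IEmployeeService service)
    {
        _service = service;
    }

    // Show the users a list of employees, optionally filtered by department.
    public ViewResult Index(int? departmentId)
    {
        var employees = departmentId.HasValue
            ? _service.GetByDepartment(departmentId.Value)
            : _service.GetAll();
        return View(employees);
    }

    // Add a new employee and, in most cases, send the welcome email
    // (the flag stands in for the GUI setting I've elided).
    [HttpPost]
    public ActionResult Create(Employee employee, bool sendWelcomeEmail = true)
    {
        _service.Create(employee);
        if (sendWelcomeEmail)
            _service.SendWelcomeEmail(employee);
        return RedirectToAction("Index");
    }
}
```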

One thing you’ll notice straightaway is that my controller is pretty stripped down. It really just worries about the arguments to the HTTP methods, a terse summary of what to do, and then what to return or where to redirect. The only things that will make this more complex as time goes on are GUI related things — enriching the model, adding more actions, altering the user’s workflow, etc. This makes unit testing/TDD a breeze and it keeps unpleasantness such as “how to send an email” or “where do we get employees” elsewhere. But, most importantly, it also defines an API.

Right here I’m thinking in the purest terms. I want to show my users a list of employees and I want to let them filter by department. I also want to be able to add a new employee. So, let’s see, I’m going to need some means of getting employees and some means of creating a new one. Oh, and the requirement says I need to send a welcome email in most circumstances (times I wouldn’t based on an elided GUI setting), so I’ll need something that does that too.

Now, you’re probably noticing a serious potential flaw with this approach, which is that having a service that sends emails and fetches customers seems as though it will violate the Single Responsibility Principle. You’re completely right. It will (would). But we’re not done here by any means. We’re not defining what a service will do, but rather what our controller won’t do (as well as what it will do). Once we’re done with the controller, we can move on to figuring out appropriate ways to divvy up the behavior of the service or services that this will become.

Here’s another thing I like about this approach. I’m in the habit of defining a “Types” assembly in which I put the interfaces that the layers use to talk to one another. So, the Presentation layer doesn’t actually know about any concrete implementations in the Service layer because it doesn’t even have a reference to that assembly. I use an IoC container, and I get to do this because of it:
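Something like this, roughly; the dummy data is made up, and the registration line will vary with whichever IoC container you happen to be using.

```csharp
using System.Collections.Generic;
using System.Linq;

// A throwaway, in-memory implementation that lives in the controller's source
// file until the real service exists.
public class DummyEmployeeService : IEmployeeService
{
    private readonly List<Employee> _employees = new List<Employee>
    {
        new Employee { Id = 1, Name = "Alice", DepartmentId = 1 },
        new Employee { Id = 2, Name = "Bob", DepartmentId = 2 }
    };

    public IEnumerable<Employee> GetAll()
    {
        return _employees;
    }

    public IEnumerable<Employee> GetByDepartment(int departmentId)
    {
        return _employees.Where(e => e.DepartmentId == departmentId);
    }

    public void Create(Employee employee)
    {
        _employees.Add(employee);
    }

    public void SendWelcomeEmail(Employee employee)
    {
        // Deliberately a no-op; good enough to exercise the GUI flow.
    }
}

// And, in the composition root, something to the effect of:
// container.Register<IEmployeeService, DummyEmployeeService>();
```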

Right there in the controller’s source file, below it, I stick a dummy implementation of the service and I wire up the IoC to use it. This is really handy because it lets me simulate, in memory, the way the application will behave assuming we later define real, actual service implementations. So I can use TDD to define the behavior of the controllers, but then I can also use this dummy service to define the layout, appearance, and flow of the views using actual, plausible data. And all of this without worrying about data access or anything else for the time being. I’m bifurcating the application and separating (to a large degree) the fulfillment of user stories from the nitty gritty technical details. And, perhaps most importantly, I’m defining what the service API should be.

I like having that dummy class there. It creates a warning for me in CodeRush (multiple classes in the same source file) that nags at me not to forget to delete it when I’m done. I can modify the GUI behavior right from the controller. As I add methods to the service while going along with my TDD for the controller, it’s easy enough to add implementers here as needed to compile. I consider this a win on many fronts.

And when I’m done with the presentation layer and satisfied, I can now look at the single service I’ve defined and say to myself things like “should this be more than one class?” or “is some of this functionality already implemented?” I’ve nailed the presentation layer and I’m ready to do the same with the service layer, knowing exactly what kind of API my ‘client’ wants.


Why Social Situations Exhaust Introverts: A Programmer’s Take

I’m going to apologize in advance if this winds up being a long post, but it’s a topic that requires a great deal of introspection and I find that attempting to explain myself is one of the hardest things to abbreviate. Over the years, I’ve read a bit about the topic of introversion versus extroversion and, being in an industry in which introversion is often assumed, I’ve also seen a number of memes about it. This one is probably my favorite, if for no other reason than seeing the poor introvert hissing like a cat at some invasive extrovert. This comic provides a memorable graphical explanation of what other sources such as Wikipedia explain more dryly: that extroverts draw energy from social interactions and that introverts spend or use up energy during those same interactions.

On the whole, I find this explanation pretty satisfying as it more or less explains my life and experience. I’m the classic example of “not all introverts are shy or socially awkward.” I am competent in social situations and even fine with things like public speaking — it’s just that, after a long evening of spending time with people, I tend to get home and think, “wow, finally…” I’m not a huge fan of the vague and sort of hand-wavy idea of “mental energy” and it seems likely to me that there’s a more concrete physiological explanation involving adrenaline and dopamine or something, but the effect on me, personally, is undeniable.

The thing I’d like to explore is how and why these interactions are taxing to me. Maybe you’ll find that my explanation resonates with you. Maybe I’m just a lone weirdo.

Control and the Unknown

I have a memory that’s simultaneously very specific and very vague. The vague parts are that I was some age or another, probably in junior high, and that I had a crush on a girl, but honestly don’t remember which one. Assuming I’m right about the age, it probably varied weekly. But what I remember with incredible clarity was sitting alone in my bedroom, staring at the phone, and contemplating calling this girl to ask her to go to a movie with me or something. I really wanted to do this. If it had gone well, I would have been in junior high hog-heaven, and if it had gone poorly, I’m sure I would have recovered from the embarrassment in relatively short order, but I just sat there, analyzing, brain churning furiously. I’d pick up the phone and start to dial and then hang up. I’d think. Go through the conversation in my head. Rehearse what I’d say. Anticipate her response. Rehearse my response to what I imagined her response to be. Etc., ad nauseam.

Man, I’m tired just thinking about it, and that’s probably why I remember it. I never called the girl, which is probably why I don’t remember who she was (and I think I might have gone through this exercise with more than one), but young, introverted Erik was exhausted by a social situation that never even actually happened. Imagine how exhausting the phone call would have been had I summoned up the intestinal fortitude to go through with it.

I’ll come back to that in a moment, but first I’d like to talk about how much I dislike conversations about the weather on a variety of levels. When talking about the weather, there are three possible categories of conversation: trite, tactical, and pseudo-religious. The first category is barely worth mentioning in that it is the “hot enough for you?” nervous drivel that serves as an awkward social lubricant in situations where people feel the need to make small talk and no alcohol is present. The second kind of conversation is planning that revolves around the weather, such as “we should maybe reschedule our picnic for tomorrow because it seems like it’s going to rain.” The third category is the kind of long-range prediction about the weather that people tend to offer in knowing tones for the sake of having opinions: “well, after this brutal winter, we’re probably going to be in for a mild summer.”

When it comes to why I dislike weather conversations, it depends on the flavor. Not surprisingly, I find the trite weather observations to be, well, trite — restatements of plainly observable facts aren’t the stuff of scintillating dialog. I find tactical weather discussions annoying because far more often than not they come up in the form of impediments like altered plans, grounded planes, traffic, etc. The pseudo-religious conversations I find bemusing and wholly unrelatable, since weather is simply a chaotic system like a financial market or the movement of all of the fish in the ocean. Trying to predict it without unimaginable leaps in processing power or a wholly new form of mathematics is a waste of time, and claiming to understand what’s coming is most likely the manifestation of a very human desire to make sense of the senseless and to see purpose in all things. This is why I call them “pseudo-religious” — they all assign moral meaning to the whims of chaotic systems, such as suggesting that storms are Divine punishment for our moral degradation or, alternatively, suggesting that the Earth is going to be uninhabitable because of our present eco-sins. But the fact that an ordered universe (or weather system) is more appealing doesn’t magically create purpose to make it somehow predictable and just.

So weather is either obvious and mundane, obvious and important, or unknowable. And, for this reason, as a serial problem solver, obsessive pattern-matcher (more on this in a subsequent post), and introvert, I find the weather completely uninteresting. It’s either a non-problem, a relatively easily solved problem (have your picnic inside if it’s raining), or an unsolvable problem about which speculation is pointless. If I tried to solve the problem of what the weather would be like in a month, I’d become exhausted by my own failure — in much the same way I became exhausted by the problem of trying to figure out how the girl that would have been on the other end of the phone line would react to my interest and invitation to a date. But, unlike the weather, the date situation had a relatively limited set of parameters and outcomes and much more potential benefit, so I at least labored to the point of exhaustion instead of saying, “why bother in the first place?” I had more control over that situation by far than the weather, but my control was still limited.

Programming, Safe Feedback, and Blissful Introversion

I’m at my happiest when I’m in my office succeeding quickly at small tasks. I made a post some time back about how I create a list of small tasks in an Excel sheet and change their background color from yellow to green as I work. I’m at my happiest when doing some TDD and checking things off the list. I write a test, see red, change the code, see green, refactor. I do this a few times, and I turn a spreadsheet cell from yellow to green. I’m moving efficiently through a mountain of work with small, steady, repeatable victories.

I’m in my own world. If I try something that doesn’t work, the test doesn’t go green and I learn from the experience and try other things until it does. If I’m stumped, I hop on Google or Stack Overflow and see if I can find a solution. I experiment. I change the task list. I do a lot of different things where the pattern is “change something, see the results, and proceed accordingly.” My most productive days are large, beautiful crystals made from lattice structures of tiny examples of the scientific method: hypothesis (red test), experiment (change the code), analysis (green/move on or still red/try again).

In my own world, life is extremely predictable and within my control. Things change only when I change them and I know the results quickly and in a safe, consequence-free way. If I was wrong about something, I just hit control-Z and lesson learned with no harm done. There are endless mulligans as I go about my cycle of learning and building. I need not venture forth into the world with my products or conclusions until I know that things are bullet-proof. I can prove that the code works with automated scripts. I can back up my arguments with well-researched support. I find this not to be tiring but to be therapeutic and invigorating. After a day of uninterrupted, productive coding, I’m usually pretty energized and will head to the gym to burn it off.

Social Situations and Exhaustion

I’m less happy during the day when progress isn’t measured easily and the feedback loop is longer or non-existent. If, for instance, I leave my office and sit in several meetings where people offer opinions and try to reach consensus (more on this as well in a subsequent post), I grow tired fairly quickly. Such things are almost never people taking turns presenting evidence and well-crafted arguments, but far more often rapid fire opinions ‘substantiated’ with hearsay and conjecture. I can’t prepare for these conversations because I have no idea what people will dream up to talk about and when volume and charisma count for as much as reasoning and evidence, there’s no predicting what kind of outcome will follow.

And even if it isn’t meetings, people throw weird curveballs at me all day. Someone will come and claim that something is a crisis when it really isn’t, and I have to stop and spend time calming this person down or trying to persuade them to look at the bigger picture. I’ll speak with coworkers that are having a personal issue with one another. I’ll get invited to lunch when I have a lot of work to do, but I don’t want to be rude by saying no. These situations are quasi-chaotic. They aren’t chaotic like the weather or a market, but they’re extremely hard to predict and there’s no good way to back out of a bad choice and try the other branch. If I turn the guys down for lunch and see their faces drop, there’s no taking back that my initial reaction was to reject them, even if I reverse course quickly.

None of this is to say that I don’t like dealing with other people or that I’m some kind of hermit. I like going out to lunch with friends and coworkers. I like shooting the breeze sometimes. I understand that things come up that require my attention. And I’ll even grudgingly admit that every now and then a meeting is mildly productive. But all of these things are tiring. (There are two exceptions that I’ll cover in a subsequent post as well — times where I’m speaking/presenting to an audience and times when I’m mostly just listening to someone offer opinions for long stretches without feedback.) I just want to get back to my office, sit at my desk, and be in a world of controlled experiments, careful reasoning, and strictly knowable and measurable outcomes. After a day without these, I’m usually too tired for the gym.

Maybe others have different reasons for their introversion than I do. But I’m willing to bet that I’m not alone in thinking that it’s a matter of preferring controlled environments and predictable outcomes. And what’s more, I bet that the correlation between introversion and certain personality types or vocations (i.e. programmers, among others) can be partially explained by this “introverts as highly analytical” notion. Food for thought on this Friday, anyway. I have more to say on this subject, but I’ll probably space these posts out a bit, since they’re about as far from the standard technical/workplace fare as I get on this blog.


Cleaning Up Your Build

Today, I’d like to make a post that’s relatively short and to the point. And it’s relatively short and to the point because what I’m going to talk about is so basic and fundamental that it won’t take long to say it. It is absolutely critical that you have nothing standing between the stuff checked into your project’s source control and a completely successful build and deployment of your software.

Here’s the first thing that should absolutely be true. Someone brand new to the team should be able to take a computer with nothing special installed on it, point it at the latest good version of your project in source control, get the latest code, build it in the IDE, and be able to run your software when the build is done. If, when this person runs the software, weird things happen or it crashes, you’ve failed. If this person can’t even successfully build the software, you’ve failed badly. If this person can’t even compile the software, things are really ugly. If this person can’t get the software out of source control, you’re probably using Rational ClearCase, and that poor person coming to the team has no idea what’s coming for the next months and years. Not being able to get things out of source control without jumping through hoops is a total fail.

It is absolutely critical that right away, on the first day, someone can get your software from source control, build it, and run it without issues. If this isn’t possible right now, make a project plan/user story/whatever for making it possible in the future and give this a high priority. Think of yourself as a restaurant that has severe health code violations. Sure, you could ignore them and continue cranking out your signature dish, “Spaghetti with Botulism,” but it’s not advisable. You need to have a clean and sanitary work environment, free from nasty cruft and residue that’s worked its way into being a normal part of your process.

Once you’ve got a “source control to runtime” path that’s pristine, you need to make sure this is also the case for deployment. You should be able to fire up a clean target machine or VM, deploy your deliverables to it in some automated fashion, and have it work. What you shouldn’t need to do is install MS Word on there or remember to copy over those six license files and that .trx thing in that one directory. Oh yeah, and a registry setting or something.

As soon as you’re doing stuff like that, you have a polluted build because you have a point of failure. You have your “automated” process and then you have the thing that you just have to remember to do every time or things go badly. If any of your process is manual, you WILL mess it up and cause problems. We’re humans and it’s inevitable. This is especially true if you aren’t agile and deployments happen only rarely. If you can, eliminate the manual step entirely, but if you can’t, then automate it. Deploying should be dead simple.

And something else to bear in mind is that past sins aren’t forgiven. In other words, if you have a deployment process now that’s simple and one click and works every time with something like XCopy, that doesn’t mean you’re out of the woods. The “on a clean machine” requirement is critical. If you’re XCopying over existing files to deploy, you might have some weird one-off thing that you did to the server 2 years ago and have forgotten all about. You need to make sure you can nuke your whole deployment, redo it, and have it work.

If it sounds as though I’m being a hardliner or extremist, perhaps that’s the case, but I think it’s justifiable on this subject. You can’t negotiate with cargo cult build processes. They have to be eliminated because there is absolutely no upside and absolutely pure downside. Think about your own source control, build and deployment processes and ask yourself if there are things that need to be weeded out. And you know what? Just take a crack at it. I’ve done this sort of thing myself on a lot of different projects and I’ve always found it’s never as hard to fix the problems as you think it’s going to be. Usually it’s just that no one thinks to try.


Interface Segregation Principle: A Practical Example

I’ve had this partially completed post in my drafts folder for a while, and, thanks to a sort of half-hearted New Year’s resolution to either finish or discard really old drafts, I’m going to finish this one. I changed the title and re-did the focus a bit, since this one was a year and a half old, and the code in question here is hazier and hazier in my mind. But it’s a tale of Singletons, needless coupling, poor cohesion, and woe.

I’ve been interviewing some candidates of late and one of the things I typically ask about is familiarity with the SOLID principles. An encouraging number of people say things like, “oh sure, classes should have only one purpose” or “oh, yeah, we use IoC containers!” It seems as though S, O, and D get the most love, while L and I (Liskov Substitution Principle and Interface Segregation Principle, respectively) are rarely mentioned. Today, I’ll talk about I with an example that may hit home with you.

I worked on a project once with a fairly unique ‘architecture.’ It was a GUI-heavy desktop application with a reasonably straightforward set of domain objects that were read from persistence and stored in memory. This was accomplished using what really kind of amounted to an old school, C-style memory map of structs. There was a root object, and everything was hierarchically possessed by this root object. So, let’s say that you were modeling a parking lot full of cars — you’d navigate to the car parts via calls like Root.Cars[12].Engine.Battery or Root.Cars[5].Body.Exterior.Paint. And naturally, Root was implemented as a Singleton so that you could access this memory map from anywhere at any time to read or write. Yep. That was a long project.

This Singleton probably started out modestly enough, but by the time I had worked on the project, its sizable gravitational pull had roped smaller satellites, code-moons and free-falling bits of flotsam into it, creating a runaway juggernaut of anti-pattern. This thing was, if memory serves, pushing 10K lines of code and perhaps even spreading out into some partial classes. I think by the time I moved onto another project, it was showing the first signs of having an event horizon.


There were several different skins on this desktop app, and for some reason, each of them had their own mini-versions of this Singleton (a much more modest 1K to 2K LOC, each), I guess to contain the skin-specific sprawl that would have created a circular reference situation with the behemoth itself. It was one of these mini-versions that inspired this post, along with a story a friend of mine told me after I’d left the project.

A few of us had, while working in this code base, endeavored to extract at least tiny pockets of code from the massive gravitational field of the Singletons so that we could write some unit tests and avoid the bug whack-a-mole game that ensued whenever you changed something in the global state. I was probably the most successful in fighting this battle, and my friend, having less success after I’d left, complained to me one day over lunch. He said that he’d had a perfectly testable class, but in a code review someone in a position of relative authority had pointed out that something he was doing was already done in one of the satellite Singletons and he should use that code. His request to hide the Singleton behind a wrapper or even an interface implementation was denied. A large portion of his class became thoroughly untestable because, naturally, the first call to his stuff triggered the first lazy call to the satellite, which triggered the first lazy call to the black hole Singleton, which fired up every threading model, logger, aspect, and bit of file I/O in the history of creation, crushing the test runner like an insignificant gnat.

And this, to me, is perhaps the best argument for the Interface Segregation Principle that I’ve ever heard. As a quick recap, the ISP says that “clients should not be forced to depend upon interfaces that they don’t use.” In my friend’s case, depending on the utility method that he was being forced to use, in turn, made him depend on an entire, 1K+ LOC Singleton being initialized and, indirectly, a whole host of other application functionality. This was about the most egregious violation that I’d ever encountered.

What’s the alternative? Well, years later, it’s hard to list specifics, but how about pulling that utility out somewhere? If it relied on no global state, this is a no-brainer — just put it into its own class somewhere. Static, instance — it doesn’t matter — just not in that Singleton. If it relied on global state, then create a small interface with one method, have the satellite Singleton implement it, and hand that interface to the class in question. In either case, my friend’s class depends on a single method for that functionality and not the entire application domain and the code responsible for loading it (as well as loggers and other ancillary nonsense).
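In code, that second option looks roughly like the sketch below. Every name here is hypothetical, since the original code is long gone, but the shape is the point: the consuming class depends on one tiny interface instead of the whole satellite Singleton.

```csharp
// The one piece of behavior the consuming class actually needs.
public interface IIdFormatter
{
    string FormatId(int rawId);
}

// The satellite Singleton implements the narrow interface alongside its
// thousand other lines of code...
public class SkinSatellite : IIdFormatter
{
    private static readonly SkinSatellite _instance = new SkinSatellite();
    public static SkinSatellite Instance { get { return _instance; } }

    private SkinSatellite() { }

    public string FormatId(int rawId)
    {
        return rawId.ToString("D6");
    }
}

// ...while the consuming class sees only the method it uses, which means a
// unit test can hand it a trivial fake instead of waking the black hole.
public class BadgePrinter
{
    private readonly IIdFormatter _formatter;

    public BadgePrinter(IIdFormatter formatter)
    {
        _formatter = formatter;
    }

    public string PrintBadge(int employeeId)
    {
        return "Badge: " + _formatter.FormatId(employeeId);
    }
}
```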

The Interface Segregation Principle asks you to keep your dependencies to a minimum. This will help you unit test, but it will also help your sanity at maintenance time. After all, would you rather debug a class that you knew depended on an external method or two, or one whose behavior could be explained by any one of tens of thousands of lines of code? These things will matter to you or whoever maintains the application. A lot. So think of this example and think of the ISP as you’re writing your code.


Help Yourself To My Handy Extension Methods

There’s nothing like 3 years (or almost 3 years) of elapsed time to change your view on something, such as my take on C# extension methods. At the time I was writing this post, I was surrounded by wanton abuse of the construct, compliments of an Expert Beginner, so my tone there was essentially reminiscent of someone who had just lost a fingertip in a home improvement accident writing a post called “why I don’t like skill saws.” But I’ve since realized that cutting lots of lumber by hand isn’t all it’s cracked up to be. Judicious use of extension methods has definitely aided in the readability of my code.

I’m going to post some examples of ones that I’ve found myself using a lot. I do this almost exclusively in test assemblies, I think because, while code readability is important everywhere, in your unit tests you should literally be explaining to people how your API works using code as prose (my take anyway). So, here are some things I do to further that goal.

ToEnumerable

One of the things that annoyed me for a long time in C# was that initializing lists is clunkier than I’d like. This isn’t a criticism of the language, per se, as I don’t really know how to make list initialization in the language much more concise. But I wanted it to be on a single line and without all of the scoping noise of curly brackets. I’m picky. I don’t like this line of code:
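Roughly this sort of thing, where Foo and the retrieval service interface are stand-in names:

```csharp
var retrievalService = Mock.Create<IFooRetrievalService>();
Mock.Arrange(() => retrievalService.GetAll()).Returns(new List<Foo>() { new Foo() });
```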

I’m arranging this retrieval service (using Telerik’s JustMock) so that its GetAll() method returns a list of Foos with just one Foo. The two different instances of Foo are redundant and I don’t like those curly braces. Like I said, picky. Another issue is that a lot of these methods that I’m testing deal in enumerables rather than more defined collection types. And so I wrote this:
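The implementation here is a sketch from memory, but the idea is just to wrap one or more items in an IEnumerable without the initializer ceremony:

```csharp
using System.Collections.Generic;

public static class ObjectExtensions
{
    // Wrap a single item in an IEnumerable.
    public static IEnumerable<T> ToEnumerable<T>(this T target)
    {
        return new List<T> { target };
    }

    // Overload for wrapping several items at once.
    public static IEnumerable<T> ToEnumerable<T>(this T target, params T[] others)
    {
        var list = new List<T> { target };
        list.AddRange(others);
        return list;
    }
}
```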

And writing this method and its overload changes the code I don’t like to this:
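With the same stand-in names as before:

```csharp
Mock.Arrange(() => retrievalService.GetAll()).Returns(new Foo().ToEnumerable());
```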

One occurrence of Foo, zero curly brackets. Way more readable, for my money, and, while YMMV, I don’t know if it’s going to vary that much. Is it worth it? Absolutely, in my book. I’ve eliminated duplication and made the test more readable.

IntOrDefault

Do you know what I hate in C#? Safe parsing of various value types from strings. It’s often the case that I want something like “pull an int out of here, but if anything goes wrong, just set it to zero.” And then you know the drill. The declaration of an int. The weird out parameter pattern to int.TryParse(). The returning of said int. It’s an ugly three lines of code. So I made this:
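A sketch of it; the real thing is just the TryParse dance hidden behind an extension method:

```csharp
public static class StringExtensions
{
    public static int IntOrDefault(this string target)
    {
        int result;
        int.TryParse(target, out result);
        return result; // stays at default(int), i.e. zero, if the parse fails
    }
}
```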

Now, if I want to take a whack at a parse, but don’t really care if it fails, I have client code that looks like this:
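Something along these lines, with the record lookup as a hypothetical source of the string:

```csharp
var quantity = record["Quantity"].IntOrDefault();
```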

What if that indexed value is empty? Zero. What if it has a bunch of non-numeric characters? Zero. What if it is null? Zero. No worries. If there’s actually an int in that string (or whatever it is), then you get the int. Otherwise, 0 (or, technically, default(int)).

I actually created this for a bunch of other primitive types as well, but just showed one here for example’s sake.

GetModel<T>

This is an MVC-specific one for when I’m trying to unit test. I really don’t like that every time I want to unit test a controller method that returns ViewResult I have to get the model as an object and then cast it to whatever I actually want. This syntax is horribly ugly to me:
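Roughly this, assuming a controller method declared to return ViewResult and a Customer model (both stand-ins):

```csharp
var view = controller.Index();
var customer = (Customer)view.Model;
```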

Now, that may not look like much, but when you start chaining it together and have to add a second set of parentheses like ((Customer)view.Model).SomeCustomerProperty, things get ugly fast. So I did this instead — falling on the ugliness grenade.
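A sketch of the extension method; all it does is hide the cast in one place:

```csharp
using System.Web.Mvc;

public static class ViewResultExtensions
{
    public static T GetModel<T>(this ViewResult view)
    {
        return (T)view.Model;
    }
}
```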

It still fails fast with an invalid cast exception, but you don’t need to look at it, and it explains a lot more clearly what you’re doing:
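With the same stand-in Customer model:

```csharp
var customerName = controller.Index().GetModel<Customer>().Name;
```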

Mocking Function Expression Arguments

This is a little more black belt, but if you’re an architect or senior developer and looking to make unit testing easier on less experienced team members, this may well help you. I have a setup with Entity Framework hidden behind a repository layer, and mocking the repositories gets a little… brutal… for people who haven’t been dealing with lambdas for a long time:
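This is the sort of line I mean; ICustomerRepository and its Get method are stand-ins for my repository layer, and the matcher syntax is JustMock’s:

```csharp
var repository = Mock.Create<ICustomerRepository>();
Mock.Arrange(() => repository.Get(Arg.IsAny<Expression<Func<Customer, bool>>>()))
    .Returns(new List<Customer>());
```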

“Don’t worry, that just means you can pass it any Expression of Func of Customer to bool!” Now, you’re thinking that, and I’m thinking that, but a lot of people would be thinking, “Take your unit testing and shove it — I don’t know what that is and I’m never going to know.” Wouldn’t it be easier to do this:
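Perhaps something like this, where ArrangeGetAnyCustomers is a name I’m inventing purely for illustration:

```csharp
repository.ArrangeGetAnyCustomers(new List<Customer>());
```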

Intent is still clear — just without all of the noise. Well, you can with this extension method:
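Here’s a sketch of it; the repository interface, its Get signature, and the method name are all assumptions standing in for the real code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;
using Telerik.JustMock;

public static class RepositoryArrangeExtensions
{
    // Hide the noisy argument matcher behind one well-named method.
    public static void ArrangeGetAnyCustomers(
        this ICustomerRepository repository, IEnumerable<Customer> customersToReturn)
    {
        Mock.Arrange(() => repository.Get(Arg.IsAny<Expression<Func<Customer, bool>>>()))
            .Returns(customersToReturn);
    }
}
```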

Now, I’ll grant you that this is pretty specific. It’s specific to JustMock and to my implementation of repository methods, but the idea remains. If you’re dealing with Expressions like this, don’t make people trying to write tests type out those nasty argument matcher expressions over and over again. They’re extremely syntactically noisy and add nothing for clarity of purpose.

Edit: Thanks, Daniel and Timothy for the feedback about Chrome. I’ve had intermittent issues with the syntax highlighter plugin over the last few months in Chrome, and the problem stops whenever I change a setting in the syntax highlighter, making me think it’s fixed, but then it comes back. Good news is, seeing it impact others lit a fire under me to go find a replacement, which I did: Crayon Syntax Highlighter. I already like it better, so this is a “make lemonade out of lemons situation.” Thanks again for the feedback and please let me know if you have issues with this new plugin.


How Do You Make Software?

There is more than one way to skin a cat, as a morbid expression goes. And, less morbidly, there is more than one way to bring software into existence. Today, I’d like to talk about that in sort of a linguistic, descriptive sense. If that sounds weird, what I mean to say is that I’m not going to talk about how you can use different languages, software development methodologies, etc., but rather I’m going to talk about different attitudes toward the construction of software.

I once worked at a place that made heavy machinery controlled by software. The control software needed to have extremely precise timings and, for a GUI… well, it didn’t need much in that department. Just enough that it could be operated by technicians. As such, the approach to this software wasn’t really one of craftsmanship but one of utilitarian construction since the core business of the company was selling hardware. At this company, we wrangled and hacked software together.

I’ve worked at a place that was process-heavy. It was oh-so process heavy. And, when too much process hamstrung productivity, more process was added to make the unproductive process more productive. Design decisions were usually made by committee. Having a class implement an interface was not a decision to be taken lightly — the council of elders had to discuss it at length. Not a whole lot of software was written, truth be told, with some programmers not assigned any programming work for entire releases. At this company, we tentatively nudged the existing software after much deliberation.

Another experience I had was working in an environment where the goal was to get the software done as quickly as possible, in the interests of besting competition and, in some cases, improving margins. The marching orders were to do the best to make software of decent quality, but to get it done as quickly and efficiently as possible. At this company, we cranked out software.


These days, almost everything I do, software-wise, is iterative and agile, and I’d assign a different verb to describe what happens. We improve software. When you really get down to what a lot of the craftsmanship principles and agile practices are about, they drive you toward steady, incremental progress and improvement — shaping the software into the best solution to your users’ problem. But doesn’t “improve” imply that you’re not actually creating it, but changing it?

Yes, absolutely! If you have a two week sprint, the only time you create software is during sprint one. At the end of sprint one, you deliver software to the user, and from there forward, you’re changing it rather than creating it. And, hopefully, you’re changing it for the better, which is why I pick “improve.”

So what does your organization do when it comes to software? Do you crank it out? Hack it together? Do you happily craft it? Angrily dump it? Grudgingly fork it over? I think this is a good exercise to engage in to tell you a bit about your organization and your perception thereof. And if you don’t like what you do, what can you do to change it? Can you change it? And, if not, and you want to go somewhere else, this exercise may help you in your search for a place you can be happy. Maybe you even ask this in an interview and see what they say. “So how do you make software?” If they don’t look at you funny and think that you’re crazy, you might just have an interesting discussion.


Intro to Unit Testing 10: The Business Value of Unit Tests

Backstory

I worked for a consulting firm for a while. We didn’t make anything particularly exciting — line of business applications and the like was about the extent of it. The billing model for clients was dead simple and resembled the way that lawyers charge; consultants had an hourly rate and we kept diligent track of our time, to the nearest quarter hour. There was a certain feel-good element to this oversimplification of knowledge work in the same way that it’s pleasant to lean back, watch Superman defeat Lex Luthor and delight in a PG world where Good v Evil grudge matches always end with Good coming out the victor.

It’s pleasant to think that writing software has the predictable, low-thought cadence of an activity like chopping wood where each 15 minutes spent produces a fairly constant amount of value to the recipient of the labor. (Cue background song, Lou Reed, “A Perfect Day“) Chop for 15 minutes, collect $3, hand over X chopped logs. Chop for 1 hour, collect $12, hand over 4X chopped logs. Write software for 15 minutes, produce a working, 15 LOC application for $25. Write software for 1 hour, produce a working, 60 LOC application for $100. Oh, such a perfect day.

When I started at the company, I asked some people if they wrote unit tests. The answer was generally ‘no’ and the justification for this was that you’d have to run it by the client, and the client most likely wouldn’t want to pay for you to write unit tests. What they meant by this was that since we billed in quarter-hour increments and supplied invoices with detailed logs of all activity, it’d be sort of hard to sneak in 15 minutes of writing automated test code. Presumably, the fear was that the client would say, “what’s this ‘unit testing’ stuff and why did you do it when you didn’t say anything about it?” I say “presumably” because this wasn’t the reason people didn’t unit test at this company, just like whatever excuse they have at your company isn’t the real reason for not unit testing there. The real reason is usually not knowing how to do it.

Why did I start out with this anecdote and its centerpiece of the quarter hour billing and development cadence? Well, simply because software development is a creative exercise and far too spastic to flow along smoothly in a low viscosity stream of lines of code per minute. You may sit and stare blankly at a computer screen while contemplating design for half an hour, code for 4 minutes, stare blankly again for an hour, code for 20 minutes, and then finish the product. So, 24 minutes — is that a billable half hour or 15 minutes? Closer to 30, I suppose. Do you count the blank staring? On the one hand, this was the real work — the knowledge work — in a way that the typing certainly wasn’t. So should you bill 1.5 hours instead and just count the typing as a brainless exercise? Or should you bill 2 hours because the work is a gestalt? I personally think that the answer is obvious and the gestalt billing model cuts right to the notion that software development is a holistic exercise that involves delivering a working product, and the breakdown may include typing, thinking, white-boarding, searching Stack Overflow, debugging, squinting at a GUI, talking to another developer, going for a 5 minute walk for perspective, running a static analysis tool, tracking down a compiler warning, copying 422 files to a target directory, and yes, my friend, unit testing.

Those are all things that you do as part of writing good software. And, in a consulting paradigm, you wouldn’t cut one of them out and say, “the client wouldn’t want to pay for that,” because the client doesn’t know what it’s talking about when you’re under the software-writing hood — that’s why they’re paying you. They wouldn’t want to pay for “searching Stack Overflow” or “squinting at the GUI” either, but you don’t refuse to do those things when you’re writing software. And so refusing to unit test for this same reason is a cop-out. When a younger developer at that firm asked me why I wrote unit tests and how I accounted for them in my billing, this was essentially the argument I gave — I asked him how he accounted for the time he spent compiling, debugging, and running the application and, bright guy that he is, he understood what I was saying immediately.

Core Business Value

This may seem like a roundabout and long introduction to this chapter, but it really cuts to the core of the business value proposition for unit tests. During development, why do developers compile, run, and debug? Well, they do it to see if their code is doing what they think it should do. Write some code, then make sure it’s doing what you expect. So why write unit tests? To make sure your code is doing what you expect, and to make sure it keeps doing what you expect via automation. The core business value of unit tests is that they serve as progress markers, sign-posts, and guard rails on the road to an application that does what you expect.

Unit tested applications are more predictable and better documented than their non-unit tested counterparts (assuming the same amount of API documentation and commenting is done), and there is an enormous amount of business value in predictability and clarity of intent. With a good unit test suite, you’ll know in minutes if you’ve introduced a regression bug. Without that unit test suite, when will you know? When you run the application GUI? When QA runs it a week later? When the customer runs it a month later? When something randomly goes haywire a year later? Each one of these delays becomes exponentially more expensive.

That’s not the only value-add from a business perspective (and I’ll list some other ones next), but it’s the main one, as far as I’m concerned. It also explains why the notion that you need to carve out some extra time for unit tests and figure out whether the customer wants them or not is preposterous. Do you think the customer is going to get angry if you explain that part of your development process is to execute the code you just wrote to make sure it doesn’t crash? If the answer to that is “no, of course not, that’s ridiculous” then you also have the answer to whether or not a customer would care if you happened to automate that process.

Of course, one thing to bear in mind is that a customer may not want to pay for you to learn on the job to unit test, and that’s a fair point. But if the customer (or your company, internally, if you aren’t a consultant) doesn’t want to foot the bill for this, then you should strongly consider picking it up on your own and then switching customer/company if they don’t buy in to something as fundamental as automating predictability. Unit tests are the software equivalent of accountants practicing double entry bookkeeping, doctors washing their hands, electricians turning power back on before leaving and plumbers doing the same with the water. Imagine if your plumber sweat welded a joint for your new shower, sized it up and then said, “meh, I’m sure it’s fine” and left without ever running the water. That’s what tens of thousands of us do every day when we just assume some piece of code works because it worked a month ago and you don’t remember touching it since then. Ship it? Meh, sure, whatever — it’s probably fine. The business value of unit tests is a stronger assurance that we know what’s going on than “meh, sure, whatever.”

Ancillary Business Value

Here are some other ways in which unit tests add value to the business beyond confirming that the application behaves as expected.

First, unit tests tend to serve nicely as documentation. This may sound strange at first; you’re probably thinking, “how is a bunch of code documentation when we have a whole activity associated with documenting our code?” Well, the fact of the matter is that documentation in the form of writeups, code comments, instruction manuals, etc., tends to get out of date as the product ages. Unit tests, however, are never out of date because if they were, the build would fail (or at least you’d see red when you ran them) and you’d be forced to go back and “fix the documentation.” If you keep your unit test methods clean and give them good names as described in earlier chapters of this series, they’ll also read more like a book than like code, and they’ll document purpose and intended behavior of the system.

Unit tests also guard against regression. When you write the tests, you’re confirming that the software does what you expect it to at that moment. But what about later? Maybe later you forget what you intended in that moment or decide that you intended something different and you change the code. Will it still work? In a lot of legacy code bases, the answer to that question is, “yikes — who knows?” With a thoughtfully unit tested code base, you can rig it so that a test goes red if a design assumption that you made is no longer true. For instance, say you write some method with the intention that it never return null, and say that eventually you and your teammates build on this method and its assumed post-condition, grabbing values returned by the method and never checking for null before dereferencing them. If someone later modifies that method and adds a condition in which it returns null, the only thing standing between them and introducing a regression bug is a unit test that fails if that method returns null.
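As a minimal sketch of that last scenario (MSTest syntax, with a hypothetical EmployeeRepository standing in for the method in question):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EmployeeRepositoryTests
{
    [TestMethod]
    public void GetByDepartment_Returns_Empty_Rather_Than_Null_When_Nothing_Matches()
    {
        var repository = new EmployeeRepository();

        var result = repository.GetByDepartment(999);

        // If someone later adds a code path that returns null, this goes red.
        Assert.IsNotNull(result);
    }
}
```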

The practice of automated unit testing has after the fact benefits such as documentation and guards against regression bugs, but it also helps during the course of development by having a positive impact on your design. I’ve long been a fan of and have previously linked to this excellent talk by Michael Feathers called “The Deep Synergy Between Testability and Good Design.” The general idea is that writing code with the knowledge that you’re going to be writing tests for it (or practicing TDD) leads you to write small, factored classes and methods that are loosely coupled and that this practice, in turn, creates flexible and maintainable code. Or, consider the converse and think of how hard it is to write unit tests for giant, procedural methods and classes. Unit tests make it harder to do things that make your code awful.

Lastly, I’ll throw in a benefit that summarizes my take on this entire subject and really drives things home. It’s a huge bit of editorializing, but I feel somewhat entitled to do so in my own conclusion. I believe that a serious piece of value added by unit testing is that it lends you or your group legitimacy and credibility. In this day and age, the question, “should you unit test your code” is basically considered to be settled case law in the industry. So the question, to a large extent, boils down to whether you write tests or whether you have excuses, legitimate or otherwise (and there are legitimate ones, such as “I don’t know how, yet.”) Don’t be in the camp that has excuses.

Forget justifying what you or your organization has done up to this point, and imagine yourself as a customer of software development. You’ve got a budget, and you’re looking to have some software written that you don’t have time, yourself, to write. All other things being equal, which group do you hire? Do you hire a group that responds to “do you unit test” with “no, we don’t think our customers would want that?” How about a group that responds with “well, there’s this database and this GUI and sometimes there’s hardware, so we really can’t?” Or do you hire a group that responds with, “we sure do, would you like to see some samples?” I bet it’s the last one, if you’re honest with yourself.

So be that last group. Add value to your users and your business. Write good software and consider your design carefully, and, just as importantly, automate the process of ensuring your software does what you think it does. Your credibility and the credibility of your software is at stake.
