DaedTech

Stories about Software


The Developer Feedback Loop

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, check out some of the products that they offer, including GhostDoc and CodeIt.Right.

If you write software, the term “feedback loop” might have made its way into your vocabulary.  It charts a slightly indirect route from its conception into the developer lexicon, though, so let’s start with the term’s origin.  In general systems terms, a feedback loop occurs when a system routes its output back in as one of its inputs.

Kind of vague, huh?  I’ll clarify with an example.  I’m actually writing this post from a hotel room, so I can see the air conditioner from my seat.  Charlotte, North Carolina, my temporary home, boasts some pretty steamy weather this time of year, so I’m giving the machine a workout.  Its LED display reads 70 Fahrenheit and it’s cranking to make that happen.

When the AC unit hits exactly 70 degrees, as measured by its thermostat, it will take a break.  But as soon as the thermostat starts inching toward 71, it will turn itself back on and start working again.  Such is the Sisyphean struggle of climate control.


Important for us here, though, are the mechanics of this system.  The AC unit alters the temperature in the room (its output).  But it also uses the temperature in the room as input (if < 71, do nothing; else, cool the room).  Climate control in buildings operates via feedback loop.
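To make the loop concrete, here is a toy simulation of the arrangement just described, with the 71-degree threshold baked in.  It is mine, not anything lifted from a real thermostat, so take the specifics as illustrative:

// A toy simulation of the feedback loop described above: the system's output
// (the room temperature) is fed back in as the input for the next cycle.
class ThermostatSimulation
{
    static void Main()
    {
        double roomTemperature = 75.0;  // a steamy Charlotte afternoon

        for (int minute = 0; minute < 30; minute++)
        {
            if (roomTemperature >= 71.0)
                roomTemperature -= 0.5;  // the AC runs, altering the room (output)...
            else
                roomTemperature += 0.1;  // ...or idles while the room drifts back up

            // ...and that same temperature becomes the input for the next pass.
            System.Console.WriteLine($"Minute {minute}: {roomTemperature:F1} F");
        }
    }
}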

Appropriating the Term for Software Development

It takes a bit of a cognitive leap to think of your own tradecraft in terms of feedback loops.  Most likely that’s because you yourself become part of the system, and most people find it harder to reason about a system from within it.

In software development, you complete the loop.  You write code, the compiler builds it, the OS runs it, you observe the result, and decide what to do to the code next.  The output of that system becomes the input to drive the next round.

If you have heard the term before, you’ve probably also heard about “tightening the feedback loop.”  Whether or not you have, what people mean by it is reducing the cycle time of the aforementioned system.  People throwing that phrase around are looking to streamline the write->build->run->write again process.

Read More


Are You Changing the Rules of the Road?

Happy Friday, all.  A while back, I announced some changes to the blog, including a partnership with Infragistics, who sponsors me.  Part of my arrangement with them and with a few other outfits (stay tuned for those announcements) is that I now write blog posts for them.  Between writing posts for this blog, writing posts for those blogs, and now writing a book, I’m doing a lot of writing.  So instead of writing Friday’s post late Thursday evening, I’m going to do some work on my book instead and link you to one of my Infragistics posts.

The title is, “Are You Changing the Rules of the Road?”  Please go check it out.  Because they didn’t initially have my headshot and bio, it’s posted under the account “DevToolsGuy,” but it’s clearly me, right down to one of Amanda’s signature drawings there in the post.  I may do this here and there going forward to free up a bit of my time to work on the book.  But wherever the posts reside, they’re still me, and they’re still me writing for the same audience that I always do.

 

 


What Story Does Your Code Tell?

I’ve found that as the timeline of my life becomes longer, my capacity for surprise at my situation diminishes. And so my recent combination of types of work and engagements, rather than being strange in any way to me, is simply ammo for genuineness when I offer up the cliche, “variety is the spice of life.” Of late, I’ve been reviewing a lot of code in a coaching capacity as well as creating and giving workshops on storytelling and creative writing. And given how much practice I’ve had over the last several years at multi-purposing my work, I’m quite vigilant for opportunities to merge storytelling and software advice. This post is one such opportunity, if a small one.

A little under a year ago, I offered up a post in which I suggested some visualization mnemonics to help make important software design principles more memorable. It was a relatively popular post, so I assume that people found it helpful. And the reason, I believe, that people found it helpful is that stories engage your brain far more than simple conveyance of information. When you read a white paper explaining the Law of Demeter, the part of your brain that processes natural language activates and decodes the words. But when I tell you a story about a customer in a convenience store removing his pants to pay for a soda, your brain processes this text as if it were experiencing the event. Stories really engage the brain.

One of the most difficult aspects of writing code is finding ways to build abstraction and make your code readable so that others (or you, months later) can read the code as easily as prose. The idea is that code is read far more often than written or modified, so readability is important. But it isn’t just that the code should be readable — it should be understandable and, in some way, even memorable. Usually, understandability is achieved through simplicity and crisp, clear abstractions. Memorability, if achieved at all, is usually created via the Principle of Least Surprise. It’s a cheat — your code is memorable not because it captivates the reader, but because the reader knows that what she’s used to will probably map onto it. (Of course, I recognize that atrocious code will be memorable in the vivid, conversational sense, but I’m talking about it being memorable in terms of its function and exact behavior).

It’s therefore worth asking what story your code is telling. Look at this code. What story is it telling?

Read More


What I Learned from Learning about SpecFlow

In my ChessTDD series, I was confronted with the need to create some actual acceptance tests.  Historically, I’d generally done this by writing something like a console application that would exercise the system under test.  But I figured this series was about readers/viewers and me learning alongside one another on a TDD journey through a complicated domain, so why not add one more piece of learning to the mix?  I started watching a Pluralsight course about SpecFlow and flubbing my way through it in episodes of my series.

But as it turns out, I picked up SpecFlow quickly.  Like, really quickly.  As much as I’d like to think that this is because I’m some kind of genius, that’s not the explanation by a long shot.  What’s really going on is a lot more in line with the “Talent is Overrated” philosophy: the deck was stacked in my favor by tons and tons of deliberate practice.

SpecFlow is somewhat intuitive, but not remarkably so.  You create these text files, following a certain kind of format, and they’re easy to read.  And then somehow, through behind-the-scenes magic, they get tied to actual code files (not the generated “code behind” for the feature file, which is hard to read).  You tie them to those code files yourself in one of a few different ways.  SpecFlow in general relies a good bit on this magic, and anytime there’s magic involved, relatively inexperienced developers can easily be thrown for a loop.  To remind myself of this fact, all I need to do is go back in time 8 years or so, to when I was struggling to wrap my head around how Spring and an XML file in the Java world made it so that I never invoked constructors anywhere.  IoC containers were utter black magic to me; how does this thing get instantiated, anyway?!
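By way of illustration, here is roughly what that tying-together looks like.  The Gherkin step and the chess-flavored names below are hypothetical examples I made up, not lifted from the series, but the attribute-plus-regex binding is the standard SpecFlow arrangement:

// A feature file (plain text) might contain a step like:
//
//   Given a new chess board
//
// SpecFlow ties that line to code by matching it against the regex in a binding:

using TechTalk.SpecFlow;

[Binding]
public class ChessBoardSteps
{
    private Board _board;  // stand-in for a class from the system under test

    [Given(@"a new chess board")]
    public void GivenANewChessBoard()
    {
        // The "magic" is just pattern matching between the step text and this attribute.
        _board = new Board();
    }
}

public class Board { }  // hypothetical stub so the sketch stands alone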


Read More


Flexibility vs Simplicity? Why Not Both?

Don’t hard-code file paths in your application. If you have some log file that it’s writing to or some XML file that it’s reading, there’s a well-established pattern for how to keep track of the paths of those files: an external configuration scheme. This might be a .config file or a settings.xml file or even a yourapp.ini file if you’re gray enough in the beard. Or, perhaps it’s something more exotic like a database table or web service that stores key-value configuration pairs. Maybe it’s something as simple as command-line parameters that specify the path. Whatever the case may be, everyone knows that you don’t hard-code — you don’t store the file path right in the source code. That’s amateur hour.
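For the sake of concreteness, here’s a minimal sketch of that familiar pattern in .NET.  The “LogFilePath” key and the Logger class are placeholders I’ve invented for illustration, not anything from a real codebase:

// Reads the log file path from App.config instead of baking it into the source.
// Assumes an appSettings entry like: <add key="LogFilePath" value="C:\Logs\app.log" />
using System.Configuration;

public static class Logger
{
    private static readonly string LogFilePath =
        ConfigurationManager.AppSettings["LogFilePath"] ?? @"C:\temp\app.log";

    public static void Log(string message)
    {
        System.IO.File.AppendAllText(LogFilePath, message + System.Environment.NewLine);
    }
}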

You can imagine how this began. Maybe a long time ago someone said, “hey, let’s log critical application events to a file so that we can review and troubleshoot if things go wrong.” They shipped this for some machine running Windows 3.1 or something, and were logging to C:\temp, which was fine unless users didn’t have a C:\temp directory. In that case, it blew up spectacularly and they were flooded with support calls, at which point they could tell their users to create the directory or they could ship a new set of floppy disks with the new source code, amended to log to a directory guaranteed to exist. Or something like that, anyway.

The lesson couldn’t be more obvious. If they had just thought ahead, they would have realized their choice for the path of the log file, which isn’t even critical anyway, was a poor one. It would have been good if they had chosen better, but it would have been almost as good if they’d just made this configurable somehow so that it needn’t be a disaster. They could have made the path configurable or they could have just made it a configurable option to create C:\temp if it didn’t exist. Next time, they’d do better by building flexibility into the application. They’d create a scheme where the application was flexible and thus the cost of not getting configuration settings right up-front was dramatically reduced.

This approach made sense and it became the norm. User settings and preferences would be treated as data, which would make it easy to create a default experience but to allow users to customize it if they were sufficiently sophisticated. And the predecessor to the “Advanced” menu tab was born. But the other thing that was born was subtle complexity, both for the users and for the programmers. Application configurability is second nature to us now (e.g. the .NET .config file), but make no mistake — it is a source of complexity even if you’re completely used to it. Think of paying $300 per month for all of your different telco concerns — the fact that you’ve been doing this for years doesn’t mean you’re not shelling out a ton of money.

What’s even more insidious is how this mentality has crept into application development in other ways. Notice that I called this practice “preferences as data” rather than “future-proofing,” but “future-proofing” is the lesson that many took away. If you design your application to be flexible enough, you can insulate yourself against bad initial guesses about user preferences or usage scenarios and you can ensure that the right set of tweaks, configuration alterations, and hacks will allow users to achieve what they want without you needing to re-deploy.

So, what’s the problem, apart from a huge growth in the number of available settings in a config file? I’d argue that the problem is the subtle one of striving for configurability as a first-class goal. Rather than express this in general, definition-oriented terms, consider an example that may be the logical conclusion of taking this thinking as far as it will go. You have some method that you expose publicly, called ProcessOrder, and it takes a parameter. Contrary to what you might think, the parameter isn’t an order ID and it isn’t an order: it’s an object. Why? Because this API is endlessly flexible. The method signature will suffice even if the entire order processing mechanism changes and even if the structure of the order itself changes. Heck, the signature won’t even need to change if you decide to alter ProcessOrder(object order) to send emails. Just pass in an “Email” object and add a check for typeof(Email) to ProcessOrder. Awesome, right?
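In code, the end state of that thinking looks something like this sketch. The types here are hypothetical stand-ins, but the shape is exactly what the paragraph describes:

// The "endlessly flexible" API: the signature never changes, because every real
// decision gets buried inside an ever-growing pile of type checks.
public class OrderProcessor
{
    public void ProcessOrder(object order)
    {
        if (order.GetType() == typeof(Email))
        {
            SendEmail((Email)order);  // not an order at all, but the signature doesn't care
            return;
        }

        if (order.GetType() == typeof(Order))
        {
            // ... actual order processing ...
        }
    }

    private void SendEmail(Email email) { /* ... */ }
}

public class Order { }   // hypothetical stand-ins for illustration
public class Email { }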


Yeah, ugh. Flexibility run amok. It’d be easy to interpret my point thus far as “you need to find the balance between inflexibility/simplicity on one end and flexibility/complexity on the other.” But that’s a consultant answer at best, and a non-point at worst. It’s no great revelation that these tradeoffs exist or that it’d be ideal if you could understand which trait was more valuable in a given moment.

The interesting thing here is to consider the original problem — the one we’ve long since filed away as settled. We shipped a piece of software with a setting that turned out to be a mistake, so what lesson do we take away from that? The lesson we did take away was that we should make mistakes less costly by adding configurability as an out. But what if we made the mistake less costly by making rollouts of the software trivial and inexpensive? Imagine a hypothetical world where rollout didn’t mean shipping a bunch of shrink-wrapped boxes with floppy disks in them but rather a single mouse click and high confidence that everything would go well. If this were a reality, hard-coding a log file path wouldn’t really be a big deal because if that path turned out to be a problem, you could just alter that source code file, click a button, and correct the mistake. By introducing and adjusting a previously unconsidered variable, you’ve managed to achieve both simplicity and flexibility without having to trade one for the other.

The danger for software decision makers comes from creating designs with the goal of satisfying principles or interim goals rather than the goal of solving immediate business problems. For instance, the problem of hard-coding tends to arise from (generally inexperienced) software developers optimizing for their own understanding and making code as procedurally simple as possible — “hardcoding is good because I can see right where the file is going when I look at my code.” That’s not a reasonable business goal for the software. But the same problem occurs with developers automatically creating a config file for application settings — they’re following the principle of “flexibility” rather than thinking of what might make the most sense for their customers or their situation. And, of course, this also applies to the designer of the aforementioned “ProcessOrder(object)” API. Here the goal is “flexibility” rather than something tangible like “our users have expressed an interest in changing the structure of the Order object and we think this is a good idea and want to support them.”

Getting caught up in making your code conform to principles will not only result in potentially suboptimal design decisions — it will also stop you from considering previously unconsidered variables or situations. If you abide by the principle “hard-coding is bad,” without ever revisiting it, you’re not likely to consider “what if we just made it not matter by making deployments trivial?” There is nothing wrong with principles; they make it easy to communicate concepts and lay the groundwork for making good decisions. But use them as tools to help you achieve your goals and not as your actual goals. Your goals should always be expressible as humans interacting with your software — not characteristics of the software.