DaedTech

Stories about Software

By

Optimizing Proto-Geeks for Business

In a recent post, I talked about the importance of having Proto-Geeks in your software group rather than Loafers and the toxic impact of too many Loafers in the group. If you'll recall, Proto-Geeks are automaters (in other words, developers) who are enthusiastic about new technologies, while Loafers don't much care for them and have a purely utilitarian desire to automate. Loafers are interested in automating only insofar as it maximizes their benefit-to-effort ratio.

One thing that I mentioned only briefly in the last post was the idea that Loafers, in spite of having no love for new technologies or ideas, would have the best business sense. Where Proto-Geeks might go bounding off on unprofitable digressions simply for the sake of solving technological puzzles, Loafers will keep their eyes on the prize. I dismissed this as an issue by saying that Loafers were locally maximizing and self-interested rather than being concerned with the best interests of the group as a whole. In retrospect, I think that this is a bit of an oversimplification of both the motivations of the Loafer and the best way to navigate the balance between interest in new technology and conservative business sense.

Let me be clear: the best approach is to have as few Loafers as possible. So the real question is how to rein in Proto-Geeks and get them to develop business sense. I think the answer is two-pronged: use gamification strategies, and do away with the tired notion that programmers don't "know business" and that you can just stick them in a room, slide pizza under the door, and get software back at some point.


No More Project Managers

Typically, organizations create the role of “Project Manager” and “Software Developer” and, along with these roles, create the false dichotomy that the former “does business” and the latter “does technology.” Or, in the parlance from these posts, the former is a Fanboy and the latter is a Proto-Geek or a Loafer. There’s an interesting relationship between the project manager Fanboys and the Loafers, as compared with the PMs and the Proto-Geeks.

Specifically, PMs tend to like and identify with Loafers and be annoyed by Proto-Geeks. Why would Fanboy identify with Loafer? That doesn't seem to make a lot of sense, given that they're in opposite quadrants. Fanboys root for technology while not being particularly adept or inclined toward automation on their own, while Loafers are leery of technology but reasonably good at automating things. So, what common ground do these archetypes find? Well, they meet in two ways:

  1. Because of the division of labor, PMs have to root for automation without doing it themselves, which means rooting for and depending on automaters.
  2. PMs that got their start in programming used to be or would have been Loafers.

If you consider the canonical question posed to Padawan developers fresh out of school at an annual review, "do you want to be on the technical track to architect or the project management track," you're basically asking "are you a Proto-Geek or a Loafer?" If that seems like a harsh parallel to draw, think about what each response means: "I love technology and want to rise to the top of my craft," versus, "programming is really just a means to a different end for me." These are cut-and-dried responses for their respective quadrants.

So while you have one breed of Loafer that loosely corresponds to Lifer and just wants to bang out code and collect a paycheck, you have another breed that wants to bang out code just long enough to get a promotion and trade in Visual Studio and Eclipse for MS Project and a 120% booked Outlook calendar. Once that happens, it’s easy to trade in the reluctant automater card for enthusiastic tech Fanboy.

But a tension emerges from this dynamic. On the one hand, you have the people developing along the technical track, getting ahead because of their expertise in the craft for which everyone in the group is hired. On the other hand, you have a group that tends to underperform relatively in the same role, looking opportunistically for a less technical field of expertise and authority. The (incredibly cynical) logical conclusion of this dynamic is the “Dilbert Principle,” which holds that the least competent programmers will be promoted to project management where they can’t do as much damage to the software, consumed as they are with Gantt Charts and Six Sigma certifications and whatnot.

However cynical the Dilbert Principle might be and however harsh the “PM as disinterested programmer” characterization might be, there’s no altering the fact that a very real tension is born between the “tech track” and the “project management” track. This is exacerbated by the fact that “project manager” has “manager” in the title whereas “senior software engineer” or “architect” does not. Seem silly? Ask yourself what life might be like if Project Manager was renamed to the (more accurate) title “Status Reporter/Project Planner,” as observed on the Half Sigma blog:

It is often suggested that the most natural next move “up” is into project management. But the first problem with this situation is that project management sucks too. It doesn’t even deserve to have the word “management” in the title, because project management is akin to management as Naugahyde leather is to leather. Project planner and status reporter is the more correct title for this job. Once you take the word “manager” out of title, it loses a lot of its luster, doesn’t it? Everyone wants to be a manager, but few would want to be a project planner and I daresay no one would want to be a status reporter. Status reporting is generally the most hated activity of anyone who endeavors to do real work.

There’s little doubt that project managers are often de facto developer managers–generally not in the org chart, but certainly in terms of who is allowed by the organization to boss whom around. And so there tends to be a very real tension between the top technical talent and the top business-savvy talent. Ace Proto-Geeks resent being ordered around by their perceived inferiors who couldn’t hack it doing “real work” and ace Project Managers hate having to deal with prima donna programmers too stupid in the ways of office politics and business sense to put themselves on track toward real organizational power.

I submit that this is an entirely unnecessary wall to build. You can eliminate it by eliminating the "project manager" role from software groups. To take care of the "project planner" role responsible for perpetually inaccurate Gantt charts and other quixotic artifacts, just go agile. Involve the customer stakeholders directly with the development group in planning and prioritizing features. Really. Your devs are knowledge workers and bright people–you don't need an entire role to run interference between them and customers. As for the "status reporter" role, come on. It's 2013. If you don't have an ALM tool that allows a C-level executive to pull up a snazzy progress-reporting chart automatically, you're doin' it wrong.

So the first step is to stop hiring Loafers with the specific intent that they'll conflict with and rein in the Proto-Geeks. Running a business like the US Congress isn't a good idea. Split the meta-project/housekeeping tasks among the developers, rather than creating a clashing, non-technical position on the team for this purpose and pumping up its title in terms of office politics.

Gamify

So you’ve eliminated the Loafers, but now you need to get your Proto-Geeks to think about the bottom line once in a while. You can round them up and lecture them all you like, and if you take a hardline kind of approach, it might work after a fashion. But I really don’t recommend this because (1) Proto-Geeks are knowledge workers that are very good at gaming metrics, and (2) real talent is likely to leave. You need their buy-in and this requires you to partner with them instead of treating them like assembly line workers.

But the thing that runs a Proto-Geek's motor is automating things using cool tools and technologies. At their core, they are puzzle solvers and tinkerers, and they enjoy collecting a paycheck for figuring out how to get their geeky toys and playthings to solve problems in your organization's problem domain. You need some way to collect and analyze the performance of machines and workers on your manufacturing floor? Dude, they can totally do that with the latest version of ASP.NET MVC, and they can even use this new open-source site profiler to analyze the performance in real time, and… what's that? You only need the site and you don't want them to get carried away with the profiler and whatever else they were about to say?

Well, just present this need to them as another problem within your domain and give them another puzzle to solve. Challenge them to build a site with as few dependencies as possible or tell them that they get to pick two and only two new toys for the next project or tell them that they can have as many toys as they want, but if they don’t make their deadlines, they don’t get any for the next project.

Another great trend from a business geek perspective is the way that cloud solutions like Azure and Salesforce work, where extra CPU cycles, memory usage, and disk space actually cost small amounts of money in real time. Nothing drives a Proto-Geek to do great things like this level of gamification where he knows–just knows–that after an all-nighter and some Mountain Dew, he can shave $40 per day here and $120 per day there.

These examples are really just off the cuff, so take them with a grain of salt, but the underlying message is important. You don’t need to hire people that are skeptical of any release of VBA after MS Access 2000 or who want to coast through programming at the entry level until they have enough seniority to be project managers in order to have team members focused on making the business profitable. You just need to have innovative and appropriate incentives in place. It’s the Proto-Geek’s job to get excited about technologies and problems and using technologies to solve problems. It’s management’s job to give them incentives that make them want to solve the right problems to ensure profitability.

By

I Screwed Up… You’re Welcome!

I’ve had trouble with the hosting of this site lately. As you may have noticed if you visit via the site instead of a feed reader, page load times had slowed to a crawl and sometimes the site was actually down. The hosting company, hostmonster, has provided me with good service over the last couple of years, both in terms of quality of hosting and responsiveness. I expected this to be no different a few days ago.

I called the first time I noticed substantial slowness, and I was told that it was some kind of aberration and everything was fine. A few days later, there was a problem again, and this time the source, I was told, was that there was a node down somewhere between me as a client and my site. Between these two incidents, my site had been slow. I shrugged it off and then noticed the next day that, yet again, my site was pretty much non-functional. Hoping for quicker turn-around and to see if anyone else was having issues, I mentioned this on Twitter and was quickly replied to by the Hostmonster help account with this message: “We see our admins quickly fixed an issue that came up on your server. It is already running now again.”

The only problem was that, in their hurry to congratulate themselves, they’d forgotten to see whether the problem had been fixed. It hadn’t. I called again and was told that everything looked fine to them, at which point I took to Twitter to solicit suggestions for alternative hosting options. A couple of days and a number of phone calls later, Hostmonster did work with me and the issue appears to be resolved. Apparently, a user sharing a physical machine with my site had been infested with some kind of SPAM or something and was causing resource thrashing and throttling of my site. They eventually migrated me to another machine. I haven’t yet decided what to do about hosting. I’m weighing the annoyance of my site being unresponsive for a few days against the hassle of migrating and the fact that there were some very helpful people there that I eventually got to deal with after the first few had told me what a great job they’d done at fixing my imaginary problem.

But pulling back from my experience and looking a little more critically, I think the point at which my reaction as a user went from “annoyed but understanding” to “pissed” holds some lessons for consultants or anyone who deals with end users in a support capacity. I base this on examining the situation and discovering what I found galling.

  1. Don’t publicly declare a user’s problem fixed without confirming the user’s agreement.
  2. Avoid minimizing the user’s problems with words like “little” to describe the scope of the problem or, as in my case, “[quick]” to describe the downtime.
  3. Make every reasonable attempt to reproduce a user’s problem and prove to them that you’re doing so.

I know that there's a balance to be struck between avoiding unfair blame (i.e. marketing and spinning your product/service in the best light) and providing your users with solutions. I also know that there is no shortage of user error to go around. But I think it's really important to understand and empathize with users when they're frustrated. Go on Twitter or some other public forum and read the really negative comments about software or technology services, and, at the core, you'll see extremely frustrated users.

You’ll see users whose websites have been down for two days and who don’t know when they’ll be back up. You’ll see users who have been struggling for hours to figure out how to get “hello world” out of the stupid thing. You’ll see users who just needed to get some work done for a deadline when your horrible update rendered their software non-functional and caused them to catch flak at a meeting. And the list goes on. These are the people you’re dealing with. They’re not malicious or looking to drag your name through the mud; rather, they’re looking to vent. And that’s your opportunity to be a sympathetic ear and to turn an incensed customer into a loyal one.

How’s that, you ask? Well, as much as people hate it when they’re frustrated, they can be mollified by the sense that you see that they’re having a problem, you understand it, you can fix it, and that you’re sorry that it happened to them. “Geez, I’m really sorry that update made you miss your meeting–I’d be mad too.” I bet that’d take the edge off of most people’s anger. Probably more so than “that update ran fine for me here, and check out that awesome new font it gave you–you’re welcome!”

I’m going to make it a point to try to harness the memory of these experiences as a user to make me better and more conscientious as a software provider. I’d rather be remembered for the kind things that users say about me than the flattering things I say about myself when others are within earshot.

(I’m still weighing my options for the hosting situation and running tests to confirm the response times continue to be better than they were, but my thanks to Daniel and Bradyn at Hostmonster who were actually quite helpful following my initial period of frustration.)

By

In with the Proto-Geeks, Out with the Loafers

Technology Progressives vs Conservatives Reconsidered

When it comes to our relationship with technology, most people think in terms of the classic struggle between inventors that capture the imagination and laborers that don’t want to be imagination-captured right out of a steady paycheck. I think this is a rather obvious way to consider technology in our lives, but that there’s another dimension that sort of kicks in as an “aha” moment in much the same way that a line between “left” and “right” politically is trumped by the political compass quadrant system that also addresses libertarianism vs authoritarianism.

Towards that end, I propose quadrants related to technology. On one axis, there is the line from Luddite to Technologist. This is the line along which enjoyment of technology ranges when it comes to aesthetics and emotions. Luddites have a visceral dislike for technology, often because it makes them feel insecure or afraid (e.g. laborers in a factory who are about to be replaced by robots). Technologists have an emotional connection with technological advancement, generally because they find change and new possibilities exciting and intriguing (e.g. people that read about scientific advances, hobbyists that build robots, etc.).

On the other axis, there is a range from Laborer to Automater. Along this line, applied and practical interaction with technology varies. The Laborer will have little opportunity and/or motivation to use technology to automate day-to-day tasks (e.g. an artisan or a member of a think tank). Automaters embrace optimizations brought about through the use of technology and will typically seek to be as efficient as possible. People at this end of the spectrum value technology in a pragmatic sort of way, using it as well as process to optimize for time, cost, etc. People who use mobile devices to track appointments and tasks to the point of hyper-organization are good examples.

These two axes yield the following graph, with labeled quadrants.
[Image: technology quadrant graph]

The most obvious quadrants are the Luddite-Laborer and Technologist-Automater, as those are the roles that we think of as the Curmudgeon and the Proto-Geek, respectively. Curmudgeons hate and regard with suspicion anything that is newfangled and unfamiliar, while their polar opposite, Proto-Geeks, embrace things specifically because they’re new and flashy. All of the elements of the ethos of hard work and humble living (Protestant Work Ethic to the point of absurdity) versus endless, powerful novelty (decadence, convenience, playing God) come into play here, and this represents the linear spectrum that most envision about technology. The less frequently discussed quadrants are the ones that I’m calling, “Fanboy” and, “Loafer.”

“Fanboy” is the set of people that enjoy technology for its aesthetics and have a concept of tech-chic. These are people who will buy something like a cell phone for an accessory more so than for the various practical purposes of automating parts of their lives such as the aforementioned task tracking and organization. Non-technical gamers may also fall in this quadrant. The other less commonly considered quadrant is the Loafer, who prefers to automate things in life but has no love for new technologies and techniques. Contradictory as this may seem, it’s actually relatively common. And worse, it’s toxic to your software development group.

Inside the Mind of the Loafer

To understand the problem, it’s essential to understand what makes the Loafer tick. First of all, the Loafer isn’t a malingerer in the sense that he’ll shirk his responsibilities or do work that is of inferior quality. Rather, the Loafer seeks to optimize efficiency so that the value proposition of doing work is improved. In other words, if you task a Loafer with alphabetizing a list of 50,000 names, he’ll write a program to do it because that gets the job done more quickly. He’ll charge you the same rate regardless of methodology, though, so he’s automating specifically to allow himself more free time (or profit, as it were).

By way of contrast, consider the Proto-Geek’s response to this same task. She’d most likely blurt out, “That’s easy! I could do that in any programming language–you know what? I’ll do it in Ruby because that will give me a chance to play with Ruby. Cost? Oh, geez, I dunno. Maybe ten bucks…? It’s really no big deal.” The Proto-Geek is interested in solving the problem and leveraging cool new techs to do it. Profit/time optimizing is a secondary concern. (It is also interesting to consider the reaction of the other quadrants to this scenario. Curmudgeon would probably extol the virtues of doing it by hand on your mental acuity, and Fanboy would probably do it by hand while tweeting about how there should be something that did this automatically.)

Proto-Geeks view automation as having a high cool factor. This is in contrast to Loafer, who views it as having a high profit factor. But why is this a detriment to your group? After all, it stands to reason that Loafer has the best business sense. It seems as though it’d be important to have people on the team that chose the best tool for the job rather than because they were simply excited to use it. And that is true. The problem is that Automaters further down the chart toward Luddite are operating in a way to maximize their own personal efficiency and profit, rather than the group’s.

Most Loafers in a software group don't particularly like what they do, but they know that if they do more of it faster, they get more money–or more time to, well, Loaf. So once they find a set of tools, languages, frameworks, etc. and become facile with them, they're not only uninterested in new technologies–they're actively resistant. Loafers want nothing more than to continue to solve all problems that come their way as quickly and with as little effort as possible, but with a locally-maximizing, greedy approach; investing a lot of extra effort in the shorter term to learn new ways of doing things more efficiently down the road is not appealing.

If you have one or two Loafers in your group, they can serve a useful purpose as natural conservatives that prevent Proto-Geeks from running buck-wild with new techs and frameworks. But this dynamic must be carefully monitored, like an inoculation, since the proportions can quickly become toxic. Tip the group's dynamic a little too far into the Loafer quadrant and suddenly you have a bad case of "not invented here" and complete resistance to new ideas. There's also frequently a strong correlation between Loafers and Expert Beginnerism; established Loafers often have a track record of some success with their approach and thus a powerful, built-in argument for inertia and the status quo. And finally, Loafers tend to kill enthusiasm for learning pretty quickly since presentation of new technologies is met with sarcasm, skepticism, and derision, making it essentially uncool to learn and embrace new things lest one be accused of naivete or of being some kind of shill.

If you’re in a position of authority within a group and recognize a Loafer dynamic, I’d suggest shaking things up to see if you can re-awaken or inspire a Technologist in some of them. Or perhaps you can infuse the group with some fresh-faced Proto-Geeks. If, on the other hand, you’re a powerless Proto-Geek in a Loafer group, you have a range of options from persuasion to proof-of-concept pitches to management to dusting off your resume. In either case, I think it’s important to recognize your surroundings and to try to seek out the quadrant in which you’re most comfortable.

By

Leading Software Teams of Humans

I’ve been working lately on a project where I handle the build and deployment processes, as well as supply code reviews and sort of serve as the steward of the code base and the person most responsible for it. Not too long ago, a junior developer delivered some changes to the code right before I had to roll out a deployment. I didn’t have time to review the changes or manually regression test with them in place (code behind and GUI, so the unit tests were of no use), so I rolled them back and did the deployment.

That night, I logged back on and reviewed the code that I had deleted prior to the build. I read through it carefully and made notes, and then I went back to the version currently checked in and implemented a simpler version of the fix checked in by the junior developer. That done, I drafted an email explaining why I had reverted the changes. I also included some praise for getting the defect fix right, my own solution (which required less code), and a few pointers as to what made my solution, in my opinion, preferable.

By the time I finished this and saved a draft to send out the next day, it was pretty late at night and I was tired. As I got ready for bed, I contemplated what had motivated me along this course of action when none of the extra work I did was really required. Clearly I thought this course of action was important, but my motivations were mainly subconscious. I wanted to understand what guiding principles of leadership I might be abiding by here and hammer them out a bit. And I think I got it figured out.

  1. Justify and explain your decisions.
  2. Prove the merits of your approach–show, don’t tell.
  3. Let members of your team take risks, using you as a safety net.

Explain Yourself

As I discussed in a previous post, a leader/manager issuing orders "because I say so" is incrementally losing the respect of subordinates/team members. And that's exactly what's going on if you simply make decisions without getting input or explaining the reasoning after the fact–as would have been the case in my situation had I not sent the email. When you unilaterally take action, the message you send isn't clear; it's cryptic and terse. So people are left to interpret it for themselves. And interpret they will.

They’ll wonder if maybe you’re just a jerk or paranoid. They’ll assume that you think they’re not important enough to bother with an explanation. Perhaps they’ll surmise that you think they’re too stupid to understand the explanation. They’ll wonder if they did something wrong or if they could improve somehow.

And really, that last one is the biggest bummer of all. If you’re a team lead and you need to overturn a decision or go in a different direction for a valid reason, that’s a very teachable moment. If you sit with them and help them understand the rationale for your decision, you’re empowering them to mimic your decision-making next time around and improve. If you just undo all their work with no fanfare, they learn nothing except that fate (and the team lead) is cruel. Seriously–that’s the message. And there are few things more demoralizing to an engineer than feeling as though the feedback mechanism is capricious, unpredictable, and arbitrary.

Show, Don’t Tell

"Because I said so" costs you respect in another important way on a technical team, as well. Specifically, it threatens to undermine your techie cred. You may think you're a guru, and you may even be one, but if you don't occasionally offer demonstrations, the team members will start to think that you're just a loud-mouthed armchair quarterback suited only for criticism, yelling, and second-guessing.


If you lead a team and you don't know what you're doing technically, you'll obviously lose them. But if you lead a team by architecting from the clouds with a "do as I say" approach at all times, the outcome will differ only in that you'll lose them more slowly. You should do something to demonstrate your technical chops to them. This will go a long way. Rands does an excellent job of explaining this (and I pay a subtle homage to his ideas with my post's title). You have to build something. You have to show them that you're the leader for a reason other than because you've hung around the company for a while or you have friends in high places. Developers usually crave meritocracy, and flashing some chops is very likely to make them feel better about following your lead. And you get bonus points if the flashing of said chops saves them work/teaches them something/shows them something cool.

You Are the Safety Net

I worked in a group once that was permeated by fear. It was a cultural thing born out of the fact that the loudest group members were the most paranoid and their skepticism flirted dangerously with cynicism. Nobody could be trusted with changing the code base, so code reviews were mandatory, often angry affairs that treated would-be committers as dastardly saboteurs out to commit acts of digital terrorism. The result of this Orwellian vibe was that progress and change happened inordinately slowly. Group members were much more terrified of doing the wrong thing than of missing deadlines or failing to implement features, so progress crawled.

I could also tell that there were intense feelings of stress for some people working this way. The feeling that you have to constantly avoid making mistakes is crippling. It is the province of a culture of learned helplessness–get yelled at enough for making code changes that aren’t good enough and you’ll just resort to asking the yellers to sit with you and show you exactly what to do every time. You’ll take no risks, run no experiments, and come to grips with no failures. You won’t improve–you’ll operate as a grunt and collect a paycheck.

As a team leader, this is a toxic environment if performance is important to you (and it might not be–some people just like being boss and having the luxury of not being judged on lack of performance). A stagnant, timid team isn’t going to have the autonomy necessary to handle the unexpected and to dream up new and innovative ways of doing things. In order for those things to happen, your team has to be comfortable and confident that mistakes and failed experiments will be viewed as valuable lessons learned rather than staging points for demerits, blame, and loss of political capital.


So if someone on your team checks in code that breaks the build or slips into QA with problems or what-have-you, resist at all costs the impulse to get worked up, fire off angry emails, publicly shame them, or anything else like that. If you’re angry or annoyed, take some deep breaths and settle down. Now, go fix what they did. Shield them from consequences of their good-faith actions that might make them gun-shy in the future. Oh, don’t get me wrong–you should certainly sit them down later and explain to them what they did, what the problem was, and what fixing it involved. But whatever you do, don’t do it in such a way that makes them scared of coding, changing things, and tinkering.

Will this approach mean some late nights for you? Yep. Does it suck to have to stay late cleaning up someone else’s mess? Sure it does. But you’re the lead. Good leaders work hard and make the people around them better. They offer enthusiasm and encouragement and a pleasant environment in which people can learn and grow. They don’t punch out at five like clockwork and throw anyone that threatens their departure time under the bus. Leadership isn’t a matter of entitlement–it’s a position of responsibility.

By

Flyweight

Quick Information/Overview

Pattern Type: Structural
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Easy

[Pattern structure diagram courtesy of dofactory]

Up Front Definitions

  1. Client: Code that uses the flyweight implementation
  2. Concrete Flyweight: A class that actually implements the abstract class/interface flyweight.
  3. Extrinsic State: State that is not common to the flyweights and varies per instance.
  5. Flyweight: An interface (or abstract class) through which flyweights receive and act on extrinsic state.
  5. Flyweight Factory: Class used by clients to retrieve instances of the flyweights.
  6. Intrinsic State: State that is common across all flyweights of the same type.

The Problem

Most of these design patterns posts deal with compile-time problems brought about by painting oneself into a corner with design decisions. In other words, if you make certain missteps when designing a system, it becomes rigid and difficult to change in the future. Patterns such as the Factories, Composite, Decorator, etc., all address prevention of such missteps by isolating responsibilities and making change management simple. Today, the problem we’re going to look at is a runtime problem rather than a compile-time problem.

Let’s say that you’re asked to create a model that simulates the distribution of mail in the USA for a given day. You start by establishing some pretty reasonable business objects to represent the domain concepts in question: letters and postcards. Each of these things has certain properties you’re going to need to keep track of, such as dimensions and postage, so that you can report in aggregate on paper usage, revenues, etc. Here is your first crack at business objects:

public abstract class MailPiece
{
    public abstract decimal Postage { get; set; }
    public abstract decimal Width { get; set; }
    public abstract decimal Height { get; set; }
    public abstract decimal Thickness { get; set; }
 
    public int DestinationZip { get; set; }
}
public class Letter : MailPiece 
{
    public override decimal Postage { get; set; }
    public override decimal Width { get; set; }
    public override decimal Height { get; set; }
    public override decimal Thickness { get; set; }
 
    public Letter()
    {
        Postage = 0.46M;
        Width = 11.5M;
        Height = 6.125M;
        Thickness = 0.25M;
    }
}
public class Postcard : MailPiece
{
    public override decimal Postage { get; set; }
    public override decimal Width { get; set; }
    public override decimal Height { get; set; }
    public override decimal Thickness { get; set; }
 
    public Postcard()
    {
        Postage = 0.33M;
        Width = 6.0M;
        Height = 4.25M;
        Thickness = 0.016M;
    }
}

Alright, awesome. Now, according to the USPS, there are 563 million mail pieces processed in the USA each day (as of 2011). So, let’s just use that as our loop counter, and off we’ll go:

static void Main(string[] args)
{
    var pieces = new List<MailPiece>();
    for (long index = 0; index < 563000000; index++)
    {
        pieces.Add(new Letter() { DestinationZip = (int)(index % 100000) });
    }
 
    Console.ReadLine();
}

Satisfied, you fire up the program to make sure everything here is in good working order–although, really, why bother? I mean, what could possibly go wrong? The program chugs along for a bit, and it strikes you that it’s taking a while and your desktop is starting to crawl. You fire up the process manager and see this:

[Image: Danger, Will Robinson]

Yikes! That’s… troubling. You start to wonder if maybe you should intervene and stop the program, but Visual Studio isn’t being very responsive and you can’t even seem to get it to hit a breakpoint. Just as you’re contemplating drastic action, Visual Studio mercifully puts an end to things:

[Image: OutOfMemory]

You're not very close to getting all of the records in, so you decide to divide the original 563 million by 24 in order to model the mail being processed on an hourly basis, but that's about as granular as the model can reasonably get. You fire up the program again to run with 23,458,333 and hit out of memory once again.

So, What to Do?

If you look at your creation of letters (and presumably later, postcards) you should notice that there’s a fair bit of waste here. For every letter you create, you’re creating four decimals that are exactly the same for every letter, which means that you’re storing four unnecessary decimals for all but one of the letters you create, which in turn means that you’re storing somewhere in the neighborhood of two billion unnecessary decimal instances in memory. Whoah.
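
To put a rough number on that (a back-of-the-envelope sketch rather than a profiler measurement, assuming .NET's 16-byte decimal and ignoring object headers and the list itself):

long letterCount = 563000000;              // one day's mail volume
long redundantBytesPerLetter = 4 * 16;     // four duplicated decimals at 16 bytes apiece
long redundantBytes = letterCount * redundantBytesPerLetter;
Console.WriteLine("Roughly {0:F1} GB of duplicated decimals alone",
    redundantBytes / (1024.0 * 1024.0 * 1024.0));
// Prints something in the neighborhood of 33.6 GB before counting anything else.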

Let’s try a different approach. What we really seem to want here is 23 and a half million zip codes and a single letter. So let’s start off simply by building a list of that many zip codes and seeing what happens:

static void Main(string[] args)
{
    var zipCodes = new List<int>();
    for (int index = 0; index < 23458333; index++)
    {
        zipCodes.Add(index % 100000);
    }
 
    Console.ReadLine();
}

Woohoo! That runs, and for the first time we have something that doesn’t crash. But what we’re really looking to do is cycle through all of those mail pieces, printing out their zip, postage, and dimensions. Let’s start off by cheating:

static void Main(string[] args)
{
    var zipCodes = new List<int>();
    var letter = new Letter();
    for (int index = 0; index < 23458333; index++)
    {
        zipCodes.Add(index % 100000);
        Console.Write(string.Format("Zip code is {0}, postage is {1} and height is {2}",
            zipCodes[index], letter.Postage, letter.Height));
    }
 
    Console.ReadLine();
}

That gets us what we want, but there’s a lot of ugliness. The client class has to know about the exact mechanics of the mail piece’s printing details, which is a no-no. It also has the responsibility for keeping track of the flyweight letter class. There are better candidates for both of these responsibilities. First of all, let’s move the mechanism for printing information into the mail piece class itself.

public abstract class MailPiece
{
    public abstract decimal Postage { get; set; }
    public abstract decimal Width { get; set; }
    public abstract decimal Height { get; set; }
    public abstract decimal Thickness { get; set; }
 
    public string GetStatistics(int zipCode)
    {
        return string.Format("Zip code is {0}, postage is {1} and height is {2}",
            zipCode, Postage, Height);
    }
}

Notice that we’ve removed the settable “DestinationZip” and added a method that prints piece statistics, given a zip code. That allows us to simplify the client code:

static void Main(string[] args)
{
    var zipCodes = new List<int>();
    var letter = new Letter();
    for (int index = 0; index < 23458333; index++)
    {
        zipCodes.Add(index % 100000);
        string pieceStatistics = letter.GetStatistics(zipCodes[index]);
        Console.Write(pieceStatistics);
    }
 
    Console.ReadLine();
}

Asking the letter object for its statistics definitely feels like a better approach. It’s nice to get that bit out of the client implementation, particularly because it will now work for postcards as well and any future mail pieces that we decide to implement. But we’re still worrying about instance management of the flyweights in the client class, which really doesn’t care about them. Let’s introduce a new class:

public class MailPieceFactory
{
    private readonly Dictionary<char, MailPiece> _mailPieces = new Dictionary<char,MailPiece>();
 
    public MailPiece GetPiece(char key)
    {
        if (!_mailPieces.ContainsKey(key))
            _mailPieces[key] = BuildPiece(key);
 
        return _mailPieces[key];            
    }
 
    private static MailPiece BuildPiece(char key)
    {
        switch (key)
        {
            case 'P': return new Postcard();
            default: return new Letter();
        }
    }
}

Here we have a public method called "GetPiece" to which we pass a key indicating what sort of piece we want. As far as client code goes, that's all that matters. Under the hood, we're managing the actual flyweights. If the dictionary doesn't have the key we're interested in, we build a piece for that key. If it does, we simply return the flyweight from the hash. (Note that the scheme of indexing them by characters will scale, but isn't really optimal at the moment–you could use an enumeration here, restrict the set of allowed keys with preconditions, or have a special default scheme, as sketched below.)
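
For instance, an enumeration-keyed variant might look something like the following. This is just a sketch to illustrate that note; the rest of the post sticks with the character-based scheme.

public enum MailPieceType
{
    Letter,
    Postcard
}
 
public class TypedMailPieceFactory
{
    private readonly Dictionary<MailPieceType, MailPiece> _mailPieces = new Dictionary<MailPieceType, MailPiece>();
 
    public MailPiece GetPiece(MailPieceType type)
    {
        // The enum means callers simply can't ask for a key the factory doesn't understand.
        if (!_mailPieces.ContainsKey(type))
            _mailPieces[type] = type == MailPieceType.Postcard ? (MailPiece)new Postcard() : new Letter();
 
        return _mailPieces[type];
    }
}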

Let’s see what the client code looks like.

static void Main(string[] args)
{
    var factory = new MailPieceFactory();
    var zipCodes = new List<int>();
 
    for (int index = 0; index < 23458333; index++)
    {
        zipCodes.Add(index % 100000);
        var randomPiece = factory.GetPiece(GetRandomKey());
        string pieceStatistics = randomPiece.GetStatistics(zipCodes[index]);
        Console.Write(pieceStatistics);
    }
 
    Console.ReadLine();
}
 
// A single shared Random avoids re-seeding on every call, which would produce long runs of identical keys.
private static readonly Random _random = new Random();
 
private static char GetRandomKey()
{
    return _random.Next() % 2 == 0 ? 'P' : 'L';
}

There. Now the client code doesn’t worry at all about keeping track of the flyweights or about how to format the statistics of the mail piece. It simply goes through a bunch of zip codes, creating random pieces for them, and printing out their details. (It could actually dispense with the list of zip codes, as implemented, but leaving them in at this point underscores the memory benefit of the flyweight pattern.)

There’s now some nice extensibility here as well. If we implemented flats or parcels, it’d be pretty easy to accommodate. You’d just add the appropriate class and then amend the factory, and you’d be off and running. When dealing with scales of this magnitude, memory management is always going to be a challenge, so using a pattern up front that reuses and consolidates common information will be a big help.
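
As a quick illustration of that extensibility (with made-up dimensions and postage, purely for the sake of the sketch), a flat would just be one more subclass:

public class Flat : MailPiece
{
    public override decimal Postage { get; set; }
    public override decimal Width { get; set; }
    public override decimal Height { get; set; }
    public override decimal Thickness { get; set; }
 
    public Flat()
    {
        Postage = 0.92M;   // hypothetical values, for illustration only
        Width = 15.0M;
        Height = 12.0M;
        Thickness = 0.75M;
    }
}

Then you'd add something like case 'F': return new Flat(); to the factory's BuildPiece method, and the existing client code would start handling flats without any modification.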

A More Official Explanation

According to dofactory, the purpose of the Flyweight pattern is:

Use sharing to support large numbers of fine-grained objects efficiently.

The Flyweight pattern requires that you take the objects whose information you want to share for scaling purposes and divide their state into two categories: intrinsic and extrinsic. Intrinsic state is the state that will be shared across all instances of the same type of object, and extrinsic state is the state that will be stored outside of the instance containing common information. The reason for this division is to allow as much commonality as possible to be stored in a single object. In a way, it is reminiscent of simple class inheritance where as much common functionality as possible is abstracted to the base class. The difference here is that a single instance stores common information and an external interest holds the differences.

To make the most of the pattern, flyweight creation and management is abstracted into one class (the factory), intrinsic state to another set of classes (the concrete flyweights), and extrinsic state management to a third class (the client). Separating these concerns allows the client to incur only the memory overhead of the part of an object that varies while still reaping all of the benefits of OOP and polymorphism. This is a powerful optimization.

Other Quick Examples

  1. The iconic example of the Flyweight pattern is to have flyweights represent characters in a word processing document, with the extrinsic state of position in the document.
  2. Another use is rendering in graphical applications with many similar shapes or patterns on the screen in configurable locations.
  3. It can also be used to represent some fungible commodity of which there are many in a video game, such as enemies, soldiers, scenery elements, etc.
  4. One of the more interesting and practical examples is string “interning” in C# (see the sketch below).
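
Regarding that last example, interning is essentially the framework flyweighting strings on your behalf: identical literals are stored once and shared, and you can opt other strings in manually. A quick sketch:

string first = "flyweight";
string second = "flyweight";
Console.WriteLine(object.ReferenceEquals(first, second));   // True: literals share one interned instance
 
string built = new string("flyweight".ToCharArray());
Console.WriteLine(object.ReferenceEquals(first, built));    // False: a separate string instance
 
string interned = string.Intern(built);
Console.WriteLine(object.ReferenceEquals(first, interned)); // True: Intern hands back the shared instance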

A Good Fit – When to Use

As the cost of memory and disk space continues to plummet, emphasis on this pattern seems to be waning a bit. However, there will always be occasions in which resource management and optimization are crucial, and bottlenecks will always exist where even liberally purchased resources need to be used wisely. The flyweight pattern is a good fit any time you have redundant, common state in many objects that you're creating. If you find yourself in this position, the pattern is really a no-brainer since it's simple and costs very little to implement in terms of code complexity.

Square Peg, Round Hole – When Not to Use

This isn't the kind of pattern where you're likely to go out looking for excuses to try it out, since it solves a very specific problem that is different from a lot of other patterns. I would exercise more caution when choosing extrinsic and intrinsic state, in that you're going to experience downstream pain if you label things "intrinsic" that actually can vary. Doing this threatens to break the pattern and the abstraction that you've created, since modifying state that's categorized as intrinsic will mean that your flyweights are no longer interchangeable and you're going to start getting stuff wrong.

So this isn’t a good fit if you don’t have a good grasp yet on what should be extrinsic and what should be intrinsic. It’s also a bad fit if all state is extrinsic, as then there is no gain from the pattern. Another potential misuse that’s always worth mentioning with an optimization strategy is premature optimization. While I personally consider the elimination of duplication at runtime simply to be good design, I’d say that you might want to consider not bothering with the additional compile-time complexity if you’re not going to have very many objects. If, for instance, you were simulating twenty pieces of mail instead of tens of millions, it might not be worth the additional complexity.

So What? Why is this Better?

This one pretty much sells itself. In the example above, you can run the code without crashing or altering your data structures. Generally speaking, this is a pattern that helps you reduce your application’s memory footprint simply by creating a clever abstraction and shuffling the internal representation. If you can identify when there will be common information in volume, you can separate it from the unique information and make sure you’re being as efficient as possible.

By

New Theme for DaedTech

As you may have noticed, if you’re not reading this via RSS, I’ve changed the look and feel of the site a bit. I wanted to keep the overall theme of the site while getting a little more modern with the UX, and this is the result. The content portion doesn’t expand to the full width of your screen on higher resolution, things are a little more tiled, and the blog home no longer displays all text, pictures, etc.

I accomplished this by installing the catch box theme and then modifying the CSS to get it to look mostly similar to daedtech as it existed before. I also made modifications to some of the theme’s PHP pages. I looked at a pretty good cross-section of posts and pages to make sure I ironed out all of the kinks, but no doubt some of you will find ones I missed in the first 45 minutes the theme is live, so please feel free to let me know if anything looks weird or wrong to you, either via email or comments.

My hope is that the theme choice and modifications help to improve the site’s performance. I’m also planning to make some additional modifications along those lines to speed things up in the next few weeks as well, time allowing.

Cheers!

By

Introduction to Web API: Yes, It’s That Easy

REST Web Services Are Fundamental

In my career, I’ve sort of drifted like a wraith among various technology stacks and platforms, working on web sites, desktop apps, drivers, or even OS/kernel level stuff. Anything that you might work on has its enthusiasts, its peculiar culture, and its best practices and habits. It’s interesting to bop around a little and get some cross-pollination so that you can see concepts that are truly transcendent and worth knowing. In this list of concepts, I might include Boolean logic, the DRY principle, a knowledge of data structures, the publish/subscribe pattern, resource contention, etc. I think that no matter what sort of programmer you are, you should be at least aware of these things as they’re table stakes for reasoning well about computer automation at any level.

Add REST services to that list. That may seem weird when compared with the fundamental concepts I’ve described above, but I think it’s just as fundamental. At its core, REST embodies a universal way of saying, “here’s a thing, and here’s what you can do to that thing.” When considered this way, it’s not so different from “DRY,” or “data structures,” or “publish/subscribe” (“only define something once,” “here are different ways to organize things,” and, “here’s how things can do one way communication,” respectively). REST is a powerful reasoning concept that’s likely to be at the core of our increasing connectedness and our growing “internet of things.”

So even if you write kernel code or Winforms desktop apps or COBOL or something, I think it's worth pausing, digressing a little, and taking a very shallow dive into how this thing works. It's worth doing once, quickly, so you at least understand what's possible. Seriously. Spend three minutes doing this right now. If you stumbled across this on Google while looking for an introduction to Web API, skim no further, because here's how you can create your very own REST endpoint with almost no work.

Getting Started with Web API

Prerequisites for this exercise are as follows:

  1. Visual Studio (I’m using 2012 Professional)
  2. Fiddler

With just these two tools, you’re going to create a REST web service, run it in a server, make a valid request, and receive a valid response. This is going to be possible and stupid-easy by virtue of a framework called Web API.

  1. Fire up Visual Studio and click “New->Project”.
  2. Select “Web” under Visual C# and then choose “ASP.NET MVC 4 Web Application.”
    [Screenshot: MVC project selection]
  3. Now, choose “Web API” as the template and click “OK” to create the project.
    [Screenshot: Web API template selection]

  4. You will see a default controller file created containing the stock ValuesController class. (A sketch of what that class looks like appears just after these steps.)
  5. Hit F5 to start IIS express and your web service. It will launch a browser window that takes you to the default page and explains a few things to you about REST. Copy the URL, which will be http://localhost:12345, where 12345 is your local port number that the server is running on.
  6. Now launch Fiddler and paste the copied URL into the URL bar next to the dropdown showing “GET” and add api/values after it. Go to the request header section, add “Content-Type: application/json”, and press Execute (near top right).
    [Screenshot: Fiddler request]
    (Note — I corrected the typo in the screenshots after I had already taken and uploaded them)
  7. A 200 result will appear in the results panel at the left. Double click it and you'll see a little tree view with a JSON node and "value1" and "value2" under it as children. Click on the "Raw" view to see the actual text of the response.
    [Screenshot: Fiddler response]

  8. What you're looking at is the data returned by the "Get()" method in your controller as a raw HTTP response. If you switch to "TextView", you'll see just the JSON ["value1","value2"], which is what this thing will look like to most JSON-savvy parsing tools.
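
For reference, if you're not following along in Visual Studio, the stock ValuesController from step 4 looks essentially like this. I'm reproducing it from memory of the VS 2012 Web API template, so minor details may differ in your version:

public class ValuesController : ApiController
{
    // GET api/values
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
 
    // GET api/values/5
    public string Get(int id)
    {
        return "value";
    }
 
    // POST api/values
    public void Post([FromBody]string value)
    {
    }
 
    // PUT api/values/5
    public void Put(int id, [FromBody]string value)
    {
    }
 
    // DELETE api/values/5
    public void Delete(int id)
    {
    }
}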

So what’s all of the fuss about? Why am I so enamored with this concept?

Well, think about what you’ve done here without actually touching a line of code. Imagine that I deployed this thing to http://www.daedtech.com/api/values. If you want to know what values I had available for you, all you’d need to do is send a GET request to that URL, and you’d get a bare-bones response that you could easily parse. It wouldn’t be too much of a stretch for me to get rid of those hard-coded values and read them from a file, or from a database, or the National Weather Service, or from Twitter hashtag “HowToGetOutOfAConversation,” or anything at all. Without writing any code at all, I’ve defined a universally accessible “what”–a thing–that you can access.
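
To put that in code terms, swapping the hard-coded values for, say, a file on disk is a change of a couple of lines in the controller (the path here is purely hypothetical), and callers would never know the difference:

// GET api/values -- same URL, same verb, different source of truth
// (requires a using directive for System.IO)
public IEnumerable<string> Get()
{
    return File.ReadAllLines(@"C:\Data\values.txt");   // hypothetical file path
}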

We generally think of URLs in the context of places where we go to get HTML, but they’re rapidly evolving into more than that. They’re where we go to get a thing. And increasingly, the thing that they get is dictated by the parameters of the request and the HTTP verb supplied (GET, POST, etc.–I won’t get too detailed here). This is incredibly powerful because we’re eliminating the question “where” and decoupling “how” from “what” on a global scale. There’s only one possible place on earth you can go to get the Daedtech values collection, and it’s obviously at daedtech.com/api/values. You want a particular value with id 12? Well, clearly that’s at daedtech.com/api/values/12–just send over a GET if you want it. (I should note you’ll just 404 if you actually try these URLs.)

So take Web API for a test drive and kick the tires. Let the powerful simplicity of it wash over you a bit, and then let your mind run wild with the architectural possibilities of building endpoints that can talk to each other without caring about web server, OS, programming language, platform, device-type, protocol setup and handshaking, or really anything but the simple, stateless HTTP protocol. No matter what kind of programming you do at what level, I imagine that you’re going to need information from the internet at some point in the next ten years–you ought to learn the basic mechanics of getting it.

By

Characterization Tests

The Origins of Legacy Code

I’ve been reading the Michael Feathers book, Working Effectively with Legacy Code and enjoying it immensely. It’s pushing ten years old, but it stands the test of time quite well–probably much better than some of the systems it uses as examples. And there is a lot of wisdom to take from it.

When Michael describes “legacy code,” he isn’t using the definition as you’re probably accustomed to seeing. I’d hazard a guess that your definition would be something along the lines of “code written by departed developers” or maybe just “old, bad code.” But Michael defines legacy code as any code that isn’t covered by automated regression tests (read: unit tests). So it’s entirely possible and common for developers to be writing code that’s legacy code as soon as it’s checked in.

I like this definition a lot, and not, as some might suspect, out of any purism. I'm not equating "legacy" with "bad," embracing the definition as a backhanded way to say that people who don't develop the way that I do write bad code. The reason I like the "test-less" definition of "legacy code" is that it brings to the fore the largest association that I have with legacy code, which is fear of changing it.

Think about what runs through your head when you're tasked with making changes to some densely-packed, crusty old system. It's probably a sense of honest-to-goodness unease or demotivation as you realize the odds of getting things right are low and the odds of headaches and blame are high. The code is a rat's nest of dependencies and weird work-arounds, and you know that touching it will be painful.

Now consider another situation that’s different but with similar results. You have some assignment that you’ve worked on for weeks or months. It’s complicated, the customer isn’t sure what he wants, there have been lots of hiccups and setbacks, and there’s budget and deadline pressure. At the bitter end, after a few all-nighters, a bit of scope reduction, and some concessions, you somehow finally get all of the key features working for the most part. You check in the code for shipping, thinking, “I have no idea how this is working, but thank God it is, and I am never touching that again!” You’ve written code that was legacy code from the get-go.

Legacy code isn’t just the bad code that the team before you wrote, or some crusty old stuff from three language versions ago, or some internal homegrown VBA and Excel written by Steve, who’s actually an accountant. Legacy code is any code that you don’t want to touch because it’s fragile.

Getting Things Under Control

In his book, Michael Feathers lays out a lot of excellent strategies for taming out-of-control legacy code. I highly recommend giving it a read. But he coins a term and technique that I’d like to mention today. It’s something that I think programmers should be aware of because it helps lower the barriers to getting started with unit testing. And that term is “characterization tests.”

Characterization tests are the “I’m Okay, You’re Okay,” Rorschach approach to documenting code. There are no wrong answers–just documenting the way things exist. So if you have a method called AddTwoNumbers(int, int) and it returns 12 when you feed it 1 and 1, you don’t say “that’s wrong.” Instead you write a test that documents that return value and you move on, seeing and documenting what it does with other inputs.
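
Here's a minimal sketch of such a test in MSTest, using the hypothetical AddTwoNumbers method and a made-up LegacyCalculator class:

using Microsoft.VisualStudio.TestTools.UnitTesting;
 
[TestClass]
public class CalculatorCharacterizationTests
{
    [TestMethod]
    public void AddTwoNumbers_Returns_12_When_Passed_1_And_1()
    {
        var target = new LegacyCalculator();   // hypothetical legacy class under test
 
        int result = target.AddTwoNumbers(1, 1);
 
        // 12 isn't "right," but it is what the system actually does, so the
        // characterization test documents it exactly as it stands.
        Assert.AreEqual(12, result);
    }
}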

Sound crazy? Well, it’s really not. It’s not crazy because things like this actually happen in real life. When code goes live, people work their processes around it, however much its behavior may be goofy or unintended. Once code is in the wild, “right” and “wrong” cease to matter, and the requirements as they existed some time in the past are forgotten. There only is “what is.” Sounds very zen, and perhaps it is.

One of the most common objections when it comes to unit testing is from developers that work on legacy systems where code is hard to test and no tests exist. They’ll say that they’d do things differently if starting from scratch (which usually turns out not to be true), but that there’s just no tackling it now. And this is a valid objection–it can be very hard to get anything under test. But characterization tests at least remove one barrier to testing, which is having extensive experience writing proper unit tests.

With characterization tests, it’s really easy. Just write a unit test that gets in the vicinity of what you want to document, finagle it until it doesn’t throw runtime exceptions, assert something–anything–and watch the test fail. When it fails, make note of what the expected and actual were, and just change the expected to the actual. The test will now pass, and you can move on. Change some method parameters or variables in other classes or even globals–whatever you have access to and can change without collapsing the system.
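
As a rough sketch of that loop, assuming a hypothetical InvoiceCalculator buried somewhere in the legacy system (the constructor arguments and numbers are made up as well):

    [Test]
    public void GetTotal_Of_Three_Line_Items()
    {
        // Step 1: finagle the constructor arguments until nothing throws at runtime.
        var calculator = new InvoiceCalculator(taxRate: 0.0m);

        decimal total = calculator.GetTotal(lineItemCount: 3);

        // Step 2: the first attempt asserted 0m and failed with "Expected: 0 But was: 17.25".
        // Step 3: copy the actual into the expected, re-run, and watch it pass.
        Assert.AreEqual(17.25m, total);
    }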

Through this poking, prodding, and documenting, you’ll start getting a rudimentary picture of what the system does. You’ll also start getting the hang of the characterization test approach (and perhaps unit testing for real as an added bonus). But most importantly, you’ll finally have the beginnings of an automated safety net. There’s no right and wrong per se, but you will start to be able to see when your changes to the system are making it behave differently in ways you didn’t expect. In legacy, different is dangerous, so it’s invaluable to have this notification system in place.

Characterization tests aren’t going to save the day, and they probably aren’t going to be especially easy to write. At times (global state, external dependencies, etc.) they may even be impossible. But if you can get some in place here and there, you can start taking the fear out of interacting with legacy code.


Don’t Take My Advice

A Brief History of My Teeth

When it comes to teeth, I hit the genetic jackpot. I’m pretty sure that, instead of enamel, my teeth are coated in some sort of thin layer of stone. I’ve never had a cavity, and my teeth in general seem impervious to the variety of maladies that afflict other people’s teeth. So this morning, when I was at the dentist, the visit proceeded unremarkably. The hygienist scraped, prodded, and fluoridated my teeth with a quiet efficiency, leaving me free to do little but listen to the sounds of the dentist’s office.

Hygienist

Those sounds are about as far from interesting as sounds get: the occasional hum of the HVAC, people a few rooms over exchanging obligatory inanities about the weather, coughs, etc. But for a brief period of time, the hygienist in the next station over took to scolding her patient like a child (he wasn’t). She ominously assured him that his lack of flossing would catch up to him someday, though I quietly wondered which day that would be, since he was pushing retirement age. I empathized with the poor man because I too had been seen to by that hygienist during previous visits.

The first time I encountered her, a couple of years ago, she asked me if I drank much soda, to which I replied that I obviously did, since my employer stocked the fridge with it for free. I don’t drink coffee, so if I want to partake in the world’s most socially acceptable upper to combat grogginess, it’s Diet Pepsi or Mountain Dew for me. She tsked at me, told me a story of some teenager whose teeth fell out because he had a soda fountain in his basement, and then handed me a brochure with pictures of mouths apparently ravaged by soda to such a state that I imagine the only possible remedy was a complete head transplant. My 31-year streak of complete imperviousness to cavities in spite of drinking soda was now suddenly going to end as my teeth fell inexorably from my head. That is, unless I repented of my evil ways and started drinking water and, occasionally, tea.

I agreed to heed her advice in the way that one generally does when confronted with a pushy nutjob on a mission–I pretended to humor her so that she would leave me alone. When I got home, I threw out the Pamphlet of Dental Doom and didn’t think about her for the next six months, until I came back to the dentist. The first thing she asked when I came in was whether I was still drinking a lot of soda. I was so dumbfounded that I didn’t have the presence of mind to lie my way out of the confrontation. I couldn’t decide whether it would be creepier if she remembered me and my soda habit from six months earlier or if she had made herself some kind of note in my file. Either way, I was in trouble. The previous time, she had been looking to save the sinful, but now she was just pissed. Just as she would with the man I silently commiserated with this morning, she proceeded to spin spiteful tales of the world of pain coming my way.

Freud might have some interesting ideas about a person who likes to stick her fingers in other people’s mouths while trying to scare them, and I don’t doubt that there’s a whole host of ground for her to cover on the proverbial therapist’s couch, but I’ll speak to the low-hanging fruit. She didn’t like that I ignored her advice. She is an expert and I am an amateur. She used her expertise to make a recommendation to me, and I promptly blew it off, which is a subtle slap in the face should an expert choose to view it that way.

It Isn’t Personal

It’s pretty natural to feel slighted when you offer expert advice and the recipient ignores it. This is especially true when the advice was solicited. I recall once going along with my girlfriend to help her pick out a computer and feeling irrationally irritated when, as soon as we got to the store, she saw one she liked and lost all interest in my interpretation of the finer points of processor architectures and motherboard wiring schemes. I can also recall it happening professionally when people solicit advice about programming, architecture, testing, tooling, etc. I’m happy–thrilled–to provide this advice. How dare anyone not take it.

But while it’s often hard to reason your way out of your feelings, it’s a matter of good discipline to reason past irrational reactions. As such, I strive not to take it personally when my advice, even when solicited, goes unheeded. It’s not personal, and I would argue it’s probably a good sign that members of your team are developing some budding self-sufficiency.

Let’s consider three possible cases of what happens when a tech lead or person with more experience offers advice. In the first case, the recipient attempts to heed the advice but fails due to incompetence. In the second, the recipient takes the advice and succeeds with it. In the third and most interesting case, the recipient rejects the advice and does something else instead. Notice that I don’t subdivide this case into success and failure. I honestly don’t think it matters in the long term.

In the first case, you’re either dealing with someone temporarily out of their depth or with someone generally incompetent, an outlier on the lower end of the spectrum. The broad middle is populated by the second case: people who take marching orders well enough and are content to do just that. The third group also consists of outliers, but often high-achieving ones. Why do I say that? Well, because this group is seeking data points rather than instructions. They want to know how an expert would handle the situation, not necessarily so that they can copy the expert, but to get an idea. Members of this group generally want to blaze their own trails, though they may at times behave like the second group for expediency.

But this third group consists of tomorrow’s experts. It doesn’t much matter whether they succeed or fail in the moment because, hey, success is success, and you can be very sure they’ll learn from any failures and won’t repeat their mistakes. They’re learning lessons by fire and experimentation that the middle-of-the-roaders learn as cargo-cult practice. And they’re not dismissing your advice to offend you but rather to learn from it in their own way; they earnestly want to understand and assimilate your expertise.

So when this happens to you as a senior team member/architect/lead/etc., try to fight the urge to be miffed or offended and exercise some patience. Give them some rope and see what they do–they can always be reined in later if they aren’t doing well, but it’s hard to let the rope out if you’ve extinguished their experimental and creative spirit. The last thing you want to be is some woman in a dentist’s office, getting irrationally angry that adults aren’t properly scared of the Cavity Creeps.


Subtle Naming and Declaration Violations of DRY

It’s pretty likely that, as a software developer who reads blogs about software, you’ve heard of the DRY Principle. In case you haven’t, DRY stands for “Don’t Repeat Yourself,” and the gist of it is that there should be only one source of a given piece of information in a system. You’re most likely to hear about this principle in the context of copy-and-paste programming or other, similar rookie programming mistakes, but it also crops up in more subtle ways. I’d like to address a few of those today, and specifically I’m going to talk about violations of DRY in naming things while coding.

DRY

Type Name in the Instance Name

Since the toppling of the evil Hungarian regime from the world of code notation, people rarely do things like naming arrays of integers “intptr_myArray,” but more subdued forms of this practice still exist. You often see them appear in server-templated markup. For instance, how many codebases contain text box tags with names/ids like “CustomerTextBox?” In my experience, tons.

What’s wrong with this? Well, the same thing that’s wrong with declaring an integer by saying “int customerCountInteger = 6.” In static typing schemes, the compiler can do a fine job of keeping track of data types, and the IDE can do a fine job, at any point, of showing you what type something is. Neither of these things nor anyone maintaining your code needs any help in identifying what the type of the thing in question is. So, you’ve included redundant information to no benefit.

If it comes time to change the data type, things get even worse. The best case scenario is that maintainers do twice the work, diligently changing the type and name of the variable. The worst case scenario is that they change the type only and the name of the variable now actively lies about what it points to. Save your maintenance programmers headaches and try to avoid this sort of thing. If you’re having trouble telling at a glance what the datatype of something is, download a plugin or a productivity tool for your IDE or even write one yourself. There are plenty of options out there without baking redundant and eventually misleading information into your code.
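
A quick before-and-after sketch, with the names invented for the example, shows how the name starts to lie:

    public class CustomerReportExamples
    {
        public void AsOriginallyWritten()
        {
            // The name duplicates what the compiler and the IDE already know.
            int customerCountInteger = 6;
        }

        public void AfterAHastyTypeChange()
        {
            // A later requirement forces a wider type; if only the type gets changed,
            // the name now actively lies about what the variable holds.
            long customerCountInteger = 6;
        }

        public void WithoutTheRedundancy()
        {
            // Dropping the type from the name sidesteps the problem entirely.
            long customerCount = 6;
        }
    }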

Inheritance Structure Baked Into Type Names

In situations where object inheritance is used, it can be tempting to name types according to where they appear in the hierarchy. For instance, you might define a base class named BaseFoo, then a child of that named SpecificFoo, and a child of that named EvenMoreSpecificFoo, so that EvenMoreSpecificFoo : SpecificFoo : BaseFoo. But what happens if, during a refactor cycle, you decide to break the inheritance hierarchy or rework things a bit? Well, there’s a good chance you’ll end up in the same awkward position as with the variable renaming in the last section.

Generally you’ll want inheritance schemes to express “is a” relationships. For instance, you might have Sedan : Car : Vehicle as your grandchild : child : parent relationship. Notice that what you don’t have is SedanCarVehicle : CarVehicle : Vehicle. Why would you? Everyone understands these objects and how they relate to one another. If you find yourself needing to remind yourself and maintainers of that relationship, there’s a good chance that you’d be better off using interfaces and composition rather than inheritance.

Obviously there are some exceptions to this concept. A SixCylinderEngine class might reasonably inherit from Engine and you might have a LoggingFooRetrievalService that does nothing but wrap the FooRetrievalService methods with calls to a logger. But it’s definitely worth maintaining an awareness as to whether you’re giving these things the names that you are because those are the best names and/or the extra coupling is appropriate or whether you’re codifying the relationship into the names to demystify your design.
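
For the wrapper case, here’s a minimal sketch of what the interface-and-composition version might look like; Foo, IFooRetrievalService, and the logging delegate are all placeholders:

    using System;

    public class Foo { }

    public interface IFooRetrievalService
    {
        Foo GetFooById(int id);
    }

    // The name describes the class's role (logging) rather than its position in an
    // inheritance hierarchy, and the coupling is to an interface, not a base class.
    public class LoggingFooRetrievalService : IFooRetrievalService
    {
        private readonly IFooRetrievalService _inner;
        private readonly Action<string> _log;

        public LoggingFooRetrievalService(IFooRetrievalService inner, Action<string> log)
        {
            _inner = inner;
            _log = log;
        }

        public Foo GetFooById(int id)
        {
            _log("Retrieving Foo with id " + id);
            return _inner.GetFooById(id);
        }
    }

The wrapper’s name still advertises what it adds, but nothing about the type names forces readers to hold an inheritance hierarchy in their heads, and the wrapper can be removed later without disturbing any base class.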

Explicit Typing in C#

This one may spark a bit of outrage, but there’s no denying that the availability of the var keyword creates a situation where having the code “Foo foo = new Foo()” isn’t DRY. If you practice TDD and find yourself doing a lot of directed or exploratory refactoring, explicit typing becomes a real drag. If I want to generalize some type reference to an interface reference, I have to do it and then track down the compiler errors for its declarations. With implicit typing, I can just generalize and keep going.
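
To illustrate the refactoring friction, here’s a small sketch with Foo and IFoo standing in for whatever types you’re actually working with:

    public interface IFoo { }
    public class Foo : IFoo { }

    public class DeclarationExamples
    {
        public void Declarations()
        {
            // The type appears twice, once on each side of the assignment. Generalizing
            // this reference to IFoo later means editing the declaration as well.
            Foo explicitFoo = new Foo();

            // The type appears exactly once, on the right-hand side, so the declaration
            // does not need to change if the reference is later generalized or the
            // construction is swapped for a factory call.
            var implicitFoo = new Foo();
        }
    }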

I do recognize that this is a matter of opinion when it comes to readability, and that some developers are haunted by Variant in VB6 or reminded of dynamic typing in JavaScript, but there’s really no arguing that this is technically a needless redundancy. For the readability concern, my advice would be to focus on writing code where you don’t need the crutch of specific type reminders inline. For the bad memories of other languages, I’d suggest trying to be more idiomatic with the languages that you use.

Including Namespace in Declarations

A thing I’ve seen done from time to time is fully qualifying types used as return values, parameters, or locals. This usually seems to occur when some automating framework or IDE goody does it for speculative disambiguation in scoping. (In other words, it doesn’t know what namespaces you’ll have, so it fully qualifies the type during code generation to minimize the chance of namespace collisions.) What’s wrong with that? After all, you’re preemptively avoiding naming problems and making your dependencies very obvious (one might say beating readers over the head with them).

Well, one (such as me) might argue that you could avoid namespace collisions just as easily with good naming conventions and organization, and without a DRY violation in your code. If you’re fully scoping all of your types every time you use them, you’re repeating that information everywhere in the file that the type is used, when just once with an include/using/import statement at the top would suffice. What happens if you have some very oft-used type in your codebase and you decide to move it up a level of namespace? A lot more pain if you’ve needlessly repeated that information all over the place. Perhaps enough extra pain to make you live with a bad organization rather than trying to fix it.
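
A small before-and-after, with the namespace invented for the example:

    namespace MyCompany.DataAccess.Repositories
    {
        public class CustomerRepository { }
    }

    // Before: the full namespace is repeated at every use of the type in this file.
    namespace Reporting.Before
    {
        public class OrderService
        {
            private readonly MyCompany.DataAccess.Repositories.CustomerRepository _customers =
                new MyCompany.DataAccess.Repositories.CustomerRepository();

            public MyCompany.DataAccess.Repositories.CustomerRepository Customers
            {
                get { return _customers; }
            }
        }
    }

    // After: the namespace is stated once, so relocating CustomerRepository later means
    // updating a single using directive instead of hunting down every qualified reference.
    namespace Reporting.After
    {
        using MyCompany.DataAccess.Repositories;

        public class OrderService
        {
            private readonly CustomerRepository _customers = new CustomerRepository();

            public CustomerRepository Customers
            {
                get { return _customers; }
            }
        }
    }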

Does It Matter?

These all probably seem fairly nitpicky, and I wouldn’t really dispute that for any given instance of one, or even for the practices themselves across a codebase. But practices like these are death by a thousand cuts to the maintainability of a codebase. The more you work on fast feedback loops, tight development cycles, and getting yourself into the flow of programming, the more you notice little things like these serving as the record skip in the music of your programming.

When NCrunch has just gone green, I’m entering the refactor portion of red-green-refactor, and I decide to change the type of a variable or the relationship between two types, you know what I don’t want to do? Stop my thought process related to reasoning about code, start wondering if the names of things are in sync with their types, and then do awkward find-alls in the solution to check and make sure I’m keeping duplicate information consistent. I don’t want to do that because it’s an unwelcome and needless context shift. It wastes my time and makes me less efficient.

You don’t go fast by typing fast. You don’t go fast, as Uncle Bob often points out, by making a mess (i.e. deciding not to write tests and clean code). You really don’t go fast by duplicating things. You go fast by eliminating all of the noise in all forms that stands between you and managing the domain concepts, business logic, and dependencies in your application. Redundant variable name changes, type swapping, and namespace declaring are all voices that contribute to that noise that you want to eliminate.
