DaedTech

Stories about Software

How Developers Stop Learning: Rise of the Expert Beginner

Beyond the Dead Sea: When Good Software Groups Go Bad

I recently posted what turned out to be a pretty popular post called “How to Keep Your Best Programmers,” in which I described what most skilled programmers tend to want in a job and why they leave if they don’t get it. Today, I’d like to shift the focus to the software group at an organization rather than the individual journeys of developers as they move within or among organizations. This post became long enough as I was writing it that I felt I had to break it into at least two pieces. This is part one.

In the previous post I mentioned, I linked to Bruce Webster’s “Dead Sea Effect” post, which describes a trend whereby the most talented developers tend to be the most marketable and thus the ones most likely to leave for greener pastures when things go a little sour. On the other hand, the least talented developers are more likely to stay put since they’ll have a hard time convincing other companies to hire them. This serves as important perspective for understanding why it’s common to find people with titles like “super-duper-senior-principal-fellow-architect-awesome-dude,” who make a lot of money and perhaps even wield a lot of authority but aren’t very good at what they do. But that perspective still focuses on the individual. It explains the group only if one assumes that a bad group is the result of a number of these individuals happening to work in the same place (or possibly that conditions are so bad that they drive everyone except these people away).

I believe that there is a unique group dynamic that forms and causes the rot of software groups in a way that can’t be explained by bad external decisions causing the talented developers to evaporate. Make no mistake–I believe that Bruce’s Dead Sea Effect is both the catalyst for and the logical outcome of this dynamic, but I believe that some magic has to happen within the group to transmute external stupidities into internal and pervasive software group incompetence. In the next post in this series, I’m going to describe the mechanism by which some software groups trend toward dysfunction and professional toxicity. In this post, I’m going to set the stage by describing how individuals opt into permanent mediocrity and reap rewards for doing so.

Learning to Bowl

Before I get to any of that, I’d like to treat you to the history of my bowling game. Yes, I’m serious.

I am a fairly athletic person. Growing up, I was always picked in at least the top third or so of people for any sport or game that was being played, no matter what it was. I was a jack of all trades and master of none. This inspired in me a sort of mildly inappropriate feeling of entitlement to skill without a lot of effort, and so it went when I became a bowler. Most people who bowl put a thumb and two fingers in the ball and carefully cultivate tossing the bowling ball in a pattern that causes the ball to start wide and hook into the middle. With no patience for learning that, I discovered I could do a pretty good job faking it by putting no fingers or thumbs in the ball and kind of twisting my elbow and chucking the ball down the lane. It wasn’t pretty, but it worked.

It actually worked pretty well the more I bowled, and, when I started to play in an after work league for fun, my average really started to shoot up. I wasn’t the best in the league by any stretch–there were several bowlers, including a former manager of mine, who averaged between 170 and 200, but I rocketed up past 130, 140, and all the way into the 160 range within a few months of playing in the league. Not too shabby.

But then a strange thing happened. I stopped improving. Right at about 160, I topped out. I asked my old manager what I could do to get back on track with improvement, and he said something very interesting to me. Paraphrased, he said something like this:

There’s nothing you can do to improve as long as you keep bowling like that. You’ve maxed out. If you want to get better, you’re going to have to learn to bowl properly. You need a different ball, a different style of throwing it, and you need to put your fingers in it like a big boy. And the worst part is that you’re going to get way worse before you get better, and it will be a good bit of time before you get back to and surpass your current average.

I resisted this for a while but got bored with my lack of improvement and stagnation (a personal trait of mine–I absolutely need to be working toward mastery or I go insane) and resigned myself to the harder course. I bought a bowling ball, had it custom drilled, and started bowling properly. Ironically, I left that job almost immediately after doing that and have bowled probably eight times in the years since, but c’est la vie, I suppose. When I do go, I never have to rent bowling shoes or sift through the alley balls for ones that fit my fingers.

Dreyfus, Rapid Returns and Arrested Development

In 1980, a couple of brothers with the last name Dreyfus proposed a model of skill acquisition that has gone on to have a fair bit of influence on discussions about learning, process, and practice. Later they would go on to publish a book based on this paper and, in that book, they would refine the model a bit to its current form, as shown on Wikipedia. The model lists five phases of skill acquisition: Novice, Advanced Beginner, Competent, Proficient, and Expert. There’s obviously a lot to it, since it takes an entire book to describe it, but the gist of it is that skill acquirers move from “dogmatic following of rules and lack of big picture” to “intuitive transcending of rules and complete understanding of big picture.”

All things being equal, one might assume that there is some sort of natural, linear advancement through these phases, like earning belts in karate or money in the corporate world. But in reality, it doesn’t shake out that way, due to both perception and attitude. At the moment one starts acquiring a skill, one is completely incompetent, which triggers an initial period of frustration and being stymied while waiting for someone, like an instructor, to spoon-feed process steps to the acquirer (or else, as Dreyfus and Dreyfus put it, they “like a baby, pick it up by imitation and floundering”). After a relatively short phase of being a complete initiate, however, one reaches a point where the skill acquisition becomes possible as a solo activity via practice, and the renewed and invigorated acquirer begins to improve quite rapidly as he or she picks “low hanging fruit.” Once all that fruit is picked, however, the unsustainably rapid pace of improvement levels off somewhat, and further proficiency becomes relatively difficult from there forward. I’ve created a graph depicting this (which actually took me an embarrassingly long time because I messed around with plotting a variant of the logistic 1/(1 + e^-x) function instead of drawing a line in Paint like a normal human being).

This is actually the exact path that my bowling game followed from bowling incompetence to some degree of bowling competence. I rapidly improved to the point of competence and then completely leveled off. In my case, improvement hit a local maximum and then stopped altogether, as I was too busy to continue on my path as-is or to follow through with my retooling. This is an example of what, for the purposes of this post, I will call “arrested development.” (I understand the overlap with a loaded psychology term, but forget that definition for our purposes here, please.) In the sense of skills acquisition, one generally realizes arrested development and remains at a static skill level due to one of two reasons: maxing out on aptitude or some kind of willingness to cease meaningful improvement.

For the remainder of this post and this series, let’s discard the first possibility (since most professional programmers wouldn’t max out at or before bare minimum competence) and consider an interesting, specific instance of the second: voluntarily ceasing to improve because of a belief that expert status has been reached and thus further improvement is not possible. This opting into indefinite mediocrity is the entry into an oblique phase in skills acquisition that I will call “Expert Beginner.”

The Expert Beginner

When you consider the Dreyfus model, you’ll notice that there is a trend over time from being heavily rules-oriented and having no understanding of the big picture to being extremely intuitive and fully grasping the big picture. The Advanced Beginner stage is the last one in which the skill acquirer has no understanding of the big picture. As such, it’s the last phase in which the acquirer might confuse himself with an Expert. A Competent has too much of a handle on the big picture to confuse himself with an Expert: he knows what he doesn’t know. This isn’t true during the Advanced Beginner phase, since Advanced Beginners are on the “unskilled” end of the Dunning-Kruger effect and tend to epitomize the notion that, “if I don’t understand it, it must be easy.”

As such, Advanced Beginners can break one of two ways: they can move to Competent and start to grasp the big picture and their place in it, or they can ‘graduate’ to Expert Beginner by assuming that they’ve graduated to Expert. This actually isn’t as immediately ridiculous as it sounds. Let’s go back to my erstwhile bowling career and consider what might have happened had I been the only or best bowler in the alley. I would have started out doing poorly and then quickly picked the low-hanging fruit of skill acquisition to rapidly advance. Dunning-Kruger notwithstanding, I might have rationally concluded that I had a pretty good aptitude for bowling as my skill level grew quickly. And I might also have concluded somewhat rationally (if rather arrogantly) that my leveling off indicated that I had reached the pinnacle of bowling skill. After all, I don’t see anyone around me that’s better than me, and there must be some point of mastery, so I guess I’m there.

The real shame of this is that a couple of inferences that aren’t entirely irrational lead me to a false feeling of achievement and then spur me on to opt out of further improvement. I go from my optimistic self-assessment to a logical fallacy as my bowling career continues: “I know that I’m doing it right because, as an expert, I’m pretty much doing everything right by definition.” (For you logical fallacy buffs, this is circular reasoning/begging the question). Looking at the graphic on the right, you’ll notice that it depicts a state machine of the Dreyfus model as you would expect it. At each stage, one might either progress to the next one or remain in the current one (with the exception of Novice and Advanced Beginner, phases at which I feel one can’t really remain without abandoning the activity). The difference is that I’ve added the Expert Beginner to the chart as well.

The Expert Beginner has nowhere to go because progression requires an understanding that he has a lot of work to do, and that is not a readily available conclusion. You’ll notice that the Expert Beginner is positioned slightly above Advanced Beginner but not on the level of Competence. This is because he is not competent enough to grasp the big picture and recognize the irony of his situation, but he is slightly more competent than the Advanced Beginner due mainly to, well, extensive practice being a Beginner. If you’ve ever heard the aphorism about “ten years of experience or the same year of experience ten times,” the Expert Beginner is the epitome of the latter. The Expert Beginner has perfected the craft of bowling a 160 out of 300 possible points by doing exactly the same thing week in and week out with no significant deviations from routine or desire to experiment. This is because he believes that 160 is the best possible score by virtue of the fact that he scored it.

Expert Beginners in Software

Software is, unsurprisingly, not like bowling. In bowling, feedback cycles are on the order of minutes, whereas in software, feedback cycles tend to be on the order of months, if not years. And what I’m talking about with software is not the feedback cycle of compile or run or unit tests, which is minutes or seconds, but rather the project. It’s during the full lifetime of a project that a developer gains experience writing code, source controlling it, modifying it, testing it, and living with previous design and architecture decisions during maintenance phases. With everything I’ve just described, a developer is lucky if a first crack at this takes less than six months, which means that, after five years in the industry, maybe they have ten cracks at application development. (This is on average–some will be stuck on a single one this whole time while others will have dozens.)

What this means is that the rapid acquisition phase of a software developer–Advanced Beginnerism–will last for years rather than weeks. And during these years, the software developers are job-hopping and earning promotions, especially these days. As they breeze through rapid acquisition, so too do they breeze through titles like Software Engineer I and II and then maybe “Associate” and “Senior,” and perhaps eventually on up to “Lead” and “Architect” and “Principal.” So while in the throes of Dunning-Kruger and Advanced Beginnerism, they’re being given expert-sounding titles and told that they’re “rock stars” and “ninjas” and whatever by recruiters–especially in today’s economy. The only thing stopping them from taking the natural step into the Expert Beginner stage is a combination of peer review and interaction with the development community at large.

But what happens when the Advanced Beginner doesn’t care enough to interact with the broader community and for whatever reason doesn’t have much interaction with peers? The Daily WTF is filled with such examples. They fail even while convinced that the failure is everyone else’s fault, and the nature of the game is such that blaming others is easy and handy to relieve any cognitive dissonance. They come to the conclusion that they’ve quickly reached Expert status and there’s nowhere left to go. They’ve officially become Expert Beginners, and they’re ready to entrench themselves into some niche in an organization and collect a huge paycheck because no one around them, including them, realizes that they can do a lot better.

Until Next Time

And so we have chronicled the rise of the Expert Beginner: where they come from and why they stop progressing. In the next post in this series, I will explore the mechanics by which one or more Expert Beginners create a degenerative situation in which they actively cause festering and rot in the dynamics of groups that have talented members or could otherwise be healthy.

How Software Groups Rot: Legacy of the Expert Beginner

Edit: The E-Book is now available. Here is the publisher website, which contains links to the different media for which the book is available.

You Can Tell a Lot About Developers From Their Scratchpads

What is a Scratchpad?

When you have a question like “what exception is thrown when I try to convert a long that is too big to fit into an integer?” or “what is the difference in performance between iterating over a list or a dictionary converted to a list?” what do you do? Google it? Phone a friend? Open up an instance of your IDE and give it a try? I think that sooner or later, most developers ‘graduate’ to the latter, particularly for things that can’t be answered with a quick search or question of a coworker. And, if they wind up doing this enough times, they start creating a project or projects with titles like “dummy”, “scratchpad”, “throwaway”, “junk”, etc. — you get the idea.
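For instance, the first of those questions has a one-minute answer in a scratchpad. Here’s a rough sketch of the kind of throwaway snippet I mean (assuming a disposable console project; the key detail is that the cast only throws if you ask for overflow checking):

long tooBig = (long)int.MaxValue + 1;

try
{
    int converted = checked((int)tooBig); // 'checked' forces overflow checking on the cast
    Console.WriteLine(converted);
}
catch (OverflowException)
{
    Console.WriteLine("OverflowException"); // this is what actually gets printed
}

Console.WriteLine(unchecked((int)tooBig)); // without 'checked', the value silently wraps to -2147483648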

As this practice grows and flourishes, it starts to replace Google searches and asking a coworker, to a degree, even when those things might be quicker. If you get very used to setting up experiments in this fashion, it tends to be almost as quick as a search, and there’s less potential for wrong or misleading information. This is a very useful practice but, more interestingly for the purposes of this post, I think the makeup of this scratchpad (I’m using this term because it’s what I name mine) can tell you a lot about the developer using it. In fact, I believe strongly enough in this that I see a lot of value in the interview question “tell me about your scratchpad and how you use it.”

I’ll say in advance that I’m going to paint with some broad strokes here. So, please read the following while bearing in mind the caveat that there will, of course, be exceptions to the ‘rules’ that I’m describing. In the examples below that describe a single type of scratchpad, please note that I’m referring to situations where this is the primary or only scratchpad that the developer uses. I’ll describe the “arsenal” of scratchpads further down.

No Scratchpad

If someone doesn’t have something like this, they’re quite likely either not very seasoned as a developer or else not very inquisitive when it comes to understanding their craft. The lack of such a playpen project implies that they either don’t experiment or that they experiment in production code, which is pretty low rent and sloppy. (One possible exception is that they experiment in a project’s unit tests, which is actually a pretty reasonable thing to do, and an example of a reason for my “broad strokes” caveat). In either case, they’re lacking a place that they can go in and just throw some code around to see what happens.

Extrapolating out to the way a person with no scratchpad works, it seems very likely that you will observe a lot of “debugger-driven development” and “programming by coincidence”. The reason I say this is that this person’s natural reaction to a lack of understanding of some facet of code will most likely be to try it in the production code, fire up the application, and hope it works. I mean, lacking a playpen and tests, what else can you really do? It’s all well and good to ask a coworker or read something on Stack Overflow, but at some point an implementation has to happen, and when it does, it’s going to be in the production code and it’s probably going to be copy-pasted in as a matter of faith.

I’d be leery of a programmer with no scratchpad. It might be lack of experience, lack of curiosity, or lack of interest, but I’d view this as most likely a lack of something.

Application Scratchpad

This is a scratchpad project that has the kitchen sink, from persistence on up to the GUI. In shops/groups that make a single product, it may be as simple as a duplicate of that project. In shops with more projects, it may be one of them or a composite of them. But, whatever it is, it’s a full-fledged application.

Extra points here for being clever enough to have a consequence-free, dummy environment and for recognizing the value of experimentation. This is usually a sign of an inquisitive developer who wants to take things apart and poke at them, or at least a conscientious one who wants to try things out in dress-rehearsal before going live. I’d take some points away, however, for the monolithic approach here, assuming that this is the primary scratchpad and not an auxiliary or temporary approach. The reason I’m not such a fan of this is that it’s indicative of someone who hasn’t yet learned to think of an application in seams, components, modules, units, etc.

I mean, think about how this goes. “I wonder what happens if I use a string array instead of a string list here” leads to trying out the change and then firing up an entire application, complete with GUI bells and whistles, database access, file reads/writes, loggers, etc. This represents a view of the software that essentially says “it’s all or nothing, baby — here we go.” Good on them for the experimentation, but we can certainly do better.

Console Based Scratchpad

The console scratchpad (and this is a little Visual Studio specific, though it’s applicable to other IDE/environments as well) is one in which the main running application is one that writes to and reads from the console. If you wanted to time stuff you’d be doing so using a lot of System.out.println() or Console.WriteLine() or whatever the equivalent in your choice of language.
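As a concrete sketch of the sort of experiment that lives in a console scratchpad, here’s roughly how one might poke at the list-versus-dictionary question from earlier (this assumes System.Linq and System.Diagnostics are available; the specific numbers are beside the point):

var list = Enumerable.Range(0, 1000000).ToList();
var dictionaryAsList = list.ToDictionary(i => i, i => i).ToList();

var watch = Stopwatch.StartNew();
long sum = 0;
foreach (var item in list)
    sum += item;                 // iterate the plain list
watch.Stop();
Console.WriteLine("List<int>: {0} ms", watch.ElapsedMilliseconds);

watch.Restart();
sum = 0;
foreach (var pair in dictionaryAsList)
    sum += pair.Value;           // iterate the dictionary-converted-to-list
watch.Stop();
Console.WriteLine("List<KeyValuePair<int, int>>: {0} ms", watch.ElapsedMilliseconds);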

This correlates with the good characteristics of the GUI/application scratchpad (curiosity, judiciousness) but adds a bit more sophistication. A developer with a console-based scratchpad understands how to decompose and abstract concepts and how to isolate problems. I think that this is indicative of a competent, and potentially great, developer, perhaps one ready for a push.

Class Library Scratchpad

The most sophisticated scratchpad, in my mind, is the class library scratchpad. This is a scratchpad that has no program output of any sort and cannot be run independently. What this means is that the developer is used to reasoning about units and experimenting in very small, isolated, fast feedback loops. Someone doing things this way clearly understands that increasingly complex applications are assembled from solutions to manageable problems rather than with some kind of irreducibly complex, inter-connected gestalt. Someone with this style of scratchpad is almost certainly a unit tester and most likely a practitioner of TDD (I certainly recognize my own bias here). But the most important thing here is the ultra-fast and simple feedback loop, where answers to “how does this work” are provided right in the IDE itself (and extra points for anyone running a continuous integration tool like NCrunch).
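To make that concrete, an experiment in a class library scratchpad is just a throwaway test. A minimal sketch (MSTest is purely an assumption here; NUnit or xUnit would look nearly identical):

[TestClass]
public class SplitExperiments
{
    [TestMethod]
    public void Split_On_Dot_With_RemoveEmptyEntries_Discards_The_Empty_Token()
    {
        var tokens = "Erik..Dietrich".Split(new[] { '.' }, StringSplitOptions.RemoveEmptyEntries);

        // The answer to "how does this work?" is simply a green or red test in the IDE.
        Assert.AreEqual(2, tokens.Length);
    }
}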

Monolithic Scratchpad

A monolithic scratchpad is one that tends to be a developer’s only scratchpad. I think that this tends to be indicative of an architecturally minded developer capable of thinking in significant abstractions. The monolithic scratchpad is one that encompasses all of the projects above in one. It has test projects for fast feedback. It has a persistence model or two and a domain of sorts. It has a web front end and a desktop front end and mobile front end. In short, the developer organically evolves this scratchpad to suit whatever experimentation needs arise, demonstrating an ability to grow an application in response to whatever needs arise. Someone with this sort of scratchpad setup is someone I’d keep my eye on as a potential architect.

Arsenal of Scratchpads

An arsenal of scratchpads is indicative of a developer seasoned enough to understand that different questions call for different environments, but someone more silo-oriented and, perhaps, very well organized in a sense. While it may not occur to this person to blend scratchpad needs into an ad-hoc, decently architected application, the work nevertheless exists in such a way that it’s easy to find, access, and use. Since the response to new situations is new scratchpads, the critical fast feedback loop is there since they never get very big. I think that this practice is indicative of someone who might excel in small to medium sized picture development work or, who might do well on the path to project management. A monolith scratchpadder and an arsenal scratchpadder could probably form a pretty good leadership tandem on a project.

What Does Your Scratchpad Look Like?

I’d like to reemphasize that this is a fairly lighthearted post that I fully acknowledge may completely miss the mark in categorizing some people. But, I do think there’s a grain of truth to it, and I stand by my assertion that this would make for an excellent interview question. If you have one of these types of scratchpads and either reinforce or poke holes in my premise, please weigh in. Same if you have a scratchpad setup I hadn’t thought of.

Announcement: New Post Categories

In the wee hours of the morning this morning/last night, I did an overhaul of post categories on the blog. When I started out, I didn’t really see the purpose of categories versus tags, so I just made everything a category and had the tag cloud display categories. That was a workable solution at the time, but ultimately not scalable, as anyone who saw the giant category list can attest. Over the course of time, I came to realize this but allowed myself to be a boiled frog until last night I said “screw it” and hopped out of the pot. I downloaded a helpful WordPress plugin so that I didn’t have to mess with the database directly and changed all of my existing categories to tags.

From there, I created a handful of more general categories that I feel reflect the broader categories of my posting. Some of them are obvious (.NET and Java) and others are named for running series of posts that I do (abstractions, design patterns, etc). I went rather tediously back through all of my posts and assigned them categories that seemed a good fit, though I must admit I didn’t spend a ton of time on the oldest posts. I also am generally trying to minimize the number of categories assigned to posts – one is ideal and some may occasionally be in two, but that’s really about it, I think. We’ll see. This new classification system is rather evolutionary, so I’ll adapt it as I make new posts and use it from here forward. If you have any issues with a post’s categorization, please feel free to comment in that post or to shoot me an email (erik at my domain).

Also, I should note that there is no loss of information. The previous categories are now all tags, so if you wanted to see all “WPF” posts for instance, you can go to http://www.daedtech.com/tag/wpf instead of http://www.daedtech.com/category/wpf. I’m hoping that in general this makes the site a little more readable and organized.

Splitting Strings With Substrings

The String.Split() method in C# is probably something with which any C# developer is familiar.

string x = "Erik.Dietrich";
var tokens = x.Split('.');

Here, “tokens” will be an array of strings containing “Erik” and “Dietrich”. It’s not exactly earth-shattering to tokenize a string in this fashion, and some incarnation or another of this predates .NET, C#, and probably even my time on this planet.

But what if we want to split on a string instead? What if we have “..” as a delimiter instead of ‘.’ and we want to split “Erik..Dietrich” in the same way? Probably an overload of String.Split() that takes a string instead of a char, right? Well, actually no. As it turns out, the API for string.Split() is pretty unintuitive.

First of all, that call to x.Split('.') is not actually invoking Split(char), but rather Split(params char[]), notwithstanding the fact that this isn’t advertised in the MSDN page unless you drill into the individual method. So, calling x.Split('.') and x.Split('.', '&', '%', '^') are equally valid, syntax-wise, in the case of “Erik.Dietrich” (and in this case, both will give me back my first and last name).

So, what one might expect is that there would be an overload Split(params string[]) to allow the same behavior as splitting over zero or more characters. Nope. Instead you have Split(string[] separator, StringSplitOptions options). Two things suck about this. One, I have to specify some enum that I don’t care about in the first place and that has only two options, one of which is “none”. I mean, really? You can’t just assume “none” and let users specify a different case if they want with another overload? But what sucks even more about this is that a params argument has to be the last one in the parameter list, so that option is out the window. You no longer get that snazzy params syntax that the char version has, and now you have to actually, awkwardly create a string array. So, here is the new syntax following the old. Note that the new syntax is pretty hideous:

string x = "Erik.Dietrich";
var tokens = x.Split('.');

string y = "Erik..Dietrich";
var newTokens = y.Split(new string[] { ".." }, StringSplitOptions.None);

I was getting ready to write something to hide this mess from myself as a client, when I stumbled across a better alternative than rolling my own extension method or string splitting class: Regex.Split(). Here’s how it works:

string x = "Erik..Dietrich";
var tokens = Regex.Split(x, @"\.\."); // the dots need escaping, since "." is a regex wildcard

No fuss, no muss, and pretty much what String.Split() should do. Granted, the second argument to Regex.Split() is a regex pattern rather than a literal string (so delimiters containing special characters like ‘.’ need escaping, and multiple delimiters mean you’ll have to cook up a regex recipe) and it’s a static method, but it has the advantage of already existing in the framework and being a much, much cleaner API than x.Split().
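For what it’s worth, the thing I was gearing up to write to hide the mess would have been an extension method along these lines (the name SplitOn and the containing class are made up for the example):

public static class StringExtensions
{
    public static string[] SplitOn(this string input, params string[] delimiters)
    {
        // Tuck the string[]/StringSplitOptions awkwardness away and restore the snazzy params syntax.
        return input.Split(delimiters, StringSplitOptions.None);
    }
}

// Usage: var tokens = "Erik..Dietrich".SplitOn("..");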

Use in good health!

Feed Paper: Help Yourself To My Intellectual Property

The Pledge

When it comes to US intellectual property law, specifically patents as they pertain to software, I have a very scorched Earth philosophy. Don’t tweak it. Don’t reform it. Don’t debate over it. Don’t worry about it. Just delete it. Yes, to me, it’s like a nasty, tangled mass of bad code where it’s easier to get rid of it and address what breaks than it is to try to salvage it in its current form. I understand the need for patents on things such as medicines that have prohibitive startup investment cost, but when it comes to things like “gravity” effects on your cell phone, it’s just not worth having a system where established powerhouses can bury the little guy or blast away at each other and drive up costs, benefiting no one but lawyers. But, this post is not intended to spark a debate about ownership of ideas and intellectual property.

Instead, I’m putting my money where my mouth is. I’m going to give away what I think could be a pretty cool and viable idea. Why? Well, because I’m not exactly made of time, and I’d rather trade the small chance that I’d ever parlay this into a winning business idea for the slightly greater chance that someone might take it and run with it (or point me to it if it already exists). In other words, I want this thing enough that I just want someone to do it. Maybe one day I’d get around to it, but if you want to do it first, faster, and better, please be my guest.

The Turn

Every day in my feed reader, an entry from Alvin Ashcraft called “Dew Drop” appears. Here is one such entry. It’s a link collection that he dutifully fills out every day, organizing by category. In the linked entry, he has “Top Links” at, well, the top, followed by Web Development, XAML, etc, ending up eventually with a set of other link collections and a book. You know what this reminds me of? A newspaper. I mean, the “Top Links” is pretty obviously “Front Page”, and then you’ve got your .NET equivalent of Business, Metro, Sports, Politics, etc.

Morning Dew is a nice collection of links, as are some of the others also linked in there (I personally look at Jason Haley’s and the Morning Brew a lot). And, if those weren’t enough, I have my own collection of links. Pictured here, in alphabetical order, are the ones that fit on my screen:

As you can see, my screen just barely fits A through C. So, I have an alphabetized list of feeds that have some parenthetical number of unread articles, and some of those articles have links to even more articles, some of which I may already have in my reader, and others that I don’t. In some of these places, some of these things are summarized, and in others, I may just have to go with the title or the author. I think we can do better, so I propose “FeedPapyr”. Feedpaper.com was taken, so I bought the domain feedpapyr.us (see what I did there?) for 6 bucks.

What I envision is pretty simple. At its core, the site is just a feed reader. But, instead of having laundry lists of feeds, you create newspapers. You categorize the feeds into custom-defined sections and the reader handles truncating them to fit into a newspaper-style layout. If the posts have images with them, those are rendered as well. They are not formatted in full post form, but rather somewhat like the view on the right in my Google Reader screenshot, but like a newspaper. And, when you click on an article, it “pops out” in a focused, higher z-index window as you read it, and retreats back when you’re done with it (there’s no need to preserve the whole “continued on page 22” thing with the actual paper). Users can “thumb through” the paper, starting with their “Top Links” and delving into other sections if they have time. They can add to the size of their papers over the course of time, promote and demote things, and swap in and out content creators. I feel that this format would lend itself particularly well to consumption on tablets or ultrabooks.

A user can create an arbitrary number of papers, just as in real life one might subscribe to multiple newspapers. I personally would love this. I’d definitely have at least one programming paper, but probably also one about Chicago sports, one about science/technology, etc. It could have ads just like normal papers to finance it, and offer a “subscribe to avoid the ads” model as well (but financing is a little far afield right now). It would basically be a newspaper that was auto-formatted for the reader with custom content that was specifically chosen and subscribed to by the reader on a case by case basis.

The Prestige

It’s not normally my style to go buying up domain names on a whim, but feedpapyr.us is now in my possession. If this idea is something that interests you, please get in touch with me. I’m happy to brainstorm and collaborate but if you think it’s something you’d really like to run with, I’m perfectly happy to let you have the domain name (of course, maybe you don’t like that name and would rather have your own anyway). If this is all a moot point because something like this already exists and I’m just a clueless idiot, please, please point me at it because I would love it. At the end of the day, I’d like to see this get done and I don’t have a lot of time. Sooner or later, I’m going to build myself a “feedpapyr” because I just want one. But, if someone builds one first that I can use, so much the better.

Unit Testing DateTime.Now Without Isolation

My friend Paul pointed something out to me the other day regarding my post about TDD even when you have to discard tests. I believe that this trick was taken from the book Working Effectively With Legacy Code by Michael Feathers (though I haven’t yet read this one, so I can’t be positive).

I was writing some TDD tests surrounding the following production method:
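Picture, as a stand-in for the real thing, a hypothetical method that bakes DateTime.Now directly into its output (the class and the copyright-notice example here are illustrative assumptions, not the actual code):

public class ReportGenerator
{
    public string GetCopyrightNotice()
    {
        // Hard dependency on the system clock: any assertion about this string
        // is only good until the year rolls over.
        return string.Format("Copyright {0} Acme Inc.", DateTime.Now.Year);
    }
}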

The problem I was having is that any tests I write and check in will be good only through the end of 2012 and will essentially have an expiration date of Jan 1st, 2013.

What Paul pointed out is that I could refactor this to the following:
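Continuing the hypothetical example, the refactoring just pushes the clock access behind a virtual member:

public class ReportGenerator
{
    public string GetCopyrightNotice()
    {
        return string.Format("Copyright {0} Acme Inc.", GetCurrentYear());
    }

    // Virtual seam: production code still gets the real year by default.
    protected virtual int GetCurrentYear()
    {
        return DateTime.Now.Year;
    }
}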

And, once I’ve done that, I can introduce the following class into my test class:
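In this sketch, that class is nothing more than a trivial override of the new seam:

public class TestableReportGenerator : ReportGenerator
{
    private readonly int _currentYear;

    public TestableReportGenerator(int currentYear)
    {
        _currentYear = currentYear;
    }

    // Swap the non-deterministic clock for whatever year the test specifies.
    protected override int GetCurrentYear()
    {
        return _currentYear;
    }
}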

With all that in place, I can write a test that no longer expires:
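(MSTest syntax assumed, to match the rest of the hypothetical example:)

[TestMethod]
public void GetCopyrightNotice_Includes_The_Specified_Year()
{
    var generator = new TestableReportGenerator(2012);

    Assert.AreEqual("Copyright 2012 Acme Inc.", generator.GetCopyrightNotice());
}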

In this test, I use the new class that requires me to specify the current year. It overrides the base class’s virtual method, which uses DateTime.Now, in favor of the “current” year I’ve passed in, which has nothing to do with the non-deterministic quantity “Now”. As a result, I can TDD ’til the cows come home and check everything in so that nobody accuses me of having a Canadian girlfriend. In other words, I get to have my cake and eat it too.

Announcement for RSS Subscribers: Switching Subscription Service

I am going to be switching services for RSS and content update notifications. Up until now, I’ve been using FeedBurner, which is a free service that has been around since, I believe, 2004 or so and was bought out by Google in 2007. Since that acquisition, its best feature is that it is free. This is offset somewhat by pretty flaky statistics of subscriber counts and occasional burps in performance and function. However, a big drawback is that it seems as though Google might kill the service. Making matters worse is the fact that Google has done exactly nothing to illuminate the issue one way or the other, playing it close to the vest. “Don’t be evil, but you can be a little shady if you want.”

Other bloggers seem to be doing the same, and a FeedBurner bug starting yesterday telling all users that we’ve lost all of our subscribers is turning the trickle of departing FeedBurner users into an exodus. Well, one of my philosophies in life is that I like to depend as little as possible on other people and things, especially shady people and things. So, I am going to be hopping on that bandwagon and switching to FeedBlitz effective today or tomorrow (allowing enough time for this one last post to be sent out via FeedBurner). The good news is that this new service apparently allows readers to subscribe to posts through other media, such as receiving a tweet or a Facebook/Google+ post rather than the traditional email or RSS notification. Progress and all that.

I have already created my FeedBlitz account, and the migration guide from FeedBurner to this new service assures me that not one subscription will be interrupted… but, if your subscription is interrupted, that’s why. I intend to have a post Sunday night or Monday morning, and if you don’t see something by then and want to continue reading, you may need to resubscribe through your RSS reader. Thanks for reading and for your patience!

Update: If you are subscribed directly to the Feedburner URL for the blog or for comments, please re-subscribe using the buttons on the right. These will automatically redirect you to the new, current FeedBlitz feed and you can delete your old subscription. If you are subscribed through my site directly, you need not do anything. Feel free to email/comment if you aren’t sure.

Regions Are a Code Smell

There’s a blogger named Iris Classon who writes a series of posts called “stupid questions”, where she essentially posts a discussion-fueling question once per day. Recently, I noticed one there called “Should I Use Regions in my Code?”, and the discussion that ensued seemed to be one of whether regions are helpful in organizing code or whether they constitute a code smell.

For those of you who are not C# developers, a C# region is a preprocessor directive that the Visual Studio IDE (and probably others) uses to provide entry points for collapsing and hiding code. Eclipse lets you collapse methods, and so does VS, but VS also gives you this way of pre-defining larger segments of code to collapse as well. Here is a before and after look:
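The “before” is just a pair of named preprocessor markers around an arbitrary chunk of code, something like this sketch (class and members invented for illustration):

public class Customer
{
    #region Public Properties

    public string FirstName { get; set; }

    public string LastName { get; set; }

    #endregion
}

// When collapsed, Visual Studio hides everything between the markers and shows
// only the single line "Public Properties" -- that's the "after."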

(As an aside, if you use CodeRush, it makes the regions look much nicer when not collapsed — see image at the bottom of the post)

I have an odd position on this. I’ve gotten used to the practice because it’s simply been the coding standard on most projects that I’ve worked on in the last few years, and so I find myself doing it habitually even when working on my own stuff. But, I definitely think they’re a code smell. Actually, let me refine that. I think regions are more of a code deodorant or code cologne. When we get up in the morning, we shower and put on deodorant and maybe cologne/perfume before going about our daily business (most do, anyway). And not to be gross or cynical, but the reason that we do this is that we kind of expect that we’re going to naturally gravitate toward stinking throughout the day, and we’re engaging in some preventative medicine.

This is how I view regions in C# code, in a sense. Making them a coding standard or best practice of sorts is like teaching your children (developers, in the metaphor) that not bathing is fine, so long as they religiously wear cologne. So, in the coding world, you’re saying to developers, “Put in your regions first so that I don’t have to look at your unwieldy methods, haphazard organization and gigantic classes once you inevitably write them.” You’re absolving them of the responsibility for keeping their code clean by providing and, in fact, mandating a way to make it look clean without being clean.

So how do I justify my hypocrisy on this subject of using them even while thinking that they tend to be problematic? Well, at the heart of it lies my striving for Clean Code, following SRP, small classes, and above all, TDD. When you practice TDD, it’s pretty hard to write bloated classes with lots of overloads, long methods and unnecessary state. TDD puts natural pressure on your code to stay lean, compact and focused in much the same way that regions remove that same pressure. It isn’t unusual for me to write classes and region them and to have the regions with their carriage returns before and after account for 20% of the lines of code in my class. To go back to the hygiene metaphor, I’m like someone who showers quite often and doesn’t sweat, but still wears deodorant and/or cologne. I’m engaging in a preventative measure that’s largely pointless but does no harm.

In the end, I have no interest in railing against regions. I don’t think that people who make messes and use regions to hide them are going to stop making messes if you tell them to stop using regions. I also don’t think using regions is inherently problematic; it can be nice to be able to hide whole conceptual groups of methods that don’t interest you for the moment when looking at a class. But I definitely think it bears mentioning that, from a maintainability perspective, regions do not make your 800 or 8000 line classes any less awful and smelly than they would be in a language without regions.

Changing My Personal Coding Standards

Many moons ago, I blogged about creating a DX Core plugin and admitted that one of my main motivations for doing this was to automate conversion of my code to conform to a standard that I didn’t particularly care for. One of the conversions I introduced, as explained in that post, is that I like to prepend “my” as a prefix on local, method-level variables to distinguish them from method parameters (they’re already distinguished from class fields, which are prepended with an underscore). I think my reasoning here was and continues to be solid, but I also think that it’s time for me to say goodbye to this coding convention.

It will be tough to do, as I’ve been in the habit of doing this for years. But after a few weeks, or perhaps even days, I’m sure I’ll be used to the new way of doing things. But why do this? Well, I was mulling over a problem in the shower the other day when the idea first took hold. Lately, I’ve been having a problem where the “my” creeps its way into method parameters, thus completely defeating the purpose of this convention. This happens because, over the last couple of years, I’ve been relying ever more heavily on the “extract method” refactoring. Between CodeRush making this very easy and convenient, the Uncle Bob, clean-coding approach of “refactor ’til you drop”, and my preference for TDD, I constantly refactor methods, and this often results in what were local variables becoming method parameters but retaining their form-describing (misleading) “my”.
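A contrived illustration of the problem (the class and members here are made up for the example):

// Before extraction: "my" correctly flags a method-level variable.
public void CheckOut(Cart cart)
{
    var myTotal = cart.Items.Sum(item => item.Price);
    Console.WriteLine("Total: {0}", myTotal);
}

// After an "extract method" refactoring: myTotal is now a parameter, but the
// prefix still (misleadingly) advertises it as a local.
public void CheckOut(Cart cart)
{
    var myTotal = cart.Items.Sum(item => item.Price);
    PrintTotal(myTotal);
}

private void PrintTotal(decimal myTotal)
{
    Console.WriteLine("Total: {0}", myTotal);
}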

What to do? My first thought was “just be diligent about renaming method parameters”, but that clearly violates my philosophy that we should make the bad impossible. My second thought was to write my own refactoring and perform some behind-the-scenes black magic with the DXCore libraries, but that seems like complete overkill (albeit a fun thing to do). My third thought bonked me in the head seemingly out of nowhere: why not just stop using “my”?

I realized that the reasons I had for using it had been slowly phased out by changes in my approach to coding. I wanted to be able to tell instantly what scope a member was just by looking at it, but that’s pretty easy to do regardless of what you name things when you’re writing methods that are only a few lines long. It also becomes easy to tell the scope of things when you give longer, more descriptive names to locals instead of using constants. And techniques like command query separation make it rare that you need to worry about the scope of something before you alter it, since the small method’s purpose (alteration or querying) should be readily apparent. Add to that the fact that other people I collaborate with at times seem not to be a fan of this practice, and the reasons to do it have all kind of slipped away for me except for the “I’ve always done it that way” reason, which I abhor.

The lesson here for me and hopefully for anyone reading is that every now and then, it’s a good idea to examine something you do out of pure habit and decide whether that thing makes sense any longer. The fact that something was once a good idea doesn’t mean that it always will be.

There is No Such Thing as Waterfall

Don’t try to follow the waterfall model. That’s impossible. Instead, only try to realize the truth: there is no waterfall.

Waterfall In Practice Today

I often see people discuss, argue and debate the best approach or type of approach to software development. Most commonly, this involves a discussion of the merits of iterative and/or agile development versus the “more traditional waterfall approach.” What you don’t see nearly as commonly, but do see every now and then, is how the whole “waterfall” approach is based on a pretty fundamental misunderstanding, wherein the man (Royce) who coined the term and created the iconic diagram of the model was holding it up as a strawman to say (paraphrased) “this is how to fail at writing software — what you should do instead is iterate.” Any number of agile proponents may point out things like this, and it isn’t too hard to make the case that the waterfall development methodology is flawed and can be problematic. But I want to make the case that it doesn’t even actually exist.

I saw a fantastic tweet earlier from Energized Work that said “Waterfall is typically 6 months of ‘fun up front’, followed by ‘steady as she goes’ eventually ramping up to ‘ramming speed’.” This is perfect because it indicates the fundamental boom and bust, masochistic cycle of this approach. There is a “requirements phase” and a “design phase”, which both amount basically to “do nothing for a while.” This is actually pretty relaxing (although frustrating for ambitious and conscientious developers, as this is usually unstructured-unstructured time). After a few weeks or months or whatever of thumb-twiddling, development starts, and life is normal for a few weeks or months while the “chuck it over the wall” deadline is too far off for there to be any sense of how late or bad the software will turn out to be. Eventually, though, everyone starts to figure out that the answers to those questions are “very” and “very”, respectively, and the project kicks into a death march state and “rams” through the deadline over budget, under-featured, and behind schedule, eventually wheezing to some completion point that miraculously staves off lawyers and lawsuits for the time being.

This is so psychically exhausting to the team that the only possible option is 3 months of doing nothing, er, excuse me, requirements and design phase for the next project, to rest. After working 60 hour weeks and weekends for a few weeks or months, the developers on the team look forward to these “phases” where they come in at 10 AM, leave at 4 PM, and sit around writing “shall” a lot, drawing on whiteboards, and googling to see if UML has reached code generation Shangri La while they were imprisoned in their cubicles for the last few months. Only after this semi-vacation are they ready to start the whole chilling saga again (at least the ones that haven’t moved on to greener pastures).

Diving into the Waterfall

So, what actually happens during these phases, in a more detailed sense, and what right have I to be so dismissive of requirements and design phases as non-work? Well, I have experienced what I’m describing firsthand on any number of occasions, and found that most of my time is spent waiting and trying to invent useful things to do (if not supporting the previous release), but I realize that anecdotal evidence is not universally compelling. What I do consider compelling is that after these weeks or months of “work” you have exactly nothing that will ever be delivered to your end-users. Oh, you’ve spent several months planning, but when was the last time planning something was worked at anywhere near as much as actually doing something? When you were a high school or college kid and given class time to make “idea webs” and “outlines” for essays, how often was that time spent diligently working, and how often was that time spent planning what to do next weekend? It wasn’t until actual essay writing time that you stopped screwing around. And, while we like to think that we’ve grown up a lot, there is a very natural tendency to regress developmentally when confronted with weeks of time after which no real deliverable is expected. For more on this, see that ambitious side project you’ve really been meaning to get back into.

But the interesting part of this isn’t that people will tend to relax instead of “plan for months” but what happens when development actually starts. Development starts when the team “exits” the “design phase” on that magical day when the system is declared “fully designed” and coding can begin. In a way, it’s like Christmas. And the way it’s like Christmas is that the effect is completely ruined in the first minute that the children tear into the presents and start making a mess. The beautiful design and requirements become obsolete the second the first developer’s first finger touches the first key to add the first character to the first line of code. It’s inevitable.

During the “coding phase”, the developers constantly go to the architect/project manager/lead and say “what about when X happens — there’s nothing in here about that.” They are then given an answer and, if anyone has time, the various SDLC documents are dutifully updated accordingly. So, developers write code and expose issues, at which time requirements and design are revisited. These small cycles, iterations, if you will, continue throughout the development phase, and on into the testing phase, at which time they become more expensive, but still happen routinely. Now, those are just small things that were omitted in spite of months of designing under the assumption of prescience — for the big ones, something called a “change request” is required. This is the same thing, but with more emails and word documents and anger because it’s a bigger alteration and thus iteration. But, in either case, things happen, requirements change, design is revisited, code is altered.

Whoah. Let’s think about that. Once the coding starts, the requirements and design artifacts are routinely changed and updated, the code changes to reflect that, and then incremental testing is (hopefully) done. That doesn’t sound like a “waterfall” at all. That sounds pretty iterative. The only thing missing is involving the stakeholders. So, when you get right down to it, “Waterfall” is just dysfunctional iterative development where the first two months (or whatever) are spent screwing around before getting to work, where iterations are undertaken and approved internally without stakeholder feedback, and where delivery is generally late and over budget, probably by an amount in the neighborhood of the amount of time spent screwing around in the beginning.

The Take-Away

My point here isn’t to try to persuade anyone to alter their software development approach, but rather to try to clarify the discussion somewhat. What we call “waterfall” is really just a specifically awkward and inefficient iterative approach (the closest thing to an exception I can think of is big government projects where it actually is possible to freeze requirements for months or years, but then again, these fail at an absolutely incredible rate and will still be subject to the “oh yeah, we never considered that situation” changes, if not the external change requests). So there isn’t “iterative approach” and “waterfall approach”, but rather “iterative approach” and “iterative approach where you procrastinate and scramble at the end.” And, I don’t know about you, but I was on the wrong side of that fence enough times as a college kid that I have no taste left for it.
