DaedTech

Stories about Software


The Narrative of Mediocrity

I was defeated. Interested in getting off to a good start, I had overachieved in the course, working hard and studying diligently to make a good impression. And yet, when the first essay was returned to the class, mine had a big, fat B staring back at me, smug with the kind of curves that are refreshingly absent in a nice, crisp A. I didn’t understand how this had happened, and the fact that none of the other students had received As either was cold comfort. I’d sought to impress, but the teacher had put me in my place.

I got an A in that class. I actually don’t remember which class it was any longer because it happened in a number of them. It happened in high school, college, and graduate school. I started off a B student on subjectively-graded assignments and ‘improved’ steadily for the duration of the course until I wound up with an A. Many of my peers followed the same trajectory. It was a nice story of growth and learning. It was the perfect narrative…for the teacher.

What could be better than a fresh-faced crop of students, talented but raw, eager to learn, being humbled and improving under the teacher’s tutelage? It’s the secondary education equivalent of a Norman Rockwell painting. It gives the students humility, confidence, and a work ethic, and it makes the teacher look and feel great. Everyone wins, so really, what’s the harm in the fiction? So what if it’s a bit of a fabrication? Who cares?

Well, I did. I was a relentless perfectionist as a student and this sort of evaluation drove me nuts. I sought out explanations for early B’s in classes where this happened and found no satisfaction in the explanations. I raged against the system and eventually cynically undercut it, going out of my way to perform the same caliber of work in the before and after pictures, doggedly determined to prove conspiracy. My hypothesis was confirmed by my experiments–my grades improved even as my work did not–and my triumphant proof of conspiracy was met with collective yawns and eye-rolls by anyone who actually paused long enough to listen to me.

I learned a lesson as a child and young adult about the way the academic world worked. Upon graduation from college, I was primed to learn that the business world worked that way too.

The Career Train

There are a lot of weird symmetries, quirks, and even paradoxes in the field of macroeconomics. It’s truly a strange beast. Consider, for instance, the concept of inflation, wherein everyone gets more money and money becomes worth less, but not necessarily in completely equal proportions. We’re used to thinking of money in a zero-sum kind of sense–if I give you ten dollars, then I am ten dollars poorer and you are ten dollars richer. But through the intricacies of lending and meta-transactions surrounding money, we can conceive of a scheme where we start with ten dollars and each wind up with six dollars some time later. And so it goes in life–as time goes by, we all have more money (at least in lending-based market economies). If things get out of whack and everyone doesn’t have more money as time goes by, you have stagnation (or deflation). If things get out of whack the other way, you wind up with runaway inflation and market instability. The system works (or at least works best) when everyone gets a little more at a measured, predictable, and homogeneous pace.

The same thing seems to happen throughout our careers. We all start in the business world as complete initiates, worth only our entry-level paychecks, and we all trudge along throughout our careers, gradually acquiring better salaries, titles, accessories, and office locations. Like a nice but not-too-steep interest rate, people have an expectation of dependable, steady, slight gain throughout their career. Two promotions in your twenties is pretty reasonable. Managing a team by your mid-thirties. A nice office and a VP or director title in your later forties, and perhaps a C-level executive position of some kind when you’re in your fifties to sixties. On average, anyway. Some real go-getters might show their prodigious talents by moving that timeline up by five years or so, while some laggards might move it back by the same amount, topping out at some impressive but non-executive title.

Okay, so I know what you’re thinking. You want to shout “Mark Zuckerberg!” at me. Or something along those lines–some example of a disruptive entrepreneur that proves there is a different, less deterministic path. Sure, there is. People who opt out of the standard corporate narrative do so at large risk and large possible reward. Doing so means that you might be Zuckerberg or that Instagram guy, but it means that you’re a lot more likely to be working in your garage on something that goes nowhere while your friends are putting in their time in their twenties, getting to the best cubicles, offices, and corner offices a few years before you do. By not getting on the train when all your friends do, you’re going to arrive later and behind them–unless you luck out and are teleported there by the magic teleportation fairy of success.

So forget the Zuckerbergs and the people who opt out in the negative sense and never get back in. Here in corporate land, the rest of us are on a train, and there’s not a lot of variance in arrival times on trains. If you get right to the front of the train, you may get there a few minutes early, but that’s all the wiggle-room you get. The upside to this mode of transportation is that trains are comfortable, dependable, and predictable. A lot of people prefer to travel this way, and the broad sharing of cost and resources make it worth doing. It’s a sustainable, measured pace.

Everyone Meets Expectations

They don’t stop two trains on the track so that the people who are serious about going fast can sprint ahead to the next train. It may be good for a few, but it would enrage the many and throw the system out of whack. That applies to trains, and it applies to your performance reviews. The train runs on time, and the only question is whether you’re in the front (exceeds expectations) or the back (meets some expectations). If you’re perennially in front, you’ll get that C-level corner office at fifty; stay perennially in back, and you’ll just be the sales manager at fifty.

Seem cynical? If so, ask yourself this: why are there no office prodigies? In school, there were those kids who skipped a grade or who took Algebra with the eighth graders while their fellow seventh graders were in Pre-Algebra. There were people who took AP classes, aced their SATs, and who achieved great, improbable things. What happens to those outliers in the corporate world, if they don’t drop out and go the Zuckerberg route? Why is there no one talented enough to rocket through the corporate ranks the way there was in school? Doesn’t that seem odd? Doesn’t it seem like, by sheer odds, there should be someone who matches Zuckerberg as a twenty-something wunderkind CEO by coming up through the corporate world rather than budging back in from entrepreneur-land? Maybe just one, like, ever?

I would think so. I would think that corporate prodigies would exist, if I didn’t know better–if I didn’t know that the mechanism of corporate advancement was a train, a system designed to quite efficiently funnel everyone toward the middle. You might exceed expectations or fail to meet them at any given performance review, but on a long enough timeline, you meet expectations because everyone meets expectations. It’s the most efficient way to create a universal and comfortable narrative for everyone. That narrative is that all of everyone’s work and achievement through life has built toward something. That the corner office is the product of forty years of loyalty, dedication, and cleverness. After forty years of meeting expectations, you, too, can finally arrive.

This isn’t some kind of crazy conspiracy theory. This is transparently enforced via HR matrices. All across the nation and even the world, there are corporate policies in place saying that level six employees can’t receive two promotions before level seven employees receive one. It wouldn’t be fair to pay Suzy more than Steve since Steve has three more years of industry experience. Organizations, via a never-ending collection of superficially unrelated policies, rules, regulations, and laws, take a marathon and put it on a single-file people-mover.


Wither the Performance Review

So if I had a parallel experience with a manufactured narrative in school and the corporate world, how to explain grade-skippers and AP-takers? Simple. In school, the narrative occurs for the benefit of the teacher on the micro (single quarter or semester) level. In the work world, it occurs for everyone’s benefit for the rest of your working life.

So why do organizations bother with the awkward performance review construct? Well, in part because it’s necessary to make justifications about issues like pay, position, and promotions. If people receive titular, “career-advancing” promotions every three to four years, a review is necessary in the first year to tell them that they need to “get better at business” or something. Then in the second year, they can hear that they’re making “good strides at business,” followed in the third year by a hearty congratulations for “being great at business,” and, “really earning that promotion to worker IV.” Like a scout earning a merit badge, this manufactured narrative will be valued by the ‘earner’ because it supplies purpose to the past three years, even if the person being reviewed didn’t “get better at business” (whatever that means). But the other purpose is providing the narrative for the reviewers. If a reviewer’s reports started out “bad at business” and ‘improved’ under his tutelage, his own review narrative goes a lot better, and so on, recursively, up the chain. What a wonderful world where everyone is helping everyone get better at a very measured pace, steadily, over the course of everyone’s career.

But just as I railed against this concept in school, so do I now. I’ve never received sub-standard reviews. In general annual review parlance, mine have typically been “exceeds expectations but…” where “but” is some reason that I’m ‘not quite ready’ for a promotion or more responsibility just yet. Inevitably, this magically fixes itself.

So what if we did as Michael O. Church suggests and simply eliminate the performance reviews along these lines? Poof. Gone. I don’t know about you, but I might just find a “we’re not promoting you because that’s our policy” refreshingly honest as compared to a manufactured and non-actionably vague piece of ‘constructive’ criticism. (This is not to be confused with a piece of feedback like “your code should be more modular,” or, “you should deliver features more quickly,” both of which are specific, actionable, and perfectly reasonable critiques. But also don’t require some kind of silly annual ceremony where I find out if I’m voted onto Promotion Island or if I’ll have to play again next year.) I certainly don’t have an MBA, and I’m not an expert in organizational structuring and management, but it just seems to me as though we can do better than a stifling policy of funneling everyone toward the middle and manufacturing nonexistent deficiencies so that we can respond by manufacturing empty victories. I can only speak for myself, but you can keep the guaranteed trappings of ascending the corporate ladder if you just let me write my own story in which my reach exceeds my grasp.


Guerilla Guide to Developer Interviews

Over the course of my career I’ve done quite a number of technical interviews, and a pretty decent cross-section of them have ended in job offers or at least invitations to move on to the next step. That said, I am no expert and I am certainly no career coach, but I have developed some habits that seem pretty valuable for me in terms of approaching the interview process. Another important caveat here is that these are not tips to snag yourself an offer, but tips to ensure that you wind up at a company that’s as good a fit as possible. Sometimes that means declining an offer or even not getting one because you realize as you’re interviewing that it won’t be a good fit. On any of these, your mileage may vary.

So in no particular order, here are some things that you might find helpful if you’re throwing yourself out there on the market.

Avoid the Firehose

Programming jobs are becoming more and more plentiful, and, in response to that demand, and contrary to all conventional logic about markets, the supply of programmers is falling. If you work as a programmer, the several emails a week you get from recruiters stand in not-so-mute testimony to that fact. If you decide that it’s time to start looking and throw your resume up on Dice, Monster, and CareerBuilder, your voicemail will fill up, your home answering machine will stop working, and your email provider will probably throttle you or start sending everything to SPAM. You will be absolutely buried in attempts to contact you. Some of them will be for intern software tester; some of them will be for inside sales rep; some of them will be for super business opportunities with Amway; some of them won’t even be in your native language.


Once you do filter out the ones (dozens) that are complete non-starters, you’ll be left with the companies that have those sites on some kind of RSS or other digital speed dial, meaning that they do a lot of hiring. Now, there are some decent reasons that companies may do a lot of hiring, but there are a lot of not-so-decent reasons, such as high turnover, reckless growth, a breadth-over-depth approach to initial selection, etc. To put it in more relatable terms, imagine if you posted a profile on some dating site and within seconds of you posting it, someone was really excited to meet you. It may be Providence, but it also may be a bit worrisome.

The long and short of my advice here is that you shouldn’t post your resume immediately to sites like these. Flex your networking muscle a bit, apply to some appealing local companies that you’d like to work at, contact a handful of recruiters that you trust, and see what percolates. You can always hit the big boards later if no fish are biting or you start blowing through your savings, but if you’re in a position to be selective, I’d favor depth over breadth, so to speak.

Don’t Be Fake

When it comes time to do the actual interview, don’t adopt some kind of persona that you think the interviewers want to see. Be yourself. You’re looking to see whether this is going to be a fit or not, and while it makes sense to put your best foot forward, don’t put someone else’s best foot forward. If you’re a quiet, introverted thinker, don’t do your best brogrammer imitation because there’s a ping pong table in the other room and the interviewers are all 20-something males. You’re probably going to fail to fit in anyway, and even if you don’t, the cultural gulf is going to continue to exist once you start.

And above all, remember that “I don’t know” is the correct answer for questions to which you don’t know the answer. Don’t lie or try to fake it. The most likely outcome is that you look absurd and tank the interview when you could have saved yourself a bit of dignity with a simple, “I’m not familiar with that.” But even if this ruse somehow works, what’s the long-play here? Do you celebrate the snow-job you just pulled on the interviewer, even knowing that he must be an idiot (or an Expert Beginner) to have fallen for your shtick? Working for an organization that asks idiots to conduct interviews probably won’t be fun. Or perhaps the interviewer is perfectly competent and you just lucked out with a wild guess. In that case, do you want to hire on at a job where they think you’re able to handle work that you can’t? Think that’ll go well and you’ll make a good impression?

If you don’t know the answers to questions that they consider important, there’s a pretty decent chance you’d be setting yourself up for an unhappy stay even if you got the job. Be honest, be forthright, and answer to the best of your ability. If you feel confident enough to do so, you can always pivot slightly and, for instance, turn a question about the innards of a relational database to an answer about the importance of having a good DBA to help you while you’re doing your development work or something. But whatever you do, don’t fake it, guess, and pray.

Have the Right Attitude

One of the things I find personally unfortunate about the interview process is how it uniquely transports you back to waiting to hear whether or not you got into the college of your dreams. Were your SAT scores high enough? Did you play a varsity sport or join enough clubs? Did you have enough people edit your essays? Oh-gosh-oh-gee I hope they like me. Or, really, I hope I’m good enough.

Let me end the suspense for you. You are. The interview process isn’t about whether you’re good enough, no matter how many multiple choice questions you’re told to fill out or how much trivia an interviewer sends your way in rapid fire bursts of “would this compile!?” The interview process is ultimately about whether you and the company would be a good mutual fit. It isn’t just a process to help them determine if you’d be able to handle the work that they do. It’s also a chance for you to evaluate whether or not you’d like doing the work that they give you. Both parts are equally important.

So don’t look at it as you trying to prove yourself somehow. It’s more like going to a social event in an attempt to make friends than it is like hoping you’re ‘good’ enough for your favorite college. Do you want to hang out with the people you’re talking to for the next several years of your life? Do you have similar ideas to them as to what good software development entails? Do you think you’d enjoy the work? Do you like, respect, and understand the technologies they use? This attitude will give you more confidence (which will make you interview better), but it also sets the stage for the next point here.

Don’t Waste Your Questions

In nearly every interview that I’ve ever been a part of, there’s the time for the interviewer to assess your suitability as a candidate by asking you questions. Then there’s the “what questions do you have for me” section. Some people will say, “nothing — I’m good.” Those people, as any career site or recruiter will tell you, probably won’t get an offer. Others will take what I believe is fairly standard advice and use this time as an opportunity to showcase their good-question-asking ability or general sharpness. Maybe you ask impressive-sounding things like, “what’s your five-year plan,” or, “I have a passionate commitment to quality as I’m sure you do, so how do you express that?” (the “sharp question” and “question brag,” respectively).

I think it’s best to avoid either of those. You can really only ask a handful of questions before things start getting awkward or the interviewer has to go, so you need to make them count. And you’ll make them count most by asking things that you really want to know the answer to. Are you an ardent believer in TDD or agile methodologies? Ask about that! Don’t avoid it because you want it to be true and you want them to make an offer and you don’t want to offend them. Better to know now that you have fundamental disagreements with them than six months into the job when you’re miserable.

As an added bonus, your interviewer is likely to be a pretty successful, intelligent person. She’s probably got a fairly decent BS detector and would rather you ask questions to which you genuinely want to know the answers.

Forge your Questions in the Fires of Experience

So you’re going to ask real questions, but which questions to ask… My previous suggestion of “ones you want the answer to” is important, but it’s not very specific. The TDD/agile question previously mentioned is an example of one good kind of question to ask: a question which provokes an answer that interests you and gives you information about whether you’d like the job. But I’d take it further than this.

Make yourself a list of things you liked and didn’t like at previous jobs, and then start writing down questions that will help you ferret out whether the things you liked or didn’t will be true at the company where you’re interviewing. Did you like the way your last company provided you with detailed code reviews because it helped you learn? Ask what kinds of policies and programs they have in place to keep developers current and sharp. Did you not like the mess of interconnected dependencies bogging down the architecture of the code at your last stop? Ask them what they think of Singleton as a design pattern. (I kid, but only kind of.)

You can use this line of thinking to get answers to tough-to-ask questions as well. For instance, you’re not going to saunter into an interview and say, “So, how long before I can push my hours to second shift and stroll in at 2 PM?” But knowing things about a company like dress code, availability of flex hours, work-from-home policy, etc. is pretty valuable. Strategize about a way to ask about these things without asking–even during casual conversation. If you say something like “rush hour on route 123 out there seems pretty bad, how do people usually avoid it,” the next thing you hear will probably be about their flex hours policy, if the company has one.

Negative Bad, Zero-Sum Fine

Another piece of iconic advice that you hear is “don’t talk badly about your former/current employer.” I think that’s great advice to be on the safe side. I mean, if I’m interviewing you, I don’t want to hear how all of your former bosses have been idiots who don’t appreciate your special genius, nor do I want to hear juicy gossip about the people at your office. Staying upbeat makes a good impression.

That said, there is a more nuanced route you can travel, if you so choose, one that I think makes you a pretty strong candidate. If I’m interviewing you, I also know that your former positions aren’t all smiles and sunshine or you wouldn’t be sitting in front of me. When talking about past experience, you can go negative, but first go positive to cancel it out.

My current employer has some really great training programs, and I’ve enjoyed working with every project manager that I’ve been paired with. That’s contributed to me enjoying the culture–and feeling a sense of camaraderie, too. Of course, there were some things I might have done differently in our main code base, from an architectural perspective. I’d have liked to see a more testable approach and an IoC container, perhaps, but I realize that some things take time to change, especially in a legacy code base.

Now you’ve communicated that you recognize that the architectural approach to your code base was sub-optimal, but that you maintain a positive attitude in spite of that. Instead of the interviewer hearing, “man, those guys over there are procedural-code-writing cretins,” he hears, “some things were less than ideal, and I’d like them to improve, but I grow where I’m planted.”

Gather your Thoughts

After you’re done, stop and write down what you thought. I mean it. Walk out of the building, and in your car or on a nearby bench, plop down and write your impressions while they’re fresh in your mind. What did you like, what worries you, what questions should you follow up with, what specifics can you cite? Things will be fuzzy later, and this information is solid gold now.

Your brain is going to play weird tricks on you as time goes by and you’re considering an offer or the next round of interviews. Something that struck you as a red flag might be smoothed over in your mind as you grow increasingly tired of your job hunt. I know they said that they’re as waterfall as Niagara and proud, but I think the tone of voice and non-verbal cues might have indicated a willingness to go agile. You’ll fool yourself. You’ll talk yourself into things. That is, unless you write them down and bring them up as concerns the next time you talk with the company or a representative thereof.

Maintain Perspective

Interviewing is an inherently reductionist activity, both for you and for the company. Imagine if marriage worked like job interviews. The proposition would be put to you and your potential mates this way:

Alright, so you have about two or three cracks at this whole marriage thing before you’re too old for it, so take your time and make a good decision and all that, but do it really fast. You’re going to meet for lunch, a little Q&A, and then you’ll have just enough time to send a thank-you note before you hear thumbs up or thumbs down from your date. If it’s thumbs up, you have a few days to decide if the prenuptial agreement looks good, if you have similar opinions on when to have children and how many, yadda-yadda, and hurry up, and, “do you take this person to be your lawfully wedded, blah, blah, you may now kiss, etc., whatever, done.”

Think a few important details might get missed in that exchange? Think you might be left after an inexplicable rejection, stammering, “b-b-but I know how to cook and I really have a lot to offer… why… I just don’t get it.” It’s pretty likely. There are going to be a lot of bad decisions and the divorce rate will be pretty high.

Back to the interview process, just remember to keep your chin up. You might have interviewed for a job that had already been filled except for the detail of technically having to interview a second person. Maybe the CEO’s son got the job instead of you. Maybe you wore a gray suit and the man interviewing you hates the color gray with a burning passion. Maybe you had a lapse when talking about your WPF skills and said WCF, and someone thinks that makes you a moron. The list goes on, and it often makes no sense. It makes no sense in the way that you’ll look at a company’s website and see a weirdly blinking graphic and think it looks unprofessional and decide not to apply there. You make snap judgments, and so do they. It’s the name of the game. Don’t take it personally.


How to Keep Method Size Under Control

Do you ever open a source code file and see a method that starts at the top of your screen and kind of oozes its way to the bottom with no end in sight? When you find yourself in that situation, imagine that you’re reading a ticker tape and try to guess at where the method actually ends. Is it a foot below the monitor? Three feet? Does it plummet through the floor and into the basement, perhaps down past the water table and into the earth’s mantle?


Visualized like this, I think everyone might agree that there’s some point at which the drop is too far, though there’s likely some disagreement on where exactly this is. Personally, I used to subscribe to the “fits on a screen” heuristic and would only start looking to pull out methods if it got beyond that. But in more recent years, I think even smaller. How small? I dunno–five or six lines, max. Small enough that you’ll only ever see one try-catch or control flow statement in there. Yeah, seriously, that small. If you’re thinking it sounds kind of crazy, I get that, but give it a try for a while. I can almost guarantee that you’ll lose your patience for looking at methods that cause you to think, “wait, where was loopCounter declared again–before the second or third while loop?”

If you accept the premise that this is a good way to do things or that it might at least be worth a try, the first thing you’ll probably wonder is how to go about doing this from a practical standpoint. I’ve definitely encountered people and even whole groups who considered method sizes like this to be impractical. The first thing you have to do is let go of the notion that classes are in some kind of limited supply and you have to be careful not to use too many. Same with modules, if your project gets big enough. The reason I say this is that having small methods means that you’re going to have a lot of them. This in turn means that they’re going to need to be spread to multiple classes, and those classes will occupy more namespaces and modules. But that’s okay. If you encounter a large application that’s well designed and factored, it’s that way because the application is actually a series of small, focused components working together. Monolithic doesn’t scale well.

Getting Down to Business

If you’ve prepared yourself for the reality of needing more classes organized into more namespaces and modules, you’ve really overcome the biggest obstacle to being a small-method coder. Now it’s just a question of mechanics and practice. And this is actually important–it’s not sufficient to just say, “I’m going to write a lot of methods by stopping at the fifth line, no matter what.” I guarantee you that this is going to create a lot of weird cross-coupling, unnecessary state, and ugly things like out parameters. Nobody wants that. So it’s time to look to the art of creating abstractions.

As a brief digression, I’ve recently picked up a copy of Uncle Bob Martin’s Clean Code: A Handbook of Agile Software Craftsmanship and been tearing my way through it pretty quickly. I’d already seen most of the Clean Coder video series, which covers some similar ground, but the book is both a good review and a source of new and different information. To be blunt, if you’re ever going to invest thirty or forty bucks in getting better at your craft, this is the thing to buy. It’s opinionated, sometimes controversial, incredibly specific, and absolutely mandatory reading. It will change your outlook on writing code and make you better at what you do, even if you don’t agree with every single point in it (though I don’t find much with which to take issue, personally).

The reason I mention this book and series is that there is an entire section in the book about functions/methods, and two of its fundamental points are that (1) functions should do one thing and one thing only, and (2) that functions should have one level of abstraction. To keep those methods under control, this is a great place to start. I’d like to dive a little deeper, however, because “do one thing” and “one level of abstraction per function” are general instructions. It may seem a bit like hand-waving without examples and more concrete heuristics.

Extract Finer-Grained Details

What Uncle Bob is saying about mixed abstractions can be demonstrated in this code snippet:

public void OpenTheDoor()
{
    GrabTheDoorKnob();
    TwistTheDoorKnob();
    TightenYourBiceps();
    BendYourElbow();
    KeepYourForearmStraight();
}

Do you see what the issue is? We have a method here that describes (via sub-methods that are not pictured) how to open a door. The first two calls talk in terms of actions between you and the door, but the next three calls suddenly dive into the specifics of how to pull the door open in terms of actions taken by your muscles, joints, tendons, etc. These are two different layers of abstractions: one about a person interacting with his or her surroundings and the other detailing the mechanics of body movement. To make it consistent, we could get more detailed in the first two actions in terms of extending arms and tightening fingers. But we’re trying to keep methods small and focused, so what we really want is to do this:

public void OpenTheDoor()
{
    GrabTheDoorKnob();
    TwistTheDoorKnob();
    PullOpenTheDoor();
}

private static void PullOpenTheDoor()
{
    TightenYourBiceps();
    BendYourElbow();
    KeepYourForearmStraight();
}

Create Coarser-Grained Categories

What about a different problem? Let’s say that you have a method that’s long, but it isn’t because you are mixing abstraction levels:

public void CookQuesadilla()
{
    ChopOnions();
    ShredCheese();

    GetOutThePan();
    AddOilToPan();
    TurnOnTheStove();

    SprinkleOnionsAndCheeseOnTortilla();
    PutTortillaInPan();
    CookUntilFirm();
    FoldTortillaAndCookUntilBrown();
    FlipTortillaAndCookUntilBrown();
    RemoveCookedQuesadilla();

    RemovePanFromStove();
    ScrubPanWithBrush();
    ServeQuesadillas();
}

These items are all at the same level of abstraction, but there are an awful lot of them. In the previous example, we were able to tighten up the method by making the abstraction levels consistent, but here we’re going to actually need to add a layer of abstraction. This winds up looking a little better:

public void CookQuesadilla()
{
    PrepareIngredients();
    PrepareEquipment();
    PerformActualCooking();
    FinishUp();
}

private static void PrepareIngredients()
{
    ChopOnions();
    ShredCheese();
}
private static void PrepareEquipment()
{
    GetOutThePan();
    AddOilToPan();
    TurnOnTheStove();
}
private static void PerformActualCooking()
{
    SprinkleOnionsAndCheeseOnTortilla();
    PutTortillaInPan();
    CookUntilFirm();
    FoldTortillaAndCookUntilBrown();
    FlipTortillaAndCookUntilBrown();
    RemoveCookedQuesadilla();
}
private static void FinishUp()
{
    RemovePanFromStove();
    ScrubPanWithBrush();
    ServeQuesadillas();
}

In essence, we’ve created categories and put the actions from the long method into them. What we’ve really done here is create (or add to) a tree-like structure of methods. The public method is the root, and it had fourteen children. We gave it instead four children, and each of those children has between two and six children of its own. To tighten up methods, it’s perfectly viable to add “nodes” to the “tree” of your call stack. While “do one thing” is still a little elusive, this seems to be carrying us in that direction. There’s no individual method that you look at and think, “boy, that’s a lot of stuff going on.” Certainly it’s a matter of some art and taste, but this is probably a good way to think of it–organize stuff into hierarchical method categories until you look at each method and think, “I could probably memorize what that does if I needed to.”

Recognize that Control Flow Uses Up an Abstraction

So far we’ve been conceptually figuring out how to organize families of methods into well-balanced tree structures, and that’s taken us through some pretty mundane code. This code has involved none of the usual stuff that sends apps careening off the rails into bug land, such as conditionals, loops, assignment, etc. Let’s correct that. Looking at the code above, think of how you’d modify this to allow for the preparation of an arbitrary number of quesadillas. Would it be this?

public void CookQuesadillas(int numberOfQuesadillas)
{
    PrepareIngredients();
    PrepareEquipment();
    for(int i = 0; i < numberOfQuesadillas; i++)
        PerformActualCooking();
    FinishUp();
}

Well, that makes sense, right? Just like the last version, this is something you could read conversationally while in the kitchen just as easily as you do in the code. Prep your ingredients, then prep your equipment, then for some integer index equal to zero and less than the number of quesadillas you want to cook, increment the integer by one. Each time you do that, cook the quesadilla. Oh, wait. I think we just went careening into the nerdiest kitchen narrative ever. If Gordon Ramsay were in charge, he'd have strangled you with your apron for that. Hmm... how 'bout this?

public void CookQuesadillas(int numberOfQuesadillas)
{
    PrepareIngredients();
    PrepareEquipment();
    PerformActualCooking(numberOfQuesadillas);
    FinishUp();
}

private static void PerformActualCooking(int numberOfQuesadillas)
{
    for (int index = 0; index < numberOfQuesadillas; index++)
    {
        SprinkleOnionsAndCheeseOnTortilla();
        PutTortillaInPan();
        CookUntilFirm();
        FoldTortillaAndCookUntilBrown();
        FlipTortillaAndCookUntilBrown();
        RemoveCookedQuesadilla();
    }
}

Well, I'd say that the CookQuesadillas method looks a lot better, but do we like "PerformActualCooking?" The whole situation is an improvement, but I'm not a huge fan, personally. I'm still mixing control flow with a series of domain concepts. PerformActualCooking is still both a story about for-loops and about cooking. Let's try something else:

public void CookQuesadillas(int numberOfQuesadillas)
{
    PrepareIngredients();
    PrepareEquipment();
    PerformActualCooking(numberOfQuesadillas);
    FinishUp();
}

private static void PerformActualCooking(int numberOfQuesadillas)
{
    for (int index = 0; index < numberOfQuesadillas; index++)
        CookAQuesadilla();
}

private static void CookAQuesadilla()
{
    SprinkleOnionsAndCheeseOnTortilla();
    PutTortillaInPan();
    CookUntilFirm();
    FoldTortillaAndCookUntilBrown();
    FlipTortillaAndCookUntilBrown();
    RemoveCookedQuesadilla();
}

We've added a node to the tree that some might say is one too many, but I disagree. What I like is the fact that we have two methods that contain nothing but abstractions about the domain knowledge of cooking and we have a bridging method that brings in the detailed realities of the programming language. We're isolating things like looping, counting, conditionals, etc. from the actual problem solving and storytelling that we want to do here. So when you have a method that does a few things and you think about adding some kind of control flow to it, remember that you're introducing a detail to the method that is at a lower level of abstraction and should probably have its own node in the tree.

Adrift in a Sea of Tiny Methods

If you're looking at this cooking example, it probably strikes you that there are no fewer than eighteen methods in this class, not counting any additional sub-methods or elided properties (which are really just methods in C# anyway). That's a lot for a class, and you may think that I'm encouraging you to write classes with dozens of methods. That isn't the case. So far what we've done is started to create trees of many small methods with a public method and then a ton of private methods, which is a code smell called "Iceberg Class." What's the cure for iceberg classes? Extracting classes from them. Maybe you turn the first two methods that prepare ingredients and equipment into a "Preparer" class with two public methods, "PrepareIngredients" and "PrepareEquipment." Or maybe you extract a quesadilla cooking class.
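
To make that concrete, here's a minimal sketch of the "Preparer" extraction, assuming the same sub-methods as before; the class name and the choice to move the leaf methods along with their callers are my own illustration, not a prescription:

public class Preparer
{
    public void PrepareIngredients()
    {
        ChopOnions();
        ShredCheese();
    }

    public void PrepareEquipment()
    {
        GetOutThePan();
        AddOilToPan();
        TurnOnTheStove();
    }

    // The leaf methods move over from the original class along with the methods that call them.
    private void ChopOnions() { }
    private void ShredCheese() { }
    private void GetOutThePan() { }
    private void AddOilToPan() { }
    private void TurnOnTheStove() { }
}

The quesadilla class then holds a Preparer and delegates those first two calls to it, which shrinks the iceberg without losing any of the small methods.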

It's really going to vary based on your situation, but the point is that you take this opportunity to pick nodes in your growing tree of methods and sub-methods and convert them into roots by turning them into classes. And if doing this leads you to having what seems to be too many classes in your namespace? Create more namespaces. Too many of those in a module? Create more modules. Too many modules/projects in a solution? More solutions.

Here's the thing: the complexity exists no matter how many or few methods/classes/namespaces/modules/solutions you have. Slamming them all into monolithic constructs together doesn't eliminate or even hide that complexity, though many seem to take the ostrich approach and pretend that it does. Your code isn't somehow 'simpler' because you have one solution with one project that has ten classes, each with 300 methods of 7,000 lines. Sure, things look simple when you fire up the IDE, but they sure won't be simple when you try to debug. In fact, they'll be much more complicated because your functionality will be hopelessly interwoven with weird temporal couplings, ad-hoc references, and hidden dependencies.

If you create large trees of functionality, you have the luxury of making the structure of the tree the representative of the application's complexity, with each node an island of simplicity. It is in these node-methods that the business logic takes place and that getting things right is most important. And by managing your abstractions, you keep these nodes easy to reason about. If you structure the tree correctly and follow good OOP design and practice, you'll find that even the structure of the tree is not especially complicated since each node provides a good representative abstraction for its sub-tree.

Having small, readable, self-documenting methods is no pipe dream. Really, with a bit of practice, it's not even very hard. It just requires you to see code a little bit differently. See it as a series of hierarchical stories and abstractions rather than as a bunch of loops, counters, pointers, and control flow statements, and the people that maintain what you write, including yourself, will thank you for it.


Language Basics from Unit Tests

Let’s say that in a green field code base someone puts together a type that conceptually is a collection of non-integer values. For the sake of discussion, let’s call it a graph. A graph object might store a series of two-element tuples or perhaps a series of some value type like “point.” The graph might then perform operations on this data, such as IncreaseX() or IncreaseY() or Invert() or Divide()–operations that iterate through the points and do things to them. The actual mechanics of this don’t matter a whole lot. It’s the concept that’s important.

Now let’s say that in the graph the internal representation of the points is a floating point data type such as, well, float. I’m going to save the nuance of floating point arithmetic for a future practical math post, but suffice it to say that floats can exhibit some weird-seeming behavior when it comes to comparisons, truncation/rounding, certain kinds of casting and type representations, etc.

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Mind_Equals_Blown()
{
    float x = 0.2f;
    float y = 0.1f;
    float z = x + y;

    Assert.IsTrue(z == x + y);  //What the - why does this fail?!?
}

And let’s also say that the person responsible for authoring this graph class hasn’t read a practical math post about floating point arithmetic and is completely oblivious to these potential pitfalls.

And, finally, let’s say that this graph class becomes a mainstay of the business logic in a particular application. It’s modified, extended, and relied heavily upon without a whole lot of attention paid to its internal workings. At least until stuff mysteriously doesn’t work. But when that happens, the culprit isn’t immediately obvious, so strange work-arounds and cargo-cult, oddball solutions spring up when symptoms occur. Extension methods are written, and sometimes entirely different modules are added to the code base because the existing one is “tricky” or “not to be trusted.”

At the application level, this causes maintenance issues, a lot of heated and fruitless arguments, and voodoo approaches to code. From a user interface perspective, this causes quirky behavior. Occasionally a linear graph is completely displaced out of the graph and rendered on some menu somewhere, or the screen goes blank for a few seconds and then the display is restored. Defects and defect reports are created and developers dispatched to track down the issue, but after a few days of fruitless efforts, some project manager quietly sets the defect’s priority from “critical” to “cosmetic” and the software is shipped. It’s embarrassing, but whatcha gonna do. Ya know, computers have a mind of their own sometimes!


Catching it Early

What if, instead of doing things the old-fashioned but all-too-common way, the authors of this code had been writing unit tests and/or practicing TDD? Well, there’s a very good chance that the issue stemming from the graph library is caught immediately as its API methods are being fleshed out from a functionality perspective. There’s a good chance that someone is writing a test and gets to the point that we were at in the code sample above, where they are utterly dumbfounded as to why 1+1 does not equal 2 in float land.

And then, good things happen. The developer in question takes to Google or Stack Overflow, or perhaps he talks to other, more experienced developers on his team. He then gets an explanation, learns something about the language, and leaves the code in a correct state. Contrast this with the non-tested approach of “code it up, build a bad house on the bad foundation, and then ship the result because it’s too late.”
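
What does “a correct state” look like? One option (a sketch, assuming the MSTest overload of Assert.AreEqual that takes a delta; the test name here is mine) is to compare within a tolerance instead of relying on exact equality:

[TestMethod]
public void Mind_No_Longer_Blown()
{
    float x = 0.2f;
    float y = 0.1f;
    float z = x + y;

    // Compare floating point results within a small tolerance (delta)
    // rather than with bit-for-bit equality.
    Assert.AreEqual(x + y, z, 0.000001f);
}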

And what if the TDD/unit tests don’t expose this issue? Well, what they’ll do in either case is decouple the code base. So when the issue eventually does crop up via weird GUI behavior, it will be much easier to isolate. When it’s isolated, it will be much easier for the unit-test-savvy developers to write a test that exposes the defect to learn the lesson and fix the issue. It’s still a win.

The point about unit tests helping catch errors and leading to a more decoupled design is hardly controversial. But the benefits go beyond that. Unit tests provide a fast feedback loop for all points in the code base, which lends itself very well to poking and prodding things and experimenting. And that, in turn, leads to better understanding of not only the code, but also the language. If you can execute and get feedback on code extremely quickly, you’re much more likely to ask questions like, “I wonder what happens if I do x…” and then to do it and see. And that sort of experimentation, much like immersion in natural language, leads much more quickly to fluency.
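
As a concrete (and entirely hypothetical) illustration of that kind of experiment, a throwaway test answers a “what if” question in seconds:

[TestMethod]
public void What_Happens_When_A_Huge_Double_Becomes_A_Float()
{
    double huge = 1e300;

    // "I wonder what happens if I do x..." -- run it and see.
    float narrowed = (float)huge;

    // In C#, a double too large for float narrows to infinity rather than throwing.
    Assert.IsTrue(float.IsInfinity(narrowed));
}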


What Drives Waterfall Projects?

To start off the week, I have a satirical post about projects developed using the waterfall ‘methodology.’ (To understand the quotes, please see my post on why I don’t think waterfall is actually a methodology at all). I figured that since groups that use agile approaches and industry best practices have a whole set of xDD acronyms, such as TDD, BDD, and DDD, waterfall deserved a few of its own. So keep in mind that while this post is intended to be funny, I think there is a bit of relevant commentary to it.

Steinbeck-Driven Development (SDD)

For those of you who’ve never had the pleasure of reading John Steinbeck’s “Of Mice and Men,” any SDD practitioner will tell you that it’s a heartwarming tale of two friends who overcome all odds during the Great Depression, making it cross-country to California to start a rabbit petting zoo. And it’s that outlook on life that they bring to the team when it comes to setting deadlines, tracking milestones, and general planning.


Relentlessly optimistic, the SDD project manager reacts to a missed milestone by reporting to his superiors that everything is a-OK because the team will just make it up by the time they hit the next one. When the next milestone is missed by an even wider margin, the same logic applies. Like a shopping addict or degenerate gambler blithely saying, “you gotta spend money to make money,” this project manager will continue to assume on-time delivery right up until the final deadline passes with no end in sight. When that happens, it’s no big deal–they just need a week to tie up a few loose ends. When that week is up, it’ll just be one more week to tie up a few loose ends. When that week expires, they face reality. No, just kidding. It’ll just be one more week to tie up a few loose ends. After enough time goes by, members of the team humor him with indulgent baby talk when he says this: “sure it will, man, sure it will. In a week, everything will be great, this will all be behind us, and we’ll celebrate with steaks and lobster at the finest restaurant in town.”

Spoiler alert. At the end of Steinbeck’s novel, the idyllic rabbit farm exists only in the mind of one of the friends, shortly before he’s shot in the back of the head by the other, in an act that is part merciful euthanasia and part self-preservation. The corporate equivalent of this is what eventually happens to our project manager. Every week he insists that everything will be fine and that they’re pretty close to the promised land until someone puts the project out of its misery.

Shooting-Star-Driven Development (SSDD)

Steinbeck-Driven Development is not for everyone. It requires a healthy ability to live in deluded fantasy land (or, in the case of the novel, to be a half-wit). SSDD project managers are not the relentless optimists that their SDD counterparts are. In fact, they’re often pretty maudlin, having arrived at a PM post on a project that everyone knows is headed for failure and basically running out the clock until company bankruptcy or retirement or termination or something. These are the equivalents of gamblers that have exhausted their money and credit and are playing at the penny tables in the hopes that their last few bucks will take them on an unprecedented win streak. Or, perhaps more aptly, they’re like a lonely old toy-maker, sitting in his woodshop, hoping for a toy to come to life and keep him company.

This PM and his project are doomed to failure, so he rarely bothers with status meetings, creates a bare minimum of PowerPoint slides, and rarely ever talks about milestones. Even his Gantt charts have a maximum of three nested dependencies. It’s clear to all that he’s phoning it in. He knows it’s unlikely, but he pins his slim hope to a shooting star: maybe one of his developers will turn out to be the mythical 100x developer that single-handedly writes the customer information portal in the amount of time that someone, while struggling to keep a straight face, estimated it would take to do.

As projects go along and fall further and further behind schedule and the odds of a shooting star developer become more and more remote, the SSDD project manager increasingly withdraws. Eventually, he just kind of fades away. If Geppetto were a real-life guy, carving puppets and asking stars to make them real children, he’d likely have punched out in a 19th-century sanitarium. There are no happy endings on SSDD projects–just lifeless, wooden developers and missed deadlines.

Fear-Driven Development (FDD)

There is no great mystery to FDD projects. The fate of the business is in your hands, developers. Sorry if that’s a lot of pressure, but really, it’s in your hands.

The most important part of an FDD project is to make it clear that there will be consequences–dire consequences–to the business if the software isn’t delivered by such-and-such a date. And, of course, dire consequences for the business are pretty darned likely to affect the software group. So, now that everyone knows what’s at stake, it’s time to go out and get that complex, multi-tiered, poorly-defined application built in the next month. Or else.

Unlike most waterfall projects, FDD enters the death march phase pretty much right from the start of coding. (Other waterfall projects typically only start the death march phase once the testing phase is cancelled and the inevitability of missing the deadline is clear.) The developers immediately enter a primal state of working fourteen hours per day because their very livelihoods hang in the balance. And, of course, fear definitely has the effect of getting them to work faster and harder than they otherwise would, but it also has the side effect of making the quality lower. Depending on the nature of the FDD project and the tolerance level of the customers for shoddy or non-functional software, this may be acceptable. But if it isn’t, time for more fear. Consequences become more dire, days become longer, and weekends are dedicated to the cause.

The weak have nervous breakdowns and quit, so only the strong survive to quit after the project ends.

Passive-Aggressive-Driven Development (PADD)

One of the most fun parts of waterfall development is the estimation from ignorance that takes place during either the requirements or design days. This is where someone looks at a series of Visio diagrams and says, “I think this project will take 17,388.12 man-hours in the low-risk scenario and 18,221.48 in the high-risk scenario.” The reason I describe this as fun is because it’s sort of like that game you play where everyone guesses the number of gumballs in a giant jar of gumballs and whoever is closest without going over wins a prize. For anything that’s liable to take longer than a week, estimation in a waterfall context is a ludicrous activity that basically amounts to making things up and trying to keep a straight face as you convince yourself and others that you did something besides picking a random number.

Well, I broke this task up into 3,422 tasks and estimated each of those, so if they each take four hours, and everything goes smoothly when we try to put them all together with an estimate for integration of… ha! Just kidding! My guess is 10,528 hours–ten because I was thinking that it’d have to be five digits, the five because it’s been that many days that we’ve been looking at these Gantt charts and sequence diagrams, and twenty-eight because that was my number in junior high football. And you can’t bid one hour over me because I’m last to guess!

But PADD PMs suck all of the fun out of this style of estimation by pressuring the hours guessers (software developers) into retracting and claiming less time. But they don’t do it by showing anger–the aggression is indirect. When the developer says that task 1,024, writing the batch file import routine, will take approximately five hours, the PADD PM says, “Oh, wow. Must be pretty complicated. Jeez, I just assumed that a senior level developer could bang that out in no more than two. My bad.” Shamed, the developer retracts: “No, no–you’re right. I figured the EDI would be more complicated than it was, so I just realized that my estimate is actually two hours.”

Repeated in aggregate, the PADD PM is some kind of spectacular black belt/level 20/guru/whatever metric is used to measure PM productivity, because he just reduced the time to market by 60% before a single line of code was ever written. Amazing! Of course, talk at the beginning of the project is cheap. The real measure of waterfall project success is figuring out who to blame and getting others to absorb the cost when the project gets way behind schedule. And this is where the PADD master really shines.

To his bosses, he says, “man, I guess I just had too much faith in our guys–I mean, I know you hire the best.” To the developers, he says, “boy, your estimates seemed pretty reasonable to me, so I would have assumed that everything would be going on time if you were just putting in the hours and elbow grease… weird.” To the end-users/stakeholders, he says, “it’s strange, all of our other stakeholders who get us all of the requirements clearly and on time get their software on time–I wonder what happened here.”

There’s plenty of blame to go around, and PADD PMs make sure everyone partakes equally and is equally dissatisfied with the project.


Exception Handling Basics

The other day, I was reviewing some code, and I saw a series of methods conforming to the following (anti) ‘pattern’:

public class CustomerProcessor
{
    public void ProcessCustomer(Customer customer)
    {
        try
        {
            if (customer.IsActive)
                ProcessActiveCustomer(customer);
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

    private void ProcessActiveCustomer(Customer customer)
    {
        try
        {
            CheckCustomerName(customer);
            WriteCustomerToFile(customer);
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

    public void CheckCustomerName(Customer customer)
    {
        try
        {
            if (customer.Name == null)
                customer.Name = string.Empty;
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

    private void WriteCustomerToFile(Customer customer)
    {
        try
        {
            using (StreamWriter writer = new StreamWriter(@"C:\temp\customer.txt"))
            {
                writer.WriteLine(customer.Name);
            }
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }
}

Every method consisted of a try with the actual code of interest inside of it and then a caught general exception that was then thrown. As I looked more through the code base, it became apparent to me that this was some kind of ‘standard’ (and thus perhaps exhibit A of how we get standards wrong). Every method in the project did this.

If you’re reading this and don’t know why this is facepalm-worthy, please read on. If you’re well-versed in C# exceptions, this will probably be review for you.

Preserve the Stack Trace

First things (problems) first. When you throw exceptions in C# by using the keyword “throw” with some exception type, you rip a hole in the fabric of your application’s space-time–essentially declaring that if none of the code above you in the call stack knows how to handle the singularity you’re belching out, the application will crash. I use hyperbolic metaphor to prove a point. Throwing an exception is an action that jolts you out of the normal operation of your program by using a glorified “GOTO,” except that you don’t actually know where it’s going because that’s up to the code that called you.

When you do this, the .NET framework is helpful enough to package up a bunch of runtime information for troubleshooting purposes, including something called the “stack trace.” If you’ve ever seen a .NET (or Java) site really bomb out, you’ve probably seen one of these–it’s a bunch of code with line numbers that basically tells you, “A called B, which called C, which called D … which called Y, which called Z, which threw up and crashed your program.” When you throw an exception in C# the framework saves the stack trace that got you to the method in question. This is true whether the exception happens in your code or deep, deep within some piece of code that you rely on.

So, in the code above, let’s consider what happens when the code is executed on a machine with no C:\temp folder. The StreamWriter constructor is going to throw an exception indicating that the path in question is not found. When it does, you will have a nice exception that tells you ProcessCustomer called ProcessActiveCustomer, which called WriteCustomerToFile, which called new StreamWriter(), which threw an exception because you gave it an invalid path. Pretty neat, huh? You just have to drill into the exception object in the debugger to see all of this (or have your application configured to display this information in a log file, on a web page, etc.).

But what happens next is kind of a bummer. Instead of letting this exception percolate up somewhere, we trap it right there in the method, in our catch block, and immediately re-throw it with “throw ex;”. Now remember, when you throw an exception object this way, the stack trace is recorded at the point that you throw, and any previous stack trace is blown away. Instead of it being obvious that the exception originated in the StreamWriter constructor, it appears to have originated in WriteCustomerToFile. But wait, it gets worse. From there, the exception is trapped and re-thrown in ProcessActiveCustomer and then again in ProcessCustomer. Since every method in the code base has this boilerplate, every exception generated will percolate back up to main and appear to have been generated there.

To put this in perspective, you will never be able to see or record the stack trace for the point at which the exception was thrown. Now, in development that’s not the end of the world since you can set the debugger to break where thrown instead of handled, but for production logging, this is awful. You’ll never have the foggiest idea where anything is coming from.

How to fix this? It’s as simple as getting rid of the “throw ex;” in favor of just “throw;”, which re-throws the exception you caught and preserves the stack trace while passing it on to the next handler. Another alternative, should you wish to add more information when you throw, is to wrap what you caught in a new exception–“throw new Exception("some context", ex)”–passing the caught exception as the inner exception of the one you’re creating. The caught exception is preserved, intact, and can be accessed in debugging via the “InnerException” property of the one you’re now throwing.
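In isolation, the difference between the options looks something like this (a quick sketch reusing the file-writing method from above; the wrapper message is purely illustrative):

private void WriteCustomerToFile(Customer customer)
{
    try
    {
        using (StreamWriter writer = new StreamWriter(@"C:\temp\customer.txt"))
        {
            writer.WriteLine(customer.Name);
        }
    }
    catch (Exception ex)
    {
        //throw ex;                                               // resets the stack trace to this line--the real origin is lost
        //throw new Exception("Couldn't write the customer", ex); // new trace starts here, but the original lives on in InnerException
        throw;                                                    // re-throws with the original stack trace intact
    }
}

Applied to the whole class, the corrected version looks like this: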

using System;
using System.IO;

public class CustomerProcessor
{
    public void ProcessCustomer(Customer customer)
    {
        try
        {
            if (customer.IsActive)
                ProcessActiveCustomer(customer);
        }
        catch (Exception ex)
        {
            throw;
        }
    }

    private void ProcessActiveCustomer(Customer customer)
    {
        try
        {
            CheckCustomerName(customer);
            WriteCustomerToFile(customer);
        }
        catch (Exception ex)
        {
            throw;
        }
    }

    public void CheckCustomerName(Customer customer)
    {
        try
        {
            if (customer.Name == null)
                customer.Name = string.Empty;
        }
        catch (Exception ex)
        {
            throw;
        }
    }

    private void WriteCustomerToFile(Customer customer)
    {
        try
        {
            using (StreamWriter writer = new StreamWriter(@"C:\temp\customer.txt"))
            {
                writer.WriteLine(customer.Name);
            }
        }
        catch (Exception ex)
        {
            throw;
        }
    }
}

(It would actually be better here to remove the “Exception ex” altogether in favor of just “catch {”, but I’m leaving it in for illustration purposes.)

Minimize Exception-Aware Code

Now that the stack trace is going to be preserved, the pattern here isn’t actively hurting anything in terms of program flow or output. But that doesn’t mean we’re done cleaning up. There’s still a lot of code here that doesn’t need to be there.

In this example, consider that there are only two methods that can generate exceptions: ProcessCustomer (if passed a null reference) and WriteCustomerToFile (various things that can go wrong with file I/O). And yet, we have exception handling in every method, even methods that are literally incapable of generating them on their own. Exception throwing and handling is extremely disruptive and it makes your code very hard to reason about. This is because exceptions, as mentioned earlier, are like GOTO statements that whip the context of your program from wherever the exception is generated to whatever place ultimately handles exceptions. Oh, and the boilerplate for handling them makes methods hard to read.

The approach shown above is a kind of needlessly defensive approach that makes the code incredibly dense and confusing. Rather than a strafing, shock-and-awe show of force for dealing with exceptions, the best approach is to reason carefully about where they might be generated and how one might handle them. Consider the following rewrite:

using System;
using System.IO;

public class CustomerProcessor
{
    public void ProcessCustomer(Customer customer)
    {
        if (customer == null)
        {
            Console.WriteLine("You can't give me a null customer!");
            return; // guard clause: don't go on to dereference a null customer
        }
        try
        {
            ProcessActiveCustomer(customer);
        }
        catch (SomethingWentWrongWritingCustomerFileException)
        {
            Console.WriteLine("There was a problem writing the customer to disk.");
        }
    }

    private void ProcessActiveCustomer(Customer customer)
    {
        CheckCustomerName(customer);
        WriteCustomerToFile(customer);
    }

    public void CheckCustomerName(Customer customer)
    {
        if (customer.Name == null)
            customer.Name = string.Empty;
    }

    private void WriteCustomerToFile(Customer customer)
    {
        try
        {
            using (var writer = new StreamWriter(@"C:\temp\customer.txt"))
            {
                writer.WriteLine(customer.Name);
            }
        }
        catch (Exception ex)
        {
            throw new SomethingWentWrongWritingCustomerFileException("Ruh-roh", ex);
        }
    }
}

Notice that we only think about exceptions at the ‘endpoints’ of the little application. At the entry point, we guard against a null argument (and bail out) instead of handling it with an exception. As a rule of thumb, it’s better to handle validation by querying objects than by trying things and catching exceptions, both from a performance and from a readability standpoint.

The other point of external interaction where we think about exceptions is where we’re calling out to the filesystem. For this example, I handle any exception generated by stuffing it into a custom exception type and throwing that back to my caller. This is a practice that I’ve adopted so that I know at a glance when debugging whether it’s an exception I’ve previously reasoned about and am trapping or some new problem that’s leaking through unanticipated. YMMV on this approach, but the thing to take away is that I deal with exceptions as soon as they come to me from something beyond my control, and then not again until I’m somewhere in the program where I want to report things to the user. (In an actual application, I would handle things more granularly than simply catching Exception, opting instead to go as fine-grained as I needed to in order to provide meaningful reporting on the problem.)
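Since the rewrite references a custom exception type without showing it, here’s a minimal sketch of what it might look like. The name comes from the example above; the constructors are just the standard pattern for custom exceptions in C#:

public class SomethingWentWrongWritingCustomerFileException : Exception
{
    public SomethingWentWrongWritingCustomerFileException(string message)
        : base(message) { }

    // The inner exception hangs onto the original I/O failure so that the root
    // cause is still available (via InnerException) when this is caught upstream.
    public SomethingWentWrongWritingCustomerFileException(string message, Exception innerException)
        : base(message, innerException) { }
}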

Here it doesn’t seem to make a ton of difference, but in a large application it will–believe me. You’ll be much happier if your exception handling logic is relegated to the places in the app where you provide feedback to the user and where you call external stuff. In the guts of your program, this logic isn’t necessary if you simply take care to write code that doesn’t contain mistakes like null dereferences.

What about things like out of memory exceptions? Don’t you want to trap those when they happen? Nope. Those are catastrophic exceptions beyond your control, and all of the logging and granular worrying about exceptions in the world isn’t going to un-ring that bell. When these happen, you don’t want your process to limp along unpredictably in some weird state–you want it to die.

On the Lookout for Code Smells

One other meta-consideration worth mentioning here is that if you find it painful to code because you’re putting the same few lines of code in every class or every method, stop and smell what your code is trying to tell you. Having the same thing over and over is very much not DRY and not advised. You can spray deodorant on it with something like a code snippet, but I’d liken this to addressing a body odor problem by spraying yourself with cologne and then putting on a full body sweatsuit–code snippets for redundant code make things worse while hiding the symptoms.

If you really feel that you must have exception handling in every method, there are IL-weaving tools such as PostSharp that free you from the boilerplate while letting you retain the functionality and granularity you want. As a general rule of thumb, if you’re cranking out a lot of code and thinking, “there’s got to be a better way to do this,” stop and do some googling, because there almost certainly is.
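As a rough illustration of the underlying idea–a hand-rolled stand-in, not IL weaving and not PostSharp’s actual API–you could at least centralize the boilerplate in one place rather than repeating it everywhere:

using System;

public static class Guarded
{
    // Wraps any action in the logging/re-throw boilerplate exactly once,
    // so individual methods don't each need their own try/catch.
    public static void Run(Action action)
    {
        try
        {
            action();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex); // stand-in for whatever logging you actually use
            throw;                 // preserves the stack trace for callers
        }
    }
}

A call site then shrinks to Guarded.Run(() => WriteCustomerToFile(customer)), though the better move is usually to ask whether the handling needs to be there at all.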

Up or Not: Ambition of the Expert Beginner

In the last post, I talked about the language employed by Expert Beginners to retain their status at the top of a software development group. That post was a dive into the language mechanics of how Expert Beginners justify decisions that essentially stem from ignorance–and often laziness, to boot. They generally have titles like “Principal Engineer” or “Architect” and thus are in a position to argue policy decisions based on their titles rather than on any kind of knowledge or facts supporting the merits of their approach.

In the series in general, I’ve talked about how Expert Beginners get started, become established, and, most recently, about how they fend off new ideas (read: threats) in order to retain their status with minimal effort. But what I haven’t yet covered and will now talk about is the motivations and goals of the Expert Beginner. Obviously, motivation is a complex subject, and motivations will be as varied as individuals. But I believe that Expert Beginner ambition can be roughly categorized into groups and that these groups are a function of their tolerance for cognitive dissonance.

Wikipedia (and other places) defines cognitive dissonance as mental discomfort that arises from simultaneously holding conflicting beliefs. For instance, someone who really likes the taste of steak but believes that it’s unethical to eat meat will experience this form of unsettling stress as he tries to reconcile these ultimately irreconcilable beliefs. Different people have different levels of discomfort that arise from this state of affairs, and this applies to Expert Beginners as much as anyone else. What makes Expert Beginners unique, however, is how inescapable cognitive dissonance is for them.

An Expert Beginner’s entire career is built on a foundation of cognitive dissonance. Specifically, they believe that they are experts while outside observers (or empirical evidence) demonstrate that they are not. So an Expert Beginner is sentenced to a life of believing himself to be an expert while all evidence points to the contrary, punctuated by frequent and extremely unwelcome intrusions of that reality.

So let’s consider three classes of Expert Beginner, distinguished by their tolerance for cognitive dissonance and their paths through an organization.

Xenophobes (Low Tolerance)

An Expert Beginner with a low tolerance for cognitive dissonance is basically in a state of existential crisis, given that he has a low tolerance for the thing that characterizes his career. To put this more concretely, a marginally competent person, inaccurately dubbed “Expert” by his organization, is going to be generally unhappy if he has little ability to reconcile or accept conflicting beliefs. A more robust Expert Beginner has the ability to dismiss evidence against his ‘Expert’ status as wrong or can simply shrug it off, but not Xenophobe. Xenophobe becomes angry, distressed, or otherwise moody when this sort of thing happens.

But Xenophobe’s long term strategy isn’t simply to get worked up whenever something exposes his knowledge gap. Instead, he minimizes his exposure to such situations. This process of minimizing is where the name Xenophobe originates; he shelters himself from cognitive dissonance by sheltering himself from outsiders and interlopers that expose him to it.

If you’ve been to enough rodeos in the field of software development, you’ve encountered Xenophobe. He generally presides over a small group with an iron fist. He’ll have endless reams of coding standards, procedures, policies, rules, and quirky ways of doing things that are non-negotiable and soul-sucking. This is accompanied by an intense dose of micromanagement and insistence on absolute conformity in all matters. Nothing escapes his watchful eye, and his management generally views this as dedication or even, perversely, mentoring.

This practice of micromanagement serves double duty for Xenophobe. Most immediately, it allows him largely to prevent the group from being infected by any foreign ideas. On the occasion that one does sneak in, it allows him to eliminate it swiftly and ruthlessly to prevent the same perpetrator from doing it again. But on a longer timeline, the oppressive micromanagement systematically drives out talented subordinates in favor of malleable, uninterested ones who are fine with brainlessly banging out code from nine to five, asking no questions, and listening to the radio. Xenophobe’s group is the epitome of what Bruce Webster describes in his Dead Sea Effect post.

All that Xenophobe wants out of life is to preserve this state of affairs. Any meaningful change to the status quo is a threat to his iron-fisted rule over his little kingdom. He doesn’t want anyone to leave because that probably means new hires, which are potential sources of contamination. He will similarly resist external pushes to change the group and its mission. New business ventures will be labeled “unfeasible” or “not what we do.”

Most people working in corporate structures want to move up at some point. This is generally because doing so means higher pay, but it’s also because it comes with additional status perks like offices, parking spaces, and the mandate to boss people around. Xenophobe is not interested in any of this (beyond whatever he already has). He simply wants to come in every day and be regarded as the alpha technical expert. Moving up to management would result in whatever goofy architecture and infrastructure he’s set up being systematically dismantled, and his ego couldn’t handle that. So he demurs in the face of any promotion to project management or real management because even these apparently beneficial changes would poke holes in the Expert delusion. You’ll hear Xenophobe say things like, “I’d never want to take my hands off the keyboard, man,” or, “this company would never survive me moving to management.”

Company Men (Moderate Tolerance)

Company Man does not share Xenophobe’s reluctance to move into a line or middle management role. His comfort with this move results from being somewhat more at peace with cognitive dissonance. He isn’t so consumed with preserving the illusion of expertise at all costs that he’ll pass up potential benefits–he’s a more rational and less pathological kind of Expert Beginner.

Generally speaking, the path to a mid-level management position requires some comfort with cognitive dissonance, whether or not the manager came into power from the ranks of technical Expert Beginners. Organizations are generally shaped like pyramids, with executives at the top, a larger layer of management in the middle, and line employees at the bottom. The structure shares more than just shape with a pyramid scheme–it sells to the rank and file the idea that ascension to the top is inevitable, provided they work hard and serve those above them well.

The source of cognitive dissonance in the middle, however, isn’t simply the numerical impossibility that everyone can work their way up. Rather, the dissonance lies in the belief that working your way up has much to do with merit or talent. In other words, only the most completely daft would believe that everyone will inevitably wind up in the CEO’s office (or even in middle management), so the idea bought into by most is this: each step of the pyramid selects its members from the most worthy of the step below it. The ‘best’ line employees become line managers, the ‘best’ line managers become mid-level managers, and so on up the pyramid. This is a pleasant fiction for members of the company that, when believed, inspires company loyalty and often hard work beyond what makes rational sense for a salaried employee.

But the reality is that mid-level positions tend to be occupied not necessarily by the talented but rather by people who have stuck around the company for a long time, people who are friends with or related to movers and shakers in the company, people who put in long hours, people who simply and randomly got lucky, and people who legitimately get work done effectively. So while there’s a myth perpetuated in corporate America that ascending the corporate ‘ladder’ (pyramid) is a matter of achievement, it’s really more a matter of age and inevitability, at least until you get high enough toward the C-level that there simply aren’t enough positions for token promotions. If you don’t believe me, go look at LinkedIn and tell me that there isn’t a direct and intensely strong correlation between age and impressiveness of title.

So, to occupy a middle management position is almost invariably to drastically overestimate how much talent and achievement it took to get to where you are. That may sound harsh, but “I worked hard and put in long hours and eventually worked my way up to an office next to the corner office” is a much more pleasant narrative than “I stuck with this company, shoveled crap, and got older until enough people left to make this office more or less inevitable.” But what does all of this have to do with Expert Beginners?

Well, Expert Beginners that are moderately tolerant of cognitive dissonance have approximately the same level of tolerance for it as middle management, which is to say, a fair amount. Both sets manage to believe that their positions were earned through merit while empirical evidence points to them getting there by default and managing not to fumble it away. Thus it’s a relatively smooth transition, from a cognitive dissonance perspective, for a technical Expert Beginner to become a manager. They simply trade technical mediocrity for managerial mediocrity and the narrative writes itself: “I was so good at being a software architect that I’ve earned a shot and will be good at being a manager.”

The Xenophobe would never get to that point because asking him to mimic competence at a new skill-set is going to draw him way outside of his comfort zone. He views moving into management as a tacit admission that he was in over his head and needed to be promoted out of danger. Company Man has no such compunction. He’s not comfortable or happy when people in his group bring in outside information or threaten to expose his relative incompetence, but he’s not nearly as vicious and reactionary as Xenophobe, as he can tolerate the odd creeping doubt of his total expertise.

In fact, he’ll often alleviate this doubt by crafting an “up after a while” story for himself vis-a-vis management. You’ll hear him say things like, “I’m getting older and can’t keep slinging code forever–sooner or later, I’ll probably just have to go into management.” It seems affable enough, but he’s really planning a face-saving exit strategy. When you start out not quite competent and insulate yourself from actual competence in a fast-changing field like software, failure is inevitable. Company Man knows this on some subconscious level, so he plans and aspires to a victorious retreat. This continues as high as Company Man is able to rise in the organization (though non-strategic thinkers are unlikely to rise much above line manager, generally). He’s comfortable with enough cognitive dissonance at every level that he doesn’t let not being competent stop him from assuming that he is competent.

Master Beginners (High Tolerance)

If Xenophobes want to stay put and Company Men want to advance, you would think that the class of people who have high tolerance for and thus no problem with cognitive dissonance, Master Beginners, would chomp at the bit to advance. But from an organizational perspective, they really don’t. Their desired trajectory from an org chart perspective is somewhere between Xenophobe and Company Man. Specifically, they prefer to stay put in a technical role but to expand their sphere of influence, breadth-wise, to grow the technical group under their tutelage. Perhaps at some point they’d be satisfied to be CTO or VP of Engineering or something, but only as long as they didn’t get too far away from their domain of ‘expertise.’

Master Beginners are utterly fascinating. I’ve only ever encountered a few of these in my career, but it’s truly a memorable experience. Xenophobes are very much Expert Beginners by nurture rather than nature. They’re normal people who backed their way into a position for which they aren’t fit and thus have to either admit defeat (and, worse, that their main accomplishment in life is being in the right place at the right time) or neurotically preserve their delusion by force. Company Men are also Expert Beginners by nurture over nature, though for them it’s less localized than Xenophobes. Company Men buy into the broader lie that advancement in command-and-control bureaucratic organizations is a function of merit. If a hole is poked in that delusion, they may fall, but a lot of others come with them. It’s a more stable fiction.

But Master Beginners are somehow Expert Beginners by nature. They are the meritocratic equivalent of sociopaths in that their incredible tolerance for cognitive dissonance allows them glibly and with an astonishing lack of shame to feign expertise when doing so is preposterous. It appears on the surface to be completely stunning arrogance. A Master Beginner would stand up in front of a room full of Java programmers, never having written a line of Java code in his life, and proceed to explain to them the finer points of Java, literally making things up as he went. But it’s so brazen–so utterly beyond reason–that arrogance is not a sufficient explanation. It’s like the Master Beginner is a pathological liar of some kind (though he’s certainly also arrogant.) He most likely actually believes that he knows more about subjects he has no understanding of than experts in those fields because he’s just that brilliant.

This makes him an excellent candidate for Expert Beginnerism both from an external, non-technical perspective and from a rookie perspective. To put it bluntly, both rookies and outside managers listen to him and think, “wow, that must be true because nobody would have the balls to talk like that unless they were absolutely certain.” This actually tends to make him better at Expert Beginnerism than his cohorts who are more sensitive to cognitive dissonance, roughly following the psychological principle described by Walter Langer:

People will believe a big lie sooner than a little one. And if you repeat it frequently enough, people will sooner or later believe it.

So the Master Beginner’s ambition isn’t to slither his way out of situations where he might be called out on his lack of expertise–he actually embraces them. The Master Beginner is utterly unflappable in his status as not just an expert, but the expert, completely confident that things he just makes up are more right than things others have studied for years. Thus the Master Beginner seeks to expand aggressively. He wants to grow the department and bring more people under his authority. He’ll back down from no challenge to his authority from any angle, glibly inventing things on the spot to win any argument, pivoting, spinning, shouting, threatening–whatever the situation calls for. And he won’t stop until everyone hails him as the resident expert and does everything his way.

Success?

I’ve talked about the ambitions of different kinds of Expert Beginners and what drives them to aspire to these ends. But a worthwhile question to ask is whether or not they tend to succeed and why or why not. I’m going to tackle the fate of Expert Beginners in greater detail in my next post on the subject, but the answer is, of course, that it varies. What tends not to vary, however, is that Expert Beginner success is generally high in the short term and drops to nearly zero on a long enough time line, at least in terms of their ambitions. In other words, success as measured by Expert Beginners themselves tends to be somewhat ephemeral.

It stands to reason that being deluded about one’s own competence isn’t a viable, long-term success strategy. There is a lesson to be learned from the fate of Expert Beginners in general, which is that better outcomes are more likely if you have an honest valuation of your own talents and skills. You can generally have success on your own terms through the right combination of strategy, dedication, and earnest self-improvement, but to improve oneself requires a frank and honest inventory of one’s shortcomings. Anything short of that, and you’re simply advancing via coincidence and living on borrowed time.

Edit: The E-Book is now available. Here is the publisher website which contains links to the different media for which the book is available.

How Stagnation is Justified: Language of the Expert Beginner

So far in the “Expert Beginner” series of posts, I’ve chronicled how Expert Beginners emerge and how they wind up infecting an entire software development group. Today I’d like to turn my attention to the rhetoric of this archetype in a software group already suffering from Expert Beginner-induced rot. In other words, I’m going to discuss how Expert Beginners deeply entrenched in their lairs interact with newbies to the department.

It’s no accident that this post specifically mentions the language, rather than interpersonal interactions, of the Expert Beginner. The reason here is that the actions aren’t particularly remarkable. They resemble the actions of any tenured employee, line manager or person in a position of company trust. They delegate, call the shots, set policy, and probably engage in status posturing where they play chicken with meeting lateness or sit with their feet on the table when talking to those of lesser organizational status. Experts and Expert Beginners are pretty hard to tell apart based exclusively on how they behave. It’s the language that provides a fascinating tell.

Most people, when arguing a position, will cite some combination of facts and deductive or inductive reasoning, perhaps with the occasional logical fallacy sprinkled in by mistake. For instance, “I left the windows open because I wanted to air out the house and I didn’t realize it was supposed to rain,” describes a choice and the rationale for it with an implied mea culpa. The Expert Beginner takes a fundamentally different approach, and that’s what I’ll be exploring here.

False Tradeoffs and Empty Valuations

If you’re cynical or intelligent with a touch of arrogance, there’s an expression you’re likely to find funny. It’s a little too ubiquitous for me to be sure who originally coined the phrase, but if anyone knows, I’m happy to amend and offer an original source attribution. The phrase is, “Whenever someone says ‘I’m not book smart, but I’m street smart,’ all I hear is, ‘I’m not real smart, but I’m imaginary smart.'” I had a bit of a chuckle the first time I read that, but it’s not actually what I, personally, think when I hear someone describe himself as “street smart” rather than “book smart.” What I think is being communicated is “I’m not book smart, and I’m sort of sensitive about that, so I’d like that particular valuation of people not to be emphasized by society.” Or, more succinctly, “I’m not book smart, and I want that not to be held against me.”

“Street smart” is, at its core, a counterfeit status currency proffered in lieu of a legitimate one. It has meaning only in the context of it being accepted as a stand-in for the real McCoy. If I get the sense that you’re considering accepting me into your club based on the quantity of “smarts” that I have, and I’m not particularly confident that I can come up with the ante, I offer you some worthless thing called “street smarts” and claim that it’s of equal replacement value. If you decide to accept this currency, then I win. And, interestingly, if enough other people decide to accept it, then it becomes a real form of currency (which I think it’d be pretty easy to argue that “street smart” has).

Whatever you may think of the “book smart vs. street smart” dichotomy, you’d be hard-pressed to argue that the transaction doesn’t follow the pattern of “I want X,” “I don’t have that, but I have Y (and I’m claiming Y is just as good).” And understanding this attempted substitution is key to understanding one of the core planks of the language of Expert Beginners. They are extremely adept at creating empty valuations as stand-ins for meaningful ones. To see this in action, consider the following:

  1. Version control isn’t really that important if you have a good architecture where two people never have to touch the same file.
  2. We don’t write unit tests because our developers spend extra time inspecting the code after they’ve written it.
  3. Yeah, we don’t do a lot of Java here, but you can do anything with Perl that you can with Java.
  4. Our build may not be automated, but it’s very scientific and there’s a lot of complicated stuff that requires an expert to do manually.
  5. We don’t need to be agile or iterative because we write requirements really well.
  6. We save a lot of money by not splurging on productivity add-ins and fancy development environments, and it makes our programmers more independent.

In all cases here, the pattern is the same. The Expert Beginner takes something that’s considered an industry standard or best practice, admits to not practicing it, and offers instead something completely unacceptable (or even nonsensical/made up) as a stand-in, implying that you should accept the switch because they say so.

Condescension and Devaluations

This language tactic is worth only a brief mention because it’s pretty obvious as a ploy, and it squanders a lot of realpolitik capital in the office if anyone is paying attention. It’s basically the domain-specific equivalent of some idiot being interviewed on the local news, just before dying in a hurricane, saying something like “I’m not gonna let a buncha fancy Harvard science-guys tell me about storms–I’ve lived here for forty years and I can feel ’em comin’ in my bones. If I need to evacuate, I’ll know it!”

In his fiefdom, an Expert Beginner is obligated to offer some explanation for ignoring best practices, however improbable, that at least rises to the level of sophistry. This is where last section’s false valuations shine. Simply scoffing at best practices and new ideas has to be done sparingly, or upper management will start to notice and create uncomfortable situations. And besides, this reaction is frankly beneath the average Expert Beginner–it’s how a frustrated and petulant Novice would react. Still, it will occasionally be trotted out in a pinch, and it can be effective in that usage scenario since it requires no brain cells and will just be interpreted as passion rather than intellectual laziness.

The Angry Driver Effect

If you ever watch a truly surly driver on the highway, you’ll notice an interesting bit of irritable cognitive bias against literally everyone else on the road. The driver will shake her fist at motorists passing her, calling them “maniacs,” while shaking the same fist at those going more slowly, calling them “putzes.” There’s simply no pleasing her.

An Expert Beginner employs this tactic with all members of the group as well, although without the anger. For example, if she has a Master’s degree, she will characterize solutions put forth by those with Bachelor’s degrees as lacking formal polish, while simultaneously characterizing those put forth by people with PhDs as overly academic or out of touch. If a solution different from hers is presented by someone who also has a Master’s, she will simply pivot to another subject.

Is your solution one that she understands immediately? Too simplistic. Does she not understand it? Over-engineered and convoluted. Are you younger than her? It’s full of rookie mistakes. Older? Out of touch and hackneyed. Did you take longer than it would have taken her? You’re inefficient. Did it take you less time? You’re careless. She will keep pivoting, as needed, ad infinitum.

Taken individually, any one of these characterizations makes sense and impresses. In a way, it’s like the cold-reading game that psychics play. Here the trick is to identify a personal difference and characterize it, and anything produced by its owner, as negative. The Expert Beginner controls the location of the goalposts via framing in the same way that the psychic rattles off a series of ‘predictions’ until one is right, as evidenced by micro-expressions. The actual subtext is, “I’m in charge and I get to define good and bad, so good is me, and some amount less good is you.”

Interestingly, the Expert Beginner’s definition of good versus bad is completely orthogonal to any external characterizations of the same. For instance, if the Expert Beginner had been a C student, then, in her group, D students would be superior to A students because of their relative proximity to the ideal C student. The D students might be “humble, but a little slow,” while A students would be “blinded by their own arrogance,” or some such thing. It’s completely irrelevant that society at large considers A students to be of the most value.

Experts are Wrong

Given that Expert Beginners are of mediocre ability by definition, the subject of expertise is a touchy one. Within the small group, this isn’t really a problem since the Expert Beginner is the designated Expert there by definition. But within a larger scope, actual Experts exist, and they do present a problem–particularly when group members are exposed to them and realize that discrepancies exist.

For instance, let’s say that an Expert Beginner in a small group has never bothered with source control for code, due to laziness and a simple lack of exposure. This decision is likely settled case-law within the group, having been justified with something like the “good architecture” canard from the Empty Valuations section. But if any group member watches a Pluralsight video or attends a conference which exposes them to industry experts and best practices, the conflict becomes immediately apparent and will be brought to the attention of the reigning Expert Beginner. In the last post, I made a brief example of an Expert Beginner reaction to such a situation: “you can’t believe everything you see on TV.”

This is the simplest and most straightforward reaction to such a situation. The Expert Beginner believes that he and his ‘fellow’ Expert have a simple difference of opinion among ‘peers.’ It may be true that one Expert speaks at conferences about source control best practices while the other one runs the IT for Bill’s Bait Shop and has never used source control, but, as far as he’s concerned, either opinion is just as valid as the other. On a long enough timeline, though, this false relativism falls apart due to mounting disagreement between the Expert Beginner and real Experts.

When this happens, the natural bit of nuance that Expert Beginners introduce is exceptionalism. Rather than saying, “well, source control or not, either one is fine,” and risk looking like the oddball, the Expert Beginner invents a mitigating circumstance that would not apply to other Experts, effectively creating an argument that he can win by forfeit. (None of his opponents are aware of his existence and thus offer no counter-argument.) For instance, the Bait Shop’s Expert Beginner might say, “sure, those Experts are right that source control is a good idea in most cases, but they don’t understand the bait industry.”

This is a pretty effective master-stroke. The actual Experts have been dressed down for their lack of knowledge of the bait industry while the Expert Beginner is sitting pretty as the most informed one of the bunch. And, best of all, none of the actual Experts are aware of this argument, so none of them will bother to poke holes in it. Crisis averted.

All Qualitative Comparisons Lead Back to Seniority

A final arrow in the Expert Beginner debate quiver is the simple tactic of non sequitur about seniority, tenure, or company experience. On the surface this would seem like the most contrived and least credible ploy possible, but it’s surprisingly effective in corporate culture, where seniority is the default currency in the economy of developmental promotions. Most denizens of the corporate world automatically assign value and respect to “years with the company.”

Since there is no bigger beneficiary of this phenomenon than an Expert Beginner, he plows investment into it in an attempt to drive the market price as high as possible. If you ask the Expert Beginner why there is no automated build process, he might respond with something like, “you’ll understand after you’ve worked here for a while.” If you ask him this potentially embarrassing question in front of others, he’ll up the ante to “I asked that once too when I was new and naive–you have a lot to learn,” at which time anyone present is required by corporate etiquette to laugh at the newbie and nervously reaffirm that value is directly proportional to months employed by Acme Inc.

The form and delivery of this particular tactic will vary a good bit, but the pattern is the same at a meta-level. State your conclusion, invent a segue, and subtly remind everyone present that you’ve been there the longest. “We tried the whole TDD thing way back in 2005, and I think all of the senior developers and project managers know how poorly that went.” “Migrating from VB6 to something more modern definitely sounds like a good idea at first, but there are some VPs you haven’t met that aren’t going to buy that one.”

It goes beyond simple non sequitur. This tactic serves as a thinly veiled reminder as to who calls the shots. It’s a message that says, “here’s a gentle reminder that I’ve been here a long time and I don’t need to justify things to the likes of you.” Most people receive this Expert Beginner message loudly and clearly and start to join in, hopeful for the time they can point the business end at someone else as part of the “Greater Newbie Theory.”

Ab Hominem

In the beginning of this post, I talked about the standard means for making and/or defending arguments (deductive or inductive reasoning) and how Expert Beginners do something else altogether. I’ve provided a lot of examples of it, but I haven’t actually defined it. The central feature of the Expert Beginner’s influence-consolidation language is an inextricable fusing of arguer and argument, which is the polar opposite of standard argument form. For instance, it doesn’t matter who says, “if all humans have hearts, and Jim is a human, then Jim has a heart.” The argument stands on its own. But it does matter who says, “Those of us who’ve been around for a while would know why not bothering to define requirements is actually better than SCRUM.” That argument is preposterous from an outsider or a newbie but acceptable from an Expert Beginner.

A well-formed argument says, “if you think about this, you’ll find it persuasive.” The language of the Expert Beginner says, “it’s better if you don’t think about this–just remember who I am, and that’s all you need to know.” This can be overt, such as with the seniority dropping, or it can be more subtle, such as with empty valuations. It can also be stacked so that a gentle non sequitur can be followed with a nastier “get off of my lawn” type of dressing down if the first message is not received.

In the end, it all makes perfect sense. Expert Beginners arrive at their stations by default rather than through merit. As such, they have basically no practice at persuading anyone to see the value of their ideas or at demonstrating the superiority of their approach. Instead, the only thing they can offer is the evidence that they have of their qualifications–their relative position of authority. And so, during any arguments or explanations, all roads lead back to them, their position titles, their time with the company, and the fact that their opinions are inviolate.

If you find yourself frequently making arguments along the lines of the ones that I’ve described here, I’d suggest putting a little more thought and effort into them from now on. No matter who you are or how advanced you may be, being able to defend your opinions and approaches is an invaluable skill, and it should be kept as sharp as possible. You’ll often learn just as much from justifying your approach as from formulating it in the first place. If you’re reading this article, it’s pretty unlikely that you’re an Expert Beginner. And, assuming that you’re not, you probably want to make sure people don’t confuse you with one.

Next: “Up or Not: Ambition of the Expert Beginner”

Edit: The E-Book is now available. Here is the publisher website which contains links to the different media for which the book is available.

How We Get Coding Standards Wrong

The other day, I sat in on a meeting where a large-ish group was discussing “standards” for their particular area of software development. I have the word standards in quotes because, by design, there wasn’t a clear definition as to what sorts of standards they would be; it was an open-ended exercise. The standard could cover anything from variable casing to development practices and principles to holistic approaches. When asked for my input, I was sort of bemused by the process, and I said that I didn’t really have much in the way of an answer. I declined to elaborate much more on that since I wasn’t interested in derailing the meeting in any way, but it did get me to thinking about why the exercise seemed sort of futile to me.

I generally have a natural leeriness when it comes to coding and development standards, and especially to activities designed to flesh those out, and in this post I’d like to explore why. It isn’t that I don’t believe standards should exist or that I believe they aren’t important. It’s just that I think we frequently miss the point, creating standards out of some sense that it’s The Right Thing and thus winding up with standards that are pointless or even detrimental.

Standards by Committee Anti-Pattern

One problem with defining standards in a group setting is that any group containing some socially savvy people is going to gravitate toward diplomacy. Contentious and arbitrary subjects (so-called “religious wars”) like camel case versus Pascal case or where the bracket after a function goes will be avoided in favor of things upon which a consensus may be reached. But think about what’s actually happening–everyone’s agreeing that the things that everyone already does should be standardized. This is a fairly vacuous exercise in bureaucracy, useful only in the hypothetical realm where a new person comes on board and happens to disagree with something upon which twenty others agree.

People doing this are solving a problem that doesn’t exist: “how do we make sure everyone does this the same way when everyone’s currently doing it the same way?” It also tends to favor documenting current process rather than thinking critically about ideal process.

Let’s capture all of the stuff that we all do and write it down. Okay, so, coding standards. When working on a .NET project, first drive to the office. Then, have your keycard ready to get in the building. Next, enter the building…

Obviously this is silly, but hopefully the point hits home. The simple fact that you do something or that everyone in the group does something doesn’t mean that it’s worth capturing as trainable knowledge and enforcing on the group. And yet this is a direction I frequently see groups take as they get into a groove of “yes, and” when discussing standards. It can just turn into “let’s make a list of everything we do.”

Pointless Homogeneity

The concept of capturing the intersection of everyone’s approach and coding style dovetails into another problem with groups hashing out standards: a group-think bias. Slightly different from the notion that everything common should be documented, this is the notion that everything should be common. For instance, I once worked in a shop where developers were all mandated to use the same diff tool. I’m not kidding. If anyone bothered with a justification for this, I don’t recall what it was, other than some nod to pointless standards.

You can take this pretty far. Imagine demands that you use the same syntax highlighting colors as your peers or that you keep your file system organized in the same way as everyone else. What does this have to do with the code you’re producing? Who knows…

It might seem like the kind of thing where you should just indulge the harmless control freak driving it, or the group that dreams it up as a unit, but this runs the risk of birthing a toxic culture. With everything, however inconsequential, homogenized, there is no room for creative thinkers to innovate with new approaches.

Make-Work Tasks

Another risk you run when coming up with standards is to create so-called standards that amount to codifying and mandating busy-work. I’ve made my evolving opinion of comments in code quite clear on a few occasions, and I consider them to be an excellent example. When you make “comment every method” a standard, you’re standardizing the procedure (mindlessly adding comments) and not the goal (clarity and communication).
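The result of such a standard is usually something like this (a made-up but representative example, with a hypothetical repository field)–a comment that restates the method signature and communicates nothing the code doesn’t already say:

/// <summary>
/// Gets the customer.
/// </summary>
/// <param name="id">The id.</param>
/// <returns>The customer.</returns>
public Customer GetCustomer(int id)
{
    return _customerRepository.GetCustomer(id);
}

The procedure (“every method has a comment”) is satisfied perfectly; the goal (clarity and communication) isn’t served at all.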

There are plenty of other examples one might dream up. The silly mandate of “sort and organize usings” that I blogged about some time back comes to mind. This is another example of standardizing pointless make-work tasks that provide no substantive benefit. In all cases, the problem is that you’re not only asking developers to engage in brainless busy-work–you’re codifying it as an official mandate.

Getting Too Specific

Another source of issues that I’ve seen in the establishment of standards is a tendency to get too specific. “What sort of convention should we use when we declare a generic interface below an enumeration inside of a nested class?” Really? Does that come up often enough that it’s important for everyone to get on the same page about how to approach it?

I recognize the human desire for set closure; we don’t like it when a dresser is missing a drawer or when we haven’t collected the whole set, but sometimes you’ve just got to let it go. We’re not the IRS–it’s going to be alright if there are contingencies that we haven’t covered and oddball loopholes that we haven’t addressed.

Missing the Point

For me, this is the biggest one. Usually standards discussions are about superficial programming concerns rather than substantive ones, and that’s unfortunate. It’s the aforementioned camel-versus-Pascal-case wars, or whether to use brackets and which kinds. To var or not to var? Should constants be all caps? If an interface is in a forest and doesn’t have an “I” in front of its name, is it still an interface?

I understand the benefit of consistency in naming, casing, and other syntactic considerations. I really do, in spite of my tendency to be dismissive and iconoclastic on this front when discussing them. But, first off, let’s not pretend that there really is a right way with these things–there’s just the way that you’re used to doing them. And, more importantly, let’s not pretend that this is really all that important in the grand scheme of things.

We use consistent casing and naming so that a reader of the code can tell at a glance whether something is a field or a local variable or whether something is a method or a property or a constant. It’s really about promoting readability, which, in turn, is about maximizing maintainability. But you know what’s much harder on maintainability than Jones’s great constant casing blunder of 2010 where he forgot to use ALL CAPS? Writing bad code.

If you’re banging out behemoth methods with control statements eight deep, all of the camel case in the world isn’t going to make your code readable. A standard mandating that all such methods be prepended with “yuck” might help, but the real thing that you need is some standards about writing clean code. Keeping methods and classes small and focused, principles like DRY and SOLID, and other good design principles are much more important standards to which to aspire, but they’re often less concrete and harder to enforce. It’s much easier and more rote for a code reviewer to look for casing issues or missing comments than to analyze code for good software practice and object-oriented design. The latter is often less cut-and-dry and more a matter of degrees, so it’s frequently glossed over in favor of more tangible, simple things. Problem is, those tangible, simple things really aren’t all that important to the health of your applications and projects over the long haul.

It’s All Just Premature Optimization

The common thread here is that all of these standards anti-patterns result from solving non-existent problems. If you have some collection of half-baked standards at your company that go on for some pages and then say, “after that, follow the Microsoft standards,” imagine how they came about. I bet a few of the group’s original engineers or most senior people had a conversation that went something like, “We should probably have some standards.” “Yeah, I guess… but why now?” “I dunno… I think it’s, like, what you’re supposed to do.”

I suspect that if you did a survey, a lot more standards documents have started with conversations like that than with conversations about hours lost to maintenance and difficulty reading code. They are born out of cargo-cult practice rather than a necessity to solve some problem. Philosophically, they start as solutions in search of a problem rather than solutions to actual problems.

The situation is complicated by the fact that adoption of certain standards may have solved real problems in the past for developers on the team, and they’re simply doing the smart thing and carrying their knowledge forward. The trouble is that not all projects face the same problems. When discussing approaches, start with abstract and general abiding principles like SOLID and DRY and take it from there. If half of your team uses camel case and the other half Pascal and it’s causing communication and maintenance difficulties, flip a coin and set a standard. Repeat as necessary with other standards to keep the project moving and humming. But don’t make them up just for the sake of doing so. You wouldn’t start writing random code that may never solve any actual problem, so why create a standard that way?
